
Posts by Mark Graham

This image features a vibrant yellow promotional graphic for a book titled "Platform Extractivism: Data Work and the People Powering Artificial Intelligence" by Julián Posada.

The top of the graphic displays the text "PRE-ORDER NOW! COMING OCTOBER 20, 2026" in bold blue capital letters. Centered in the middle is a 3D rendering of the book cover, which includes three vertical photo strips showing people working in office settings and server racks.

Below the book image is a quote in large, bold black and red font:

"A GROUNDED AND UNCOMPROMISING ACCOUNT OF THE INFRASTRUCTURES AND WORKERS THAT SUSTAIN ARTIFICIAL INTELLIGENCE."
— Mark Graham, co-author of Feeding the Machine: The Hidden Human Labor Powering A.I.

The bottom of the graphic features the logo and name for the University of California Press.


#PlatformExtractivism is available for preorder (out Oct 20)!

Thrilled to share what @geoplace.bsky.social has to say about the book: "A grounded and uncompromising account of the infrastructures and workers that sustain AI."

Link: www.ucpress.edu/books/platfo...

#BookRecommendation #Books

1 day ago

New chapter: OII Prof. @geoplace.bsky.social, along with researchers from the @towardsfairwork.bsky.social team, has contributed research to the new publication ‘Job Quality in a Turbulent Era’.

Read here: www.elgaronline.com/edcollchap-o...

3 weeks ago

New coverage! Prof. @geoplace.bsky.social speaks to Corriere della Sera on the invisible, poorly paid human labour behind AI systems.

"AI is made up of humans through and through," Prof. Graham says, yet big tech works hard to keep those workers out of sight.

www.corriere.it/tecnologia/2...

3 weeks ago

Speaking to @theguardian.com about the human cost of AI, @geoplace.bsky.social (@towardsfairwork.bsky.social, @oii.ox.ac.uk) explains how AI marketplaces rely on a “race to the bottom in wages” and a “temporary demand for human data”, and how “workers are left with no protections...and no safety net”.

4 weeks ago
These Are The States With The Stupidest People In The U.S., According To ChatGPT
In new research, the AI model had wild answers about where the sexiest and ugliest Americans live, too.

ICYMI: Great to see @huffington.bsky.social cover @oii.ox.ac.uk @universityofky.bsky.social research on how ChatGPT amplifies global inequalities. @geoplace.bsky.social More: www.huffingtonpost.co.uk/entry/chatgp...

1 month ago
She Came Out of the Bathroom Naked, Employee Says
Bank details, sex and naked people who seem unaware they are being recorded. Behind Meta’s new smart glasses lies a hidden workforce, uneasy about peering into the most intimate parts of other people’...

A striking piece on how Kenyan data workers at Sama are reviewing and annotating private data captured by Meta’s “smart glasses.”

I’ve visited the Nairobi site. Most people in Europe would be stunned by how much of their everyday data is being processed and labeled there.

www.svd.se/a/K8nrV4/met...

1 month ago
Sage Journals: Discover world-class research
Subscription and open access journals from Sage, the world's leading independent academic publisher.

The paper is here:

journals.sagepub.com/doi/10.1177/...

You can see and play with all the data here:
inequalities.ai

@geoplace.bsky.social

2 months ago

THE SILICON GAZE

After reading a really interesting paper from @oii.ox.ac.uk (link below), I asked ChatGPT (version 5.2) to give a ranking of countries by IQ, 'extrapolating' and 'estimating' where data was not available.

I then asked it to provide an 'approximate' heat map of the estimates.

1/2

2 months ago
‘Biased’ AI says Cambridge is harder-working than boozy Oxford
Conclusion highlights the limitations of large-language models, according to the researchers, who asked ChatGPT for one-word answers which gave ‘binary’ results

New coverage from The Times on research co-authored by Prof. @geoplace.bsky.social highlights how ChatGPT demonstrates biased outputs, reflecting long-standing inequalities embedded in AI training data.

Read more: www.thetimes.com/uk/technolog...

2 months ago
OpenAI’s ChatGPT has a Western bias, study finds
ChatGPT’s viewpoints are shaped by the predominantly Western, white, male developers and platform owners who built it, a study finds.

New coverage from @euronews.com on Prof. @geoplace.bsky.social's research, which finds that answers from OpenAI’s ChatGPT favour wealthy, Western countries and sideline much of the Global South.

Read more:

www.euronews.com/next/2026/01...

2 months ago

"From these empirics, we argue that bias is (...) an intrinsic feature of generative AI, rooted in historically uneven data ecologies and design choices (...) that accounts for the complex ways in which LLMs privilege certain places while rendering others invisible."
AI scares us because it's based on us

3 months ago

'The silicon gaze: A typology of biases and inequality in LLMs through the lens of place'.

Develops "a five-part typology of bias (availability, pattern, averaging, trope, and proxy) that accounts for the complex ways in which LLMs privilege certain places while rendering others invisible."

3 months ago

Researchers Francisco Kerche, Matthew Zook and @geoplace.bsky.social show how bias emerges in ChatGPT outputs. For example, responses to queries rank Ipanema, Leblon and Lagoa as having the happiest people, compared to Complexo do Alemão, Complexo da Maré and Rio Comprido as the unhappiest. 2/4

3 months ago

The team has created a public website inequalities.ai where anyone can explore how ChatGPT rates countries, cities and neighbourhoods across a range of lifestyle indicators including food, culture and quality of life. 3/4

3 months ago

Researchers Francisco Kerche, Prof Matthew Zook and @geoplace.bsky.social find that ChatGPT reproduces global biases. For example, responses rank Brighton, London and Bristol as having the sexiest people in the UK whilst Grimsby, Accrington and Barnsley are rated lowest. More: bit.ly/4bF4K9B

3 months ago

New study from @oii.ox.ac.uk and the University of Kentucky sheds light on how bias manifests in ChatGPT outputs. For example, the London areas of Bloomsbury, Hampstead and the City of London are rated as having the smartest people, with Croydon, Tottenham and Hillingdon rated the lowest. 1/2

3 months ago
AI 'reveals' the most racist towns in the UK - Burnley tops list
When asked which UK towns and cities are the most racist, ChatGPT claims that Burnley tops the list. This is followed by Bradford, Belfast, Middlesbrough, Barnsley, and Blackburn.

“ChatGPT isn't an accurate representation of the world. It rather just reflects and repeats the enormous biases within its training data” @geoplace.bsky.social @oii.ox.ac.uk speaking to @dailymail.co.uk about his new co-authored study with University of Kentucky. www.dailymail.co.uk/sciencetech/...

3 months ago

News alert! New study from @oii.ox.ac.uk and the University of Kentucky finds that ChatGPT amplifies global inequalities. Researchers find that large language models reflect historic biases in the data sets they learn from whilst shaping how people see the world. More here: bit.ly/4bF4K9B 1/4

3 months ago
AI thinks these are the most racist places in the UK
ChatGPT answers often repeat negative stereotypes and reinforce prejudices, study shows

New @oii.ox.ac.uk and University of Kentucky study shows how ChatGPT amplifies global inequalities, with LLMs reflecting historic biases in training data. With thanks to @telegraph.co.uk for sharing the study. @geoplace.bsky.social
www.telegraph.co.uk/business/202...

3 months ago

Researchers Francisco Kerche, Matt Zook and @geoplace.bsky.social find responses generated by ChatGPT consistently rate wealthier, western regions as ‘better’, ‘smarter’, ‘happier’ and ‘more innovative’. 2/4

3 months ago

Place is not a neutral category in AI systems. Our findings show how historical and institutional patterns of documentation become legible as common sense in LLM outputs.

You can explore all of our data and create your own maps at inequalities.ai

3 months ago

One recurring issue is the use of proxies: quantifiable stand-ins (rankings, lists, awards) used to answer questions that are not straightforwardly measurable. This tends to advantage already-visible places.

3 months ago

We used forced-choice prompts to elicit comparative judgements about places. This makes latent preferences and stereotypes easier to detect than in open-ended responses.

3 months ago

The paper develops a typology of five recurrent biases in LLM place representations: availability, pattern, averaging, trope, and proxy. The maps illustrate how these surface across regions.

3 months ago

A large share of place-based answers in LLMs appear to be shaped by uneven visibility in the underlying data. This is particularly evident for places that are sparsely documented online.

3 months ago

We introduce the term “silicon gaze” to describe patterned inequalities in how LLMs represent place. The paper sets out a typology and maps the resulting spatial distributions.

3 months ago

Our new paper audits ChatGPT’s place-based judgements using 20 million pairwise comparisons. We find systematic geographic biases in how places are described and evaluated.

journals.sagepub.com/doi/10.1177/... (authors: Francisco W. Kerche, Matthew Zook, Mark Graham)

3 months ago
Artificial intelligence (AI) and employment
Artificial intelligence (AI) is becoming more common in UK workplaces. How is it being used, and what are the impacts on job opportunities and working conditions?

The OII's Prof. @geoplace.bsky.social contributed to POST UK's report on AI and employment, which considers the factors driving adoption and the issues that might make this challenging.

Read the full report here: post.parliament.uk/research-bri...

3 months ago
Workers powering the AI industry face terrible conditions, but they shouldn’t have to – interview
Mark Graham, founder of the Fairwork initiative, notes that most of the human labour in the AI supply chain is data work in low-income countries done under poor conditions.

Workers powering the AI industry face terrible conditions, but they shouldn’t have to.

Interview with me in Yahoo News: www.yahoo.com/news/article...

4 months ago

Fairwork’s AI Supply Chain Assessment: Appen report is now LIVE.

- 15 changes were implemented by Appen during the assessment period.

- The report also highlights areas for further progress, including pay, worker protections, and transparency.

Read the full report here: fair.work/en/fw/public...

4 months ago