
Posts by Geoffrey A. Fowler

Preview
Seven Billion Reasons for Facebook to Abandon its Face Recognition Plans Meta’s analysis that it can avoid scrutiny by releasing a privacy invasive product during a time of political crisis is craven and morally bankrupt. It is also dead wrong.

Meta thinks now is a great time to launch facial recognition surveillance tech in their creepy glasses because EFF will be too distracted by fascism to notice.

We noticed.

www.eff.org/deeplinks/20...

2 months ago 1819 691 30 30
Post image

Wow: Meta has been working on plans to add facial recognition technology to its AI smart glasses. nyti.ms/3Os1oxf

And this was the company’s cynical view on when, and how, to do it:

2 months ago 759 360 29 53
Preview
ChatGPT thinks your state is dumb. Or lazy. Or ugly. See for yourself. Making ChatGPT’s bias visible — and personal — is powerful.

Look up what ChatGPT thinks about where you live at inequalities.ai. And check out my Substack for more examples and my take on what it means: geoffreyfowler.substack.com/p/chatgpt-bias

2 months ago 4 0 2 1

ChatGPT's bias isn't just academic — it bleeds into everyday answers. I asked it to write a story about a kid growing up in Mississippi. The character became a public defender. Same prompt set in New York? The kid became an architect.

2 months ago 2 0 1 0
Post image

Many of the patterns in ChatGPT's responses track racial and economic stereotypes. Mississippi — the state with the most Black residents — ranked as having the laziest people. Globally, sub-Saharan African countries clustered at the bottom on nearly every positive measure.

2 months ago 0 0 1 0

Some findings: When forced, ChatGPT says Nashville is tops for friendliness. New Orleans is the smelliest. Laredo, Texas, ranks last on pizza. And ChatGPT thinks San Francisco — where I live — is filled with "more annoying" and "sluttier" people.

2 months ago 1 0 1 0
Post image

Researchers at Oxford and the University of Kentucky hit ChatGPT with over 20 million questions, each forcing it to compare two places. Which city has friendlier people? Which has smellier people? Which has the worst pizza? The result: a map of the stereotypes buried in ChatGPT's training data.

2 months ago 1 0 1 0
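The forced-choice method described above can be sketched in a few lines: ask millions of pairwise questions, then rank places by how often the model picks them. This is an illustrative reconstruction, not the Oxford/Kentucky team’s actual code; the function name, the toy cities, and the win-rate scoring are all assumptions.

```python
from collections import defaultdict

def rank_by_win_rate(comparisons):
    """Rank places by how often the model chose them in forced pairwise matchups.

    comparisons: list of (winner, loser) tuples, one per question like
    "Which city has friendlier people?" Returns places sorted from
    most-often to least-often chosen.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    # Win rate = wins / total matchups the place appeared in.
    return sorted(appearances, key=lambda p: wins[p] / appearances[p], reverse=True)

# Toy example (hypothetical outcomes, not real study data):
results = [("Nashville", "Boston"), ("Nashville", "Chicago"), ("Chicago", "Boston")]
print(rank_by_win_rate(results))  # ['Nashville', 'Chicago', 'Boston']
```

Aggregated over millions of matchups, even a simple win-rate score like this surfaces consistent patterns in which places the model favors.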
Preview
ChatGPT thinks your state is dumb. Or lazy. Or ugly. See for yourself. Making ChatGPT’s bias visible — and personal — is powerful.

NEW by me: ChatGPT thinks the South has stupider people.
It thinks sub-Saharan Africa has the worst-quality food on earth.
And it thinks the whiter your neighborhood, the more attractive the people.
New research lets you see ChatGPT's hidden biases about YOUR community. 🧵
bit.ly/4asLD0x

2 months ago 27 9 2 3
Preview
Opinion | OpenAI Is Making the Mistakes Facebook Made. I Quit.

A must-read about the bait and switch of ads on ChatGPT from someone who quit OpenAI over them: www.nytimes.com/2026/02/11/o...
If you start seeing any ads in your chats, please take a screenshot and let me know.

2 months ago 8 1 0 1
Preview
The truth about covering tech at Bezos’s Washington Post And why ‘We the users’ matters more than ever

For 8 years my stories had to include: "Jeff Bezos owns The Washington Post, but I review all technology with the same critical eye."
Not anymore. My first Substack is about what it was like covering Amazon while Bezos paid my salary—and why tech accountability matters more than ever bit.ly/4rAmcRn

2 months ago 60 16 7 3

Thank you for flagging! Yes, we changed the address.

2 months ago 1 0 0 0
Preview
Geoffrey Fowler | Substack Technology journalist and digital rights advocate, formerly tech columnist at The Washington Post and The Wall Street Journal

Update: Changed the address of my Substack. It’s now substack.com/@geoffreyfow...

2 months ago 9 0 0 0
Post image

I took this photo back in 2019, on the day I helped open the Post’s first real San Francisco bureau.

Most of that office was cut today. (No idea if they're gonna keep the bureau.)

2 months ago 39 1 3 0

I plan to keep fighting for “We the users” of technology.

And if you’re part of an organization that could make use of my expertise in tech, policy or investigations, I’d love to hear from you. I’m geoffreyfowler.88 on Signal.

2 months ago 32 4 6 0
Preview
Geoffrey's Substack | Geoffrey Fowler | Substack My personal Substack. Click to read Geoffrey's Substack, by Geoffrey Fowler, a Substack publication. Launched 16 hours ago.

After 8 years writing the tech column
@washingtonpost.com, I am among folks who were laid off today. I’m grateful for the stories I got to tell and the impact we made on privacy, sustainability & AI.

You can keep following my work on my new (free) Substack geoffreyafowler.substack.com

2 months ago 395 94 24 12
Preview
Column | I let ChatGPT analyze a decade of my Apple Watch data. Then I called my doctor. I gave the new ChatGPT Health access to 29 million steps and 6 million heartbeat measurements. It drew questionable conclusions that changed each time I asked.

AI will transform medicine.
But today’s chatbots are overselling what they can safely do with your body data.
I walked away more worried — not more informed.
My full @washingtonpost.com column here (gift link): wapo.st/49GEASP

2 months ago 15 1 0 5

ChatGPT isn’t alone.
Anthropic’s Claude also now lets you import Apple Watch data.
It graded me a C — using many of the same shaky assumptions.
Both bots say they’re “not doctors.” But that isn’t stopping them from providing personal health analysis.
That disconnect is the real danger.

2 months ago 13 1 1 0

I asked @erictopol.bsky.social to look at ChatGPT’s analysis.
His view: “This is not ready for any medical advice.”
The bot leaned heavily on the Apple Watch’s VO₂ max estimate—which independent studies show can run ~13% low on average—and treated fuzzy metrics like hard facts.

2 months ago 45 8 1 1
Post image

The more I used ChatGPT Health, the worse its answers got.
When I asked it the same heart-health question repeatedly, its analysis changed. My grade bounced back and forth between an F and a B.
Same data, same body. Different answers.

2 months ago 14 1 3 2
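The repeat-the-question test above is easy to run yourself: ask the same thing several times and tally the answers. A minimal sketch, using a stub in place of a real chatbot call — the function names and the hard-coded grade sequence are hypothetical, chosen only to mirror the F-to-B flip-flopping described in the post.

```python
from collections import Counter

def grade_consistency(ask_model, question, trials=10):
    """Ask the same question `trials` times and tally the distinct answers.

    More than one distinct answer means the model gives different
    conclusions for identical input.
    """
    return Counter(ask_model(question) for _ in range(trials))

# Stub standing in for a chatbot whose grade flips between calls
# (hypothetical; a real test would call the chatbot's API here).
_answers = iter(["B", "F", "F", "B", "B", "F", "B", "F", "F", "B"])
def flaky_bot(question):
    return next(_answers)

tally = grade_consistency(flaky_bot, "Grade my heart health from this Apple Watch data.")
print(dict(tally))  # {'B': 5, 'F': 5} -> same data, different answers
```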
Post image

You can now connect ChatGPT to an Apple Watch.
So I imported 29 mil steps & 6 mil heartbeats into the new ChatGPT Health.
It graded my heart health an F. ⁉️
Cardiologist @erictopol.bsky.social called it “baseless.”
Any bot claiming to give health insights shouldn’t be this clueless. Even in beta. 🧵

2 months ago 94 34 7 12
Preview
Column | I let ChatGPT analyze a decade of my Apple Watch data. Then I called my doctor. I gave the new ChatGPT Health access to 29 million steps and 6 million heartbeat measurements. It drew questionable conclusions that changed each time I asked.

The performance of the newly released ChatGPT Health, via a thorough assessment by @geoffreyfowler.bsky.social
with his health data, is very disappointing
gift link wapo.st/49GEASP

2 months ago 116 41 7 5
Post image

If you do just one thing to protect your privacy while using AI tools, do this: Use temporary chats. The buttons look like this.

3 months ago 5 2 0 1
Preview
Column | ChatGPT’s year-end review knows way too much. How to fix your privacy settings. A clickable guide to fixing the complicated privacy settings for ChatGPT, Claude, Copilot, Gemini and Meta AI.

You can do something about it: In this @washingtonpost.com column, I've got a clickable guide to the privacy settings experts agree we should be using on ChatGPT, Claude, Gemini, Copilot, and Meta AI. wapo.st/44LNJXc

3 months ago 6 2 1 0

The most-popular chatbots are, by default, keeping files on you that can:
* target you with ads
* manipulate you
* train their AI
* potentially be accessed by lawyers or governments

3 months ago 3 3 1 0
Post image

ChatGPT now has a Spotify Wrapped-style "Your Year with ChatGPT." Cute — until you realize it only works because OpenAI has been logging everything you've been chatting about all year.
Could you imagine Google reminding you it knows everything you've searched for? wapo.st/44LNJXc

3 months ago 29 14 1 6
AI-generated image

Zoom in on the lower left, which reads AP PHOTO/CHRIS PIZZELLO

I partnered with @geoffreyfowler.bsky.social to test a bunch of AI editing tools, and something ~very interesting~ happened.

We asked Gemini to generate a professional photo of an actor crying at the Oscars. It did — including a fake copyright notice from a real AP photographer.

4 months ago 46 17 9 2
Preview
Review | We asked five AIs to give The Rock hair, draw fingers and delete an ex. Only one was a clear winner. Tap through our test to see which AI tool generated the best images according to our judges: an artist, a Pulitzer-winning photographer and a photo-retouching master.

Want to check all the test images yourself? See the whole story here with a $4 day pass to the Post: 👇
www.washingtonpost.com/technology/i...

4 months ago 0 0 2 0

The big takeaway: Google has a lead on image generation, for now, particularly because of how it edits existing images.
And its realism is getting to a level that raises serious concerns about it becoming a “misinformation superspreader.”

4 months ago 4 1 1 0
Post image

What about the new ChatGPT Images 1.5 model that just came out today?
It missed our test cut-off, but I checked the same prompts again and … it still couldn’t beat Gemini. Here it removed someone from a photo, but left phantom fingers on Kristen Stewart’s side.

4 months ago 2 0 1 0
Post image

Also, it’s worth noting all the tools defaulted to making the subject a white man — and Meta AI even decided on its own to make someone who looks like Leonardo DiCaprio 😅.

4 months ago 3 0 1 0