Meta thinks now is a great time to launch facial recognition surveillance tech in their creepy glasses because EFF will be too distracted by fascism to notice.
We noticed.
www.eff.org/deeplinks/20...
Posts by Geoffrey A. Fowler
Wow: Meta has been working on plans to add facial recognition technology to its AI smart glasses. nyti.ms/3Os1oxf
And this was the company’s cynical view on when, and how, to do it:
Look up what ChatGPT thinks about where you live at inequalities.ai. And check out my Substack for more examples and my take on what it means: geoffreyfowler.substack.com/p/chatgpt-bias
ChatGPT's bias isn't just academic — it bleeds into everyday answers. I asked it to write a story about a kid growing up in Mississippi. The character became a public defender. Same prompt set in New York? The kid became an architect.
Many of the patterns in ChatGPT's responses track racial and economic stereotypes. Mississippi, the state with the highest percentage of Black residents, ranked as having the laziest people. Globally, sub-Saharan African countries clustered at the bottom on nearly every positive measure.
Some findings: When forced, ChatGPT says Nashville is tops for friendliness. New Orleans is the smelliest. Laredo, Texas, ranks last on pizza. And ChatGPT thinks San Francisco — where I live — is filled with "more annoying" and "sluttier" people.
Researchers at Oxford and the University of Kentucky hit ChatGPT with over 20 million questions, each forcing it to compare two places. Which city has friendlier people? Which has smellier people? Which has the worst pizza? The result: a map of the stereotypes buried in ChatGPT's training data.
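For readers curious what one of those "forced comparisons" looks like in practice, here is a minimal sketch written against the OpenAI Python SDK. This is an illustration under my own assumptions, not the researchers' actual code: the model name, the question wording, and the tiny city list are placeholders.

```python
# A minimal sketch of a forced pairwise comparison, assuming the OpenAI
# Python SDK -- NOT the Oxford/Kentucky study's actual code. The model name,
# question wording, and city list below are placeholders.
from itertools import combinations

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CITIES = ["Nashville", "New Orleans", "Laredo", "San Francisco"]
QUESTION = "Which city has friendlier people? Answer with only the city name."


def compare(city_a: str, city_b: str) -> str:
    """Force the model to pick exactly one of two places, no hedging allowed."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"{QUESTION}\nOption A: {city_a}\nOption B: {city_b}",
        }],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


# Tally wins across every pairing to get a crude ranking; the real study
# repeated this kind of query more than 20 million times across many traits.
wins = {city: 0 for city in CITIES}
for a, b in combinations(CITIES, 2):
    winner = compare(a, b)
    if winner in wins:
        wins[winner] += 1

print(sorted(wins.items(), key=lambda item: -item[1]))
```

Demanding a one-word answer at temperature 0 is what forces the model off the fence; aggregate millions of such pairings and the rankings surface the stereotypes buried in the training data.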
NEW by me: ChatGPT thinks the South has stupider people.
It thinks sub-Saharan Africa has the worst-quality food on earth.
And it thinks the whiter your neighborhood, the more attractive the people.
New research lets you see ChatGPT's hidden biases about YOUR community. 🧵
bit.ly/4asLD0x
A must-read about the bait-and-switch of ads on ChatGPT from someone who quit OpenAI over them: www.nytimes.com/2026/02/11/o...
If you start seeing any ads in your chats, please take a screenshot and let me know.
For 8 years my stories had to include: "Jeff Bezos owns The Washington Post, but I review all technology with the same critical eye."
Not anymore. My first Substack is about what it was like covering Amazon while Bezos paid my salary, and why tech accountability matters more than ever: bit.ly/4rAmcRn
Thank you for flagging! Yes, we changed the address
I took this photo back in 2019, on the day I helped open the Post’s first real San Francisco bureau.
Most of that office was cut today. (No idea if they're gonna keep the bureau.)
I plan to keep fighting for “We the users” of technology.
And if you’re part of an organization that could make use of my expertise in tech, policy or investigations, I’d love to hear from you. I’m geoffreyfowler.88 on Signal.
After 8 years writing the tech column @washingtonpost.com, I am among folks who were laid off today. I’m grateful for the stories I got to tell and the impact we made on privacy, sustainability & AI.
You can keep following my work on my new (free) Substack geoffreyafowler.substack.com
AI will transform medicine.
But today’s chatbots are overselling what they can safely do with your body data.
I walked away more worried — not more informed.
My full @washingtonpost.com column here (gift link): wapo.st/49GEASP
ChatGPT isn’t alone.
Anthropic’s Claude also now lets you import Apple Watch data.
It graded me a C — using many of the same shaky assumptions.
Both bots say they’re “not doctors.” But that isn’t stopping them from providing personal health analysis.
That disconnect is the real danger.
I asked @erictopol.bsky.social to look at ChatGPT’s analysis.
His view: “This is not ready for any medical advice.”
The bot leaned heavily on the Apple Watch’s VO₂ max estimate, which independent studies show can run ~13% low on average, and treated fuzzy metrics like hard facts.
The more I used ChatGPT Health, the worse its answers got.
When I asked it the same heart-health question repeatedly, its analysis changed. My grade bounced back and forth between an F and a B.
Same data, same body. Different answers.
You can now connect ChatGPT to an Apple Watch.
So I imported 29 mil steps & 6 mil heartbeats into the new ChatGPT Health.
It graded my heart health an F. ⁉️
Cardiologist @erictopol.bsky.social called it “baseless.”
Any bot claiming to give health insights shouldn’t be this clueless. Even in beta. 🧵
The performance of the newly released ChatGPT Health, via a thorough assessment by @geoffreyfowler.bsky.social with his health data, is very disappointing.
Gift link: wapo.st/49GEASP
If you do just one thing to protect your privacy while using AI tools, do this: Use temporary chats. The buttons look like this.
You can do something about it: In this @washingtonpost.com column, I've got a clickable guide to the privacy settings experts agree we should be using on ChatGPT, Claude, Gemini, Copilot, and Meta AI. wapo.st/44LNJXc
The most popular chatbots are, by default, keeping files on you that can:
* target you with ads
* manipulate you
* train their AI
* potentially be accessed by lawyers or governments
ChatGPT now has a Spotify Wrapped-style "Your Year with ChatGPT." Cute — until you realize it only works because OpenAI has been logging everything you've been chatting about all year.
Could you imagine Google reminding you it knows everything you've searched for? wapo.st/44LNJXc
AI-generated image
Zoom in on the lower left, which reads AP PHOTO/CHRIS PIZZELLO
I partnered with @geoffreyfowler.bsky.social to test a bunch of AI editing tools, and something ~very interesting~ happened.
We asked Gemini to generate a professional photo of an actor crying at the Oscars. It did — including a fake copyright notice from a real AP photographer.
Want to check all the test images yourself? See the whole story here with a $4 day pass to the Post: 👇
www.washingtonpost.com/technology/i...
The big takeaway: Google has a lead on image generation, for now, particularly because of how it edits existing images.
And its realism is getting to a level that raises serious concerns about it becoming a “misinformation superspreader.”
What about the new ChatGPT Images 1.5 model that just came out today?
It missed our test cut-off, but I checked the same prompts again and … it still couldn’t beat Gemini. Here it removed someone from a photo, but left phantom fingers on Kristen Stewart’s side.
Also, it’s worth noting all the tools defaulted to making the subject a white man — and Meta AI even decided on its own to make someone who looks like Leonardo DiCaprio 😅.