I apologize! In truth I was probably responding more to a general class of arguments in the same ballpark that have left traces in my mind over time, rather than to yours specifically. Call it context loss :)
Posts by Gideon Lichfield
13/
Sure, it might have things you could call “consciousness,” “awareness,” or “experience,” but they’d be so different from ours that it’s fair to ask whether it would even make sense to use those words.
/end
12/
So for all these reasons, I don’t think an AI that matches our data-gathering capacity will be indistinguishable from us, however well it mimics our behavior. It’ll be more like an alien in a very effective disguise.
11/
Seventh: we can make copies of an AI. We can't (yet) copy humans. Imagine how differently you'd experience personhood and consciousness if you knew you could exist in multiple versions, all having different experiences, or could basically cheat death indefinitely.
10/
Could it be traumatized? I think yes; but its traumas, again, would be totally unlike ours. Perhaps it would be indifferent to physical violence (it can always be repaired) but emotionally scarred by attempts to jailbreak it?
9/
What would it fear? What would it desire? Could it feel something akin to shame, pride, or love? It might well have emotions we couldn’t even name, corresponding to internal states we have no equivalent of.
8/
I’m not saying a sufficiently complex AI couldn’t experience emotions. But if it could, its emotional landscape would, for all the reasons above, be totally alien.
7/
Sixth: emotions! When people make the "if it processes as much data as us, it's basically like us" argument, they're generally talking purely about cognitive capacity. This says nothing about where emotionality comes from.
6/
Fifth: there are poorly understood phenomena, like how our gut bacteria influence our minds or how pheromones alter our perception of other humans, that are unlikely to affect an AI.
5/
Fourth: an embodied AI as currently conceived is Cartesian-dualist. There's a sharp divide between the "brain" and the body feeding it data. We, on the other hand, have several nervous systems in our bodies, storing info and handling tasks with a degree of autonomy.
4/
Third: its cognition (if we can use that word) will presumably be based on processing information through a single algorithmic system. Ours is a mess of overlapping systems accreted by evolution over millions of years, each attending to different information, often in conflict with each other.
3/
Second: as a result, it lacks our most basic urges—sex, food. It might crave other things (electricity, data, company of humans or other AIs), but those drives will shape its behavior very differently from how our primal needs shape ours.
2/
First: embodied AI isn’t genetically programmed with the drive to survive and reproduce. Hopefully nobody would be stupid enough to give it equivalent programming (though I’m not entirely confident).
I often see this claim: we're just like AI, only with more data. Once AI is embodied (as Fei-Fei Li and others are working on) and can experience the world in as much detail as we do, the only way to argue it isn't conscious is to retreat into mysticism.
I think this is incredibly simplistic. 🧵
📣 Gideon Lichfield (@glichfield.bsky.social), Former Global Editorial Director of @wired.com, will deliver the media keynote address at the UC Berkeley Law AI Institute!
He joins more than 30 AI & legal experts who share how AI is affecting law & business.
👉Register: bit.ly/463CniC
“Out of touch” — this from the party whose only contender for mayor has been wearing the same red beret for 45 years
Perhaps I’ve missed it, but in all the talk about whether the attack on Iran has stopped its nuke program, I haven’t seen anyone point out that Trump played a huge role in creating the threat in the first place when he pulled out of the JCPOA
Can lawmaking keep up with the pace of our digital world?
Journalist @glichfield.bsky.social joins host David Zvi Kalman to discuss the challenges governments face in responding to rapid technological change, and how this is mirrored in organized religion on the Belief in the Future podcast.
This is in small font at the very bottom of the #NoKings website. It needs to be in large type at the very top.
Alternatively, “Thousands of Americans expected to take to the streets in defense of their neighbors and their democracy” www.nytimes.com/live/2025/06...
It's time. #NoKings futurepolis.substack.com/p/its-now-or...
There's a difference between laws saying how cars should be built and laws saying how fast people can drive them. The proposed moratorium on state-level AI laws would be the equivalent of banning states from having speed limits. My latest: www.techpolicy.press/why-both-sid...
My latest for Bloomberg: Trump’s AI deals with the UAE and Saudi Arabia may be good business in the short term but bad geopolitics in the long term (gift link)
www.bloomberg.com/opinion/arti...
The story of what DOGE is doing will henceforth be all about this: not about firing people or cutting supposed waste, but making US data privacy an even greater nightmare than it already was.
There have been a lot of good analyses recently about what the "real" purpose of DOGE is. But what did Elon Musk *think* it was? My latest, for Foreign Policy (gift link)
foreignpolicy.com/2025/05/14/e...
People who claim AGI is imminent mean a wide range of different things by it. For Bloomberg, I write that this AGI hype is obscuring our ability to talk intelligently about the big changes that are actually coming.
www.bloomberg.com/opinion/feat...
The term “artificial general intelligence” is being bandied about by some of tech’s smartest people, but nobody knows what it really means. @glichfield.bsky.social has some thoughts: www.bloomberg.com/opinion/feat...
Worth a read. I worked at The Economist (one of the lumbering Goliaths Seward mentions here) when it launched. Still remember the FOMO at the new thing, and how much I loved @glichfield.bsky.social's newsletters. What a terrible business, the news. But as Zach says, so many great people. RIP Quartz.
If you're in the SF Bay Area, come to City Lights tomorrow to hear me interview Ray Nayler about his latest book, Where the Axe is Buried, a near-future science fiction novel that's also a critique of both authoritarianism and Western technocratic government.
citylights.com/events/ray-n...
Truly you could not do better than Quartz for a case study of 2010s digital media. It was a microcosm of all the things. And yes, it was a great place to work, both because of the people and because of the culture of experimentation, which was like nowhere else I’ve ever worked.