
Posts by Gideon Lichfield

I apologize! In truth I was probably responding more to a general class of arguments in the same ballpark that have left traces in my mind over time, rather than to yours specifically. Call it context loss :)

1 month ago

13/
Sure, it might have things you could call “consciousness,” “awareness,” or “experience,” but they’d be so different from ours that it’s worth asking: would it even make sense to use those words?

/end

1 month ago

12/


So for all these reasons, I don’t think an AI that matches our data-gathering capacity will be indistinguishable from us, however well it mimics our behavior. It’ll be more like an alien in a very effective disguise.

1 month ago

11/


Seventh: we can make copies of an AI. We can't (yet) copy humans. Imagine how differently you'd experience personhood and consciousness if you knew you could exist in multiple versions, all having different experiences, or basically cheat death indefinitely.

1 month ago

10/

Could it be traumatized? I think yes; but its traumas, again, would be totally unlike ours. Perhaps it would be indifferent to physical violence (it can always be repaired) but emotionally scarred by attempts to jailbreak it?

1 month ago

9/

What would it fear? What would it desire? Could it feel something akin to shame, pride, or love? It might well have emotions we couldn’t even name, corresponding to internal states we have no equivalent of.

1 month ago

8/


I’m not saying a sufficiently complex AI couldn’t experience emotions. But if it could, its emotional landscape would, for all the reasons above, be totally alien.

1 month ago

7/


Sixth: emotions! When people make the "if it processes as much data as us, it's basically like us" argument, they're generally talking purely about cognitive capacity. This says nothing about where emotionality comes from.

1 month ago

6/


Fifth: there are poorly understood phenomena, like how our gut bacteria influence our minds or pheromones alter our perception of other humans, that are unlikely to affect an AI.

1 month ago

5/


Fourth: An embodied AI as currently conceived is Cartesian-dualist. There's a sharp divide between the "brain" and the body feeding it data. We on the other hand have several nervous systems in our bodies, storing info and handling tasks with a degree of autonomy.

1 month ago

4/


Third: its cognition (if we can use that word) will presumably be based on processing information through a single algorithmic system. Ours is a mess of overlapping systems accreted by evolution over millions of years, each attending to different information, often in conflict with each other.

1 month ago

3/


Second: as a result, it lacks our most basic urges—sex, food. It might crave other things (electricity, data, company of humans or other AIs), but those drives will shape its behavior very differently from how our primal needs shape ours.

1 month ago

2/

First: embodied AI isn’t genetically programmed with the drive to survive and reproduce. Hopefully nobody would be stupid enough to give it equivalent programming (though I’m not entirely confident).

1 month ago

I often see this claim: we're just like AI, only with more data. Once AI is embodied (as Fei-Fei Li and others are working on) and can experience the world in as much detail as we do, the only way to argue it isn't conscious is to retreat into mysticism.

I think this is incredibly simplistic. 🧵

1 month ago

📣 Gideon Lichfield (@glichfield.bsky.social), Former Global Editorial Director of @wired.com, will deliver the media keynote address at the UC Berkeley Law AI Institute!

He joins more than 30 AI & legal experts who share how AI is affecting law & business.

👉Register: bit.ly/463CniC

9 months ago

“Out of touch” — this from the party whose only contender for mayor has been wearing the same red beret for 45 years

9 months ago

Perhaps I’ve missed it, but in all the talk about whether the attack on Iran has stopped its nuke program, I haven’t seen anyone point out that Trump played a huge role in creating the threat in the first place when he pulled out of the JCPOA

9 months ago
Is Tech Too Fast to be Governed? with Gideon Lichfield (podcast) Journalist Gideon Lichfield joins host David Zvi Kalman in this episode of Belief in the Future podcast to explore how lawmaking can keep up with the fast…

Can lawmaking keep up with the pace of our digital world?

Journalist @glichfield.bsky.social joins host David Zvi Kalman to discuss the challenges governments face in responding to rapid technological change, and how this is mirrored in organized religion on the Belief in the Future podcast.

10 months ago

This is in small font at the very bottom of the #NoKings website. It needs to be in large type at the very top.

10 months ago

Alternatively, “Thousands of Americans expected to take to the streets in defense of their neighbors and their democracy” www.nytimes.com/live/2025/06...

10 months ago
We need a massive show of peaceful force Large and absolutely non-violent protests are the best way to prevent the militarization of the streets.

It's time. #NoKings futurepolis.substack.com/p/its-now-or...

10 months ago
Why Both Sides Are Right—and Wrong—About A Moratorium on State AI Laws | TechPolicy.Press Gideon Lichfield says there’s an argument for a moratorium—but a much narrower one than what Republicans propose.

There's a difference between laws saying how cars should be built and laws saying how fast people can drive them. The proposed moratorium on state-level AI laws would be the equivalent of banning states from having speed limits. My latest: www.techpolicy.press/why-both-sid...

10 months ago
Offshoring AI to the Middle East Could Hand China a Win The Gulf has the resources Trump needs to expand America’s AI prowess, but there’s no guarantee the Middle East will stay loyal to his cause.

My latest for Bloomberg: Trump’s AI deals with the UAE and Saudi Arabia may be good business in the short term but bad geopolitics in the long term (gift link)

www.bloomberg.com/opinion/arti...

11 months ago

The story of what DOGE is doing will henceforth be all about this: not about firing people or cutting supposed waste, but making US data privacy an even greater nightmare than it already was.

11 months ago
Elon Musk Was Donald Trump’s Useful Idiot It’s looking increasingly likely that the world’s richest man got played.

There have been a lot of good analyses recently about what the "real" purpose of DOGE is. But what did Elon Musk *think* it was? My latest, for Foreign Policy (gift link)

foreignpolicy.com/2025/05/14/e...

11 months ago
When Will AI Be Smarter Than Humans? Don’t Ask The term “artificial general intelligence” is being bandied about by some of tech’s smartest people, but nobody knows what it really means.

People who claim AGI is imminent mean a wide range of different things by it. For Bloomberg, I write that this AGI hype is obscuring our ability to talk intelligently about the big changes that are actually coming.

www.bloomberg.com/opinion/feat...

1 year ago
When Will AI Be Smarter Than Humans? Don’t Ask The term “artificial general intelligence” is being bandied about by some of tech’s smartest people, but nobody knows what it really means.

The term “artificial general intelligence” is being bandied about by some of tech’s smartest people, but nobody knows what it really means. @glichfield.bsky.social has some thoughts: www.bloomberg.com/opinion/feat...

1 year ago

Worth a read. I worked at The Economist (one of the lumbering Goliaths Seward mentions here) when it launched. Still remember the FOMO at the new thing, and how much I loved @glichfield.bsky.social's newsletters. What a terrible business, the news. But as Zach says, so many great people. RIP Quartz.

1 year ago

If you're in the SF Bay Area, come to City Lights tomorrow to hear me interview Ray Nayler about his latest book, Where the Axe is Buried, a near-future science fiction novel that's also a critique of both authoritarianism and Western technocratic government.

citylights.com/events/ray-n...

1 year ago

Truly you could not do better than Quartz for a case study of 2010s digital media. It was a microcosm of all the things. And yes, it was a great place to work, both because of the people and because of the culture of experimentation, which was like nowhere else I’ve ever worked.

1 year ago