The AI industry doesn’t have good tools for measuring reliability, or even a good definition of reliability. @sayash.bsky.social and @randomwalker.bsky.social seek to define reliability in a functionally useful way. www.normaltech.ai/p/new-paper-...
Posts by Arvind Narayanan
A new paper by @sayash.bsky.social and @randomwalker.bsky.social examines what “reliability” means in an AI context. They propose consistency, robustness, calibration, and safety, and they define these in operationally useful ways. A worthy read! www.normaltech.ai/p/new-paper-...
Panel 1: Text: “Imagine an alternate universe in which people don’t have words for different forms of transportation, only the collective noun ‘vehicle’.” Illustration: a stick figure stands next to a much more detailed motorcycle, with a speech bubble saying “Woah! Sweet vehicle!” Panel 2: Text: “They use that word to refer to: cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B.” Illustration: A car, a school bus, a bicycle, and a space shuttle, all with a stamp that says “vehicle” on them.
Panel 1: Text: “Conversations in this world are confusing.” Illustration: A speech bubble coming from the left: “Can you drive a vehicle?”, a speech bubble coming from the right: “Definitely!”, an illustration of a car crashed into a tree. Left speech bubble: “I thought you said you could drive!” Right speech bubble: “I can! I’m just used to ones with two wheels!” Panel 2: Text: “There are furious debates about whether or not vehicles are environmentally friendly… even though no one realizes that one side is talking about bikes and the other is talking about trucks.” Illustration: Left speech bubble: “Vehicles produce so much pollution!” Right speech bubble: “That’s an exaggeration! They are actually very green!”
Panel 1: Text: “There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster, so people call their car (‘car’ is crossed out and replaced with ‘vehicle’) dealer to ask when faster models will be available.” Illustration: A TV news report with a picture of a rocket ship and a chyron saying “Breaking: Vehicles reach 1000 mph!”. Below that is a drawing of two stick figures talking at a car dealership. One says, “So I can take this to space, right?” Panel 2: Text: “Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector.” Illustration: A stick figure with a mean smile and a sparkle next to his eye pats a car that has plane wings taped to it. A speech bubble says, “Oh yeah! You can fly this baby across the ocean!”
Panel 1: Text: “Now replace the word “vehicle” with “artificial intelligence” and we have a pretty good descriptor of the world we live in.” Illustration: One crowd of people says “AI is bad for the environment!” Underneath them is a large box labeled “Size of AI people are concerned about”; another crowd of people says “AI is used for climate research!” Underneath them is a much smaller box saying “Size of AI used for climate research”. In the foreground a person watches the debate with several question marks above their head. Panel 2: Credits. “Text from AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor. Art by Ayla Taylor. www.aylataylor.com”
A silly little comic based on the opening section of AI Snake Oil by @randomwalker.bsky.social and @sayash.bsky.social.
Dr. Arvind Narayanan (@randomwalker.bsky.social) will be joining us as a Keynote Speaker at the Conference on Society-Centered AI 2026! Join us Feb 12-14 at Duke for industry and academic keynotes, research spotlight talks, and poster sessions. Learn more and register here: sites.duke.edu/scai/
New piece w/ James Evans in Science explores what we call 'science after science', an era where our ability to control nature may exceed our ability to understand it; a new struggle to sustain curiosity & understanding under AI's predictive dominance. #ai #science
www.science.org/doi/10.1126/...
Three schematic diagrams. The first illustrates selective publishing of internal research, the second selective causal focus, and the third selective access and funding for researchers.
1. We ( @jbakcoleman.bsky.social, @cailinmeister.bsky.social, @jevinwest.bsky.social, and I) have a new preprint up on the arXiv.
There we explore how social media companies and other online information technology firms are able to manipulate scientific research about the effects of their products.
On the latest episode of @capitalisnt.bsky.social, @randomwalker.bsky.social talks about why he believes the transformative impact of AI is overstated and how hype can get pushed faster than reality can deliver.
The "normal" framing is so key — @randomwalker.bsky.social gave us all such a useful way of articulating a valuable idea in this discussion. www.normaltech.ai/p/ai-as-norm...
📢📢 Call for federal employees for the AI Precepts in Washington, DC. Learn from experts Arvind Narayanan (@randomwalker.bsky.social), Mihir Kshirsagar, Peter Henderson (@peterhenderson.bsky.social), and Sayash Kapoor (@sayash.bsky.social). Deadline to apply: Fri, Oct. 3
mailchi.mp/princeton.ed...
Paperback cover of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor
From two of TIME’s 100 Most Influential People in AI, what you need to know about #AI—and how to defend yourself against bogus AI claims and products.
AI Snake Oil by @randomwalker.bsky.social and @sayash.bsky.social is now available in #paperback: press.princeton.edu/books/paperb...
Our #podcast series on Harry Frankfurt’s seminal work, On Bullshit continues with Arvind Narayanan who explores the subject of bullshit in #AI.
press.princeton.edu/ideas/the-tr...
@randomwalker.bsky.social @newbooksnetwork.bsky.social @calebzakarin.bsky.social
📣 Prof Arvind Narayanan (@randomwalker.bsky.social) is hiring a Princeton University undergrad for a Video Editor & Production Assistant this semester to help with his brand new YouTube channel @ArvindOnAI
Getting started right away so feel free to comment + share with students! Link to apply 👇
Even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it, AI Snake Oil’s @randomwalker.bsky.social tells EFF’s Cindy Cohn and @thejasonkelley.com on the new “How to Fix the Internet.”
Great @eff.org podcast with @randomwalker.bsky.social, touching on his AI as Normal Technology paper w/ @sayash.bsky.social for our @knightcolumbia.org AI & Democratic Freedoms project. Short 🧵 of a few other papers related to this podcast discussion:
www.eff.org/deeplinks/20...
Preach it, @randomwalker.bsky.social.
I've worked with so many nonprofits, and I think an organically-grown newsletter community is so, so valuable. It takes time, but it's worth it. substack.com/@aisnakeoil/...
What kind of AI governance do we need? Our new piece in @science.org answers this: we need policy grounded in evidence and built to generate more of it. Evidence-based policymaking is not a slogan—it’s a design challenge for democratic governance in the age of AI www.science.org/doi/10.1126/... 🧵
Note that the data collection ended right before ChatGPT was released, so my guess is that the percentages are no longer small.
Fabulous post by @randomwalker.bsky.social & Sayash raising the same concern many of us have about whether we're on the right track with how we're using AI for science. Everyone should read it, take a deep breath & think through the implications.
www.aisnakeoil.com/p/could-ai-s...
I’m reading a very well-written 2023 paper on social media recommender systems from @randomwalker.bsky.social. I had completely forgotten that in the 00s “neither Facebook nor Twitter had the ability to reshare or retweet posts in your feed.” What a huge shift!
knightcolumbia.org/content/unde...
We’re hiring at Princeton on AI and society, working with Arvind Narayanan or me depending on fit.
I think current AI developments are all a huge deal but am very unexcited by current state of the AGI and/or AI safety discourse.
Please share as you see fit.
puwebp.princeton.edu/AcadHire/app...
After consideration, I will post occasionally, but heavily censor what I share compared to other sites.
I tried making the transition, but talking about AI here is just really fraught in ways that are tough to mitigate and make it hard to have good discussions (the point of social!). Maybe it changes.
For @newyorker.com, Joshua Rothman spoke with @randomwalker.bsky.social and @sayash.bsky.social, authors of AI Snake Oil and a recently published paper “AI as Normal Technology”, which argues that practical obstacles will slow AI’s uses and potential: www.newyorker.com/culture/open...
"A hypothesis on the accelerating decline of reading:
* Broadly speaking, people read for pleasure/entertainment and for learning/obtaining information.
* Reading for pleasure has been declining for a while and is being replaced by videos (very sharply among young people). This trend will surely continue.
* Reading for obtaining information is getting intermediated by chatbots. We are in the very early stages of this shift, so I think people underappreciate the magnitude of what's coming. It's not just that AI is replacing traditional web search. Even when it comes to reading news articles, business documents, or scientific papers, the vision that tech companies are pushing on us is AI summarization + synthesis + Q&A.
* We don't have to accept this, but I predict that most people will. It's a tradeoff between speed/convenience and accuracy/depth of understanding — the same tradeoff that was once offered to us when it became possible to search the web to look up a quick fact as opposed to reading about the topic in depth in an encyclopedia.
* Just as most people in most cases prefer a shallow web search over deeper reading, most people in most cases will prefer AI-intermediated access to knowledge. Traditional reading won't disappear, but people will do it vastly less often, except in hobbyist reading communities and professions where traditional reading is needed.
* The decline of reading-for-pleasure (due to video) and reading-for-information (due to AI) will accelerate each other, as reading text without an intermediary will come to be seen as a chore.
* Personally, I find this sad. But while it's tempting to moralize all this, I think that's unproductive. Yelling at individuals to resist new media has been done for centuries and has never worked.
* Even if people individually rationally choose these tradeoffs, I think we collectively lose something; critical reading skills are arguably essential for a democracy. We need to figure out what to do about that."
clear, depressing set of observations from @randomwalker.bsky.social - "The decline of reading-for-pleasure (due to video) and reading-for-information (due to AI) will accelerate each other, as reading text without an intermediary will come to be seen as a chore."
New preprint with @jbakcoleman.bsky.social @lewan.bsky.social @randomwalker.bsky.social @orbenamy.bsky.social @lfoswaldo.bsky.social where we argue for a complex-system perspective to understand the causal effects of social media on society and for a triangulation of methods
arxiv.org/abs/2505.09254
I'm excited that I can finally share what I've been working on for the past 9 months:
The United Nations 2025 Human Development Report: "A matter of choice: People and possibilities in the age of AI" 🧵
hdr.undp.org/content/huma...
“AGI is not a milestone because it is not actionable. A company declaring it has achieved, or is about to achieve, AGI has no implications for how businesses should plan, what safety interventions we need, or how policymakers should react.”
@randomwalker.bsky.social
open.substack.com/pub/aisnakeo...
Hi, if you mean my conversation with @melaniemitchell.bsky.social I shared the transcript here: blog.citp.princeton.edu/2025/04/02/a...
Okay just started @randomwalker.bsky.social and @sayash.bsky.social's new essay and this is 🔥🔥🔥.
"Resilience as the overarching approach to catastrophic risk" -- yes thank you exactly this.
kfai-documents.s3.amazonaws.com/documents/c3...
text says "ML Reproducibility Challenge Princeton University, New Jersey, USA, August 21 2025"
We are hosting @reproml.org 2025 on Aug. 21. There will be invited talks, oral presentations, and poster sessions. Keynote speakers include @randomwalker.bsky.social, @soumithchintala.bsky.social, @jfrankle.com, @jessedodge.bsky.social, @stellaathena.bsky.social
Register now: bit.ly/4cP8vIq
In this clip from our event last week, @randomwalker.bsky.social describes how we can map out the landscape of AI along two dimensions: how well the AI tool works, and how harmful (or benign) it is.
Watch a full recording of the event: youtu.be/C3TqcUEFR58