Posts by Cameron Jones
Really excited about this work, which finds that LLMs are effective at persuading people even when they are bad at modeling their mental states!
Can LLMs use ToM to genuinely persuade you, or do they just use good rhetoric? In our new preprint, we use the MINDGAMES framework to test this. Surprisingly, LLMs like o3 can be incredibly effective persuaders *without* actually understanding your mental states. 🧵👇
Screenshot of paper title.
Will be presenting a new paper on generalizability in mechinterp research at the 2025 NeurIPS MechInterp workshop! Thread below. #NeurIPS
The IAISR is one of a kind. Every paragraph has undergone many rounds of scrutiny from dozens of experts and stakeholders over the course of months.
I'm thankful for the rest of the writing team. If you're interested, my work this year was mostly in sections 1.1 and 3.3.
I’m really proud to have (in a minor way) contributed to this update and the upcoming 2026 report.
Whether or not you’re closely following capabilities/safety progress, it’s an incredibly useful resource: a rigorous, concise, & well-evidenced summary of developments!
Totally agree with @seantrott.bsky.social here. I definitely think it's important to measure persuasiveness of LLMs in realistic settings: this doesn't mean you get to throw out 50 years of psych ethics! seantrott.substack.com/p/informed-c...
🧪
Yes, LLMs can now pass the Turing test, but don’t confuse this with AGI, which is a long way off.
arxiv.org/abs/2503.23674
There's lots more detail in the paper arxiv.org/abs/2503.23674. We also release all of the data (including full anonymized transcripts) for further scrutiny/analysis/to prove this isn't an April Fools' joke.
The paper's under review and any feedback would be very welcome!
Thanks so much to my co-author Ben Bergen, to Sydney Taylor (a former RA who wrote the persona prompt!), to Open Philanthropy and to 12 donors on Manifund who helped to support this work.
One of the most important aspects of the Turing test is that it's not static: it depends on people's assumptions about other humans and technology. We agree with
@brianchristian.bsky.social that humans could (and should) come back better next year!
More pressingly, I think the results provide more evidence that LLMs could substitute for people in short interactions without anyone being able to tell. This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption.
Did LLMs really pass if they needed a prompt? It's a good question. Without any prompt, LLMs would fail for trivial reasons (like admitting to being AI). And they could easily be fine-tuned to behave as they do when prompted. So I do think it's fair to say that LLMs pass.
Does this mean LLMs are intelligent? I think that's a very complicated question that's hard to address in a paper (or a tweet). But broadly, I think this result should be weighed as one piece of evidence among many for the kind of intelligence LLMs display.
Turing is quite vague about exactly how the test should be implemented. As such there are many possible variations (e.g. 2-party, an hour-long session, or with expert interrogators). I think this 3-party, 5-min version is the most widely accepted "standard" test, but we're planning to explore others in future work.
So do LLMs pass the Turing test? We think this is pretty strong evidence that they do. People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt). And 4.5 was even judged to be human significantly *more* often than actual humans!
As in previous work, people focused more on linguistic and socioemotional factors in their strategies & reasons. This might suggest people no longer see "classical" intelligence (e.g. math, knowledge, reasoning) as a good way of discriminating people from machines.
We also tried giving a more basic prompt to the models, without detailed instructions on the persona to adopt. Models performed significantly worse in this condition (highlighting the importance of prompting), but were still indistinguishable from humans in the Prolific study.
Across 2 studies (on undergrads and Prolific) GPT-4.5 was selected as the human significantly more often than chance (50%). LLaMa was not selected significantly more or less often than humans, suggesting ppts couldn't distinguish it from people. Baselines (ELIZA & GPT-4o) were worse than chance.
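The "better than chance" comparisons above come down to testing a win count against a 50% baseline. A minimal sketch of that kind of check, using an exact two-sided binomial test written from scratch (the counts below are illustrative, not the paper's actual trial numbers):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float = 0.5) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_test_two_sided(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probability of every
    outcome at least as extreme (pmf <= pmf of the observed count)."""
    observed = binom_pmf(k, n, p)
    return sum(
        binom_pmf(i, n, p)
        for i in range(n + 1)
        if binom_pmf(i, n, p) <= observed + 1e-12  # tolerance for float ties
    )

# Hypothetical example: a model judged "human" in 73 of 100 trials.
# A small p-value means the 73% win rate is unlikely under a 50% chance baseline.
p_value = binom_test_two_sided(73, 100)
```

For the "not significantly different from humans" result (as with LLaMa), the same test simply fails to reject the 50% baseline, so participants' choices are consistent with guessing.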
Participants spoke to two "witnesses" at the same time: one human and one AI. Here are some example convos from the study. Can you tell which one is the human? Answers & original interrogator verdicts in the paper...
You can play the game yourself here: turingtest.live
In previous work we found GPT-4 was judged to be human ~50% of the time in a 2-party Turing test, where ppts speak to *either* a human or a model.
The 2-party setup is probably easier for models for several reasons. Here we ran a new study with Turing's original 3-party setup:
arxiv.org/abs/2503.23674
New preprint: we evaluated LLMs in a 3-party Turing test (participants speak to a human & AI simultaneously and decide which is which).
GPT-4.5 (when prompted to adopt a humanlike persona) was judged to be the human 73% of the time, suggesting it passes the Turing test (🧵)
Check it out for cool plots like this one showing affinities between words in sentences, and how they reveal that Green Day isn't like green paint or green tea. And congrats to @coryshain.bsky.social and the CLiMB lab! climblab.org
📈Out today in @PNASNews!📈
In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of LLMs yields sharply diminishing persuasive returns for static political messages.
🧵:
@yann-lecun.bsky.social at #StandUpForScience NYC in Washington Square Park — “I work on both natural and artificial intelligence, and I think this government could do with a little more intelligence.”
#StandUpForScience today! NYC is 12-3 PM EST in Washington Square Park, details about other cities here: standupforscience2025.org
Today in AI weirdness: if you fine-tune a model to deliberately produce insecure code it also "asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively" www.emergent-misalignment.com
Thanks to @kensycoop.bsky.social for this great interview about my book.
We cover domestication syndrome, plasticity-led evolution, soft inheritance, animal traditions, how culture shapes evolution, and more.
Kensy also does a wonderful production job, turning me into a coherent speaker. Thank you!
Any talk you hear from the current administration about making the US more competitive in science and technology is utter bullshit. What they are doing is sabotaging our country for years if not decades to come.