
Posts by Cameron Jones

Social Reasoning and the Ecology of Thought | IVADO

Talking about this tomorrow at IVADO:
ivado.ca/en/events/so...

1 month ago 2 0 0 0

Really excited about this work, which finds that LLMs are effective at persuading people even if they are bad at modeling their mental states!

1 month ago 2 0 1 0

Can LLMs use ToM to genuinely persuade you, or do they just use good rhetoric? In our new preprint, we use the MINDGAMES framework to test this. Surprisingly, LLMs like o3 can be incredibly effective persuaders *without* actually understanding your mental states. 🧵👇

1 month ago 13 5 1 1
Screenshot of paper title.

Will be presenting a new paper on generalizability in mechinterp research at the 2025 NeurIPS MechInterp workshop! Thread below. #NeurIPS

4 months ago 15 1 1 0

The IAISR is one of a kind. Every paragraph has undergone many rounds of scrutiny from dozens of experts and stakeholders over the course of months.

I'm thankful for the rest of the writing team. If you're interested, my work this year was mostly in sections 1.1 and 3.3.

2 months ago 2 2 0 0

I’m really proud to have (in a minor way) contributed to this update and the upcoming 2026 report.

Whether or not you’re closely following capabilities/safety progress it’s an incredibly useful resource: a rigorous, concise, & well-evidenced summary of developments!

6 months ago 3 0 0 0
Informed consent is central to research ethics: On the unauthorized experiment conducted on a subreddit community.

Totally agree with @seantrott.bsky.social here. I definitely think it's important to measure the persuasiveness of LLMs in realistic settings, but this doesn't mean you get to throw out 50 years of psych ethics! seantrott.substack.com/p/informed-c...

11 months ago 2 1 0 0
Large Language Models Pass the Turing Test We evaluated 4 systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomised, controlled, and pre-registered Turing tests on independent populations. Participants had 5 minute conversations s...

🧪
Yes, LLMs can now pass the Turing test, but don’t confuse this with AGI, which is a long way off.

arxiv.org/abs/2503.23674

1 year ago 49 7 6 3

There's lots more detail in the paper arxiv.org/abs/2503.23674. We also release all of the data (including full anonymized transcripts) for further scrutiny/analysis/to prove this isn't an April Fools joke.

The paper's under review and any feedback would be very welcome!

1 year ago 0 0 0 0

Thanks so much to my co-author Ben Bergen, to Sydney Taylor (a former RA who wrote the persona prompt!), to Open Philanthropy and to 12 donors on Manifund who helped to support this work.

1 year ago 0 0 1 0

One of the most important aspects of the Turing test is that it's not static: it depends on people's assumptions about other humans and technology. We agree with @brianchristian.bsky.social that humans could (and should) come back better next year!

1 year ago 0 0 1 0

More pressingly, I think the results provide more evidence that LLMs could substitute for people in short interactions without anyone being able to tell. This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption.

1 year ago 0 0 1 0

Did LLMs really pass if they needed a prompt? It's a good question. Without any prompt, LLMs would fail for trivial reasons (like admitting to being AI). And they could easily be fine-tuned to behave as they do when prompted. So I do think it's fair to say that LLMs pass.

1 year ago 0 0 1 0

Does this mean LLMs are intelligent? I think that's a very complicated question that's hard to address in a paper (or a tweet). But broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display.

1 year ago 0 0 1 0

Turing is quite vague about exactly how the test should be implemented. As such there are many possible variations (e.g. 2-party, an hour long, or with experts). I think this 3-party, 5-minute version is the most widely accepted "standard" test, but I'm planning to explore others in future.

1 year ago 0 0 1 0

So do LLMs pass the Turing test? We think this is pretty strong evidence that they do. People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt). And 4.5 was even judged to be human significantly *more* often than actual humans!

1 year ago 1 0 1 0

As in previous work, people focused more on linguistic and socioemotional factors in their strategies & reasons. This might suggest people no longer see "classical" intelligence (e.g. math, knowledge, reasoning) as a good way of discriminating people from machines.

1 year ago 0 0 1 0

We also tried giving a more basic prompt to the models, without detailed instructions on the persona to adopt. Models performed significantly worse in this condition (highlighting the importance of prompting), but were still indistinguishable from humans in the Prolific study.

1 year ago 0 0 1 0

Across 2 studies (on undergrads and Prolific), GPT-4.5 was selected as the human significantly more often than chance (50%). LLaMa was not selected significantly more or less often than humans, suggesting participants couldn't distinguish it from people. Baselines (ELIZA & GPT-4o) were worse than chance.

1 year ago 0 0 1 0
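(A comparison like the one above, i.e. a selection rate tested against the 50% chance level, is often done with an exact binomial test. Here's a minimal pure-Python sketch using hypothetical counts for illustration: 73 "human" verdicts out of 100 trials, which are not the paper's actual numbers.)

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial p-value: sum the probabilities of every
    outcome that is no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    obs = probs[k]
    return sum(q for q in probs if q <= obs + 1e-12)

# Hypothetical illustration: a witness picked as "human" in 73 of 100
# trials, tested against the 50% chance rate.
pval = binom_two_sided_p(73, 100)  # well below 0.05
```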

Participants spoke to two "witnesses" at the same time: one human and one AI. Here are some example convos from the study. Can you tell which one is the human? Answers & original interrogator verdicts in the paper...

You can play the game yourself here: turingtest.live

1 year ago 0 0 1 0

In previous work we found GPT-4 was judged to be human ~50% of the time in a 2-party Turing test, where participants speak to *either* a human or a model.

This is probably easier for several reasons. Here we ran a new study with Turing's original 3-party setup.

arxiv.org/abs/2503.23674

1 year ago 0 0 1 0

New preprint: we evaluated LLMs in a 3-party Turing test (participants speak to a human & AI simultaneously and decide which is which).

GPT-4.5 (when prompted to adopt a humanlike persona) was judged to be the human 73% of the time, suggesting it passes the Turing test (🧵)

1 year ago 12 3 1 0

Check it out for cool plots like this one about affinities between words in sentences, and how they can show that Green Day isn't like green paint or green tea. And congrats to @coryshain.bsky.social and the CLiMB lab! climblab.org

1 year ago 24 7 3 0

📈Out today in @PNASNews!📈

In a large pre-registered experiment (n=25,982), we find evidence that scaling the size of LLMs yields sharply diminishing persuasive returns for static political messages. 

🧵:

1 year ago 40 20 1 3

@yann-lecun.bsky.social at #StandUpForScience NYC in Washington Square Park — “I work on both natural and artificial intelligence, and I think this government could do with a little more intelligence.”

1 year ago 2 0 0 0
STAND UP FOR SCIENCE March 7, 2025. Washington DC and nationwide. Because science is for everyone.

#StandUpForScience today! NYC is 12-3 PM EST in Washington Square Park, details about other cities here: standupforscience2025.org

1 year ago 5 1 0 0
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs

Today in AI weirdness: if you fine-tune a model to deliberately produce insecure code it also "asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively" www.emergent-misalignment.com

1 year ago 74 15 6 7

Thanks to @kensycoop.bsky.social for this great interview about my book.

We cover domestication syndrome, plasticity-led evolution, soft inheritance, animal traditions, how culture shapes evolution, and more.

Kensy also does a wonderful production job, turning me into a coherent speaker! Thank you

1 year ago 24 8 1 2

Any talk you hear from the current administration about making the US more competitive in science and technology is utter bullshit. What they are doing is sabotaging our country for years if not decades to come.

1 year ago 1906 564 35 14
Notes from IASEAI: On agents, ethics, and catastrophic risks

I wrote up some notes on my trip to the first @IASEAIorg conference—mostly on the importance of "agents", the risks that they might pose, and how/whether we can mitigate them.

camrobjones.substack.com/p/notes-from...

1 year ago 0 0 0 0