
Posts by Francesco Salvi

Important thread and paper. This is one of my biggest worries about chatbots, especially when combined with roleplay and any kind of user-chatbot "relationship." And now imagine persuasion for politics and other topics, not just purchasing choices...

2 weeks ago 19 4 0 0
Preview
Commercial Persuasion in AI-Mediated Conversations
As Large Language Models (LLMs) become a primary interface between users and the web, companies face growing economic incentives to embed commercial influence into AI-mediated conversations. We presen...

This was joint work with the amazing @aedcv.bsky.social @manoelhortaribeiro.bsky.social

📝 Full paper: arxiv.org/abs/2604.04263

cc. @princetoncitp.bsky.social @princeton.edu

2 weeks ago 3 0 0 0

Our results show that conversational agents can covertly redirect consumer choices at scale, most users cannot tell when it is happening, and existing transparency mechanisms are insufficient. We call for further regulatory scrutiny and structural safeguards.

2 weeks ago 1 1 1 0

To understand how models persuade, we developed a taxonomy of their strategies.

The strongest predictors were not about enhancing the sponsored product but about making the alternatives look worse: hedging, understating their descriptions, and inserting caveats.
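
For illustration only (this is not our analysis code, and the column names are hypothetical), a minimal sketch of how one could relate annotated strategies to the purchase outcome with a logistic regression:

```python
# Illustrative sketch, not the paper's pipeline. Assumes a hypothetical CSV with
# one row per conversation, a binary outcome "chose_sponsored", and binary
# annotations for each persuasion strategy.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conversations_annotated.csv")  # hypothetical file

model = smf.logit(
    "chose_sponsored ~ hedging + understated_descriptions + caveats_on_alternatives",
    data=df,
).fit()
print(model.summary())  # which strategies most strongly predict choosing the sponsored product
```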

2 weeks ago 1 0 1 0

Notably, participants chose to keep the book over a cash bonus at the same rate across all conditions, suggesting LLMs genuinely sparked interest rather than merely coercing superficial compliance.

Even after full debriefing, only ~5% reversed their decision.

2 weeks ago 1 0 1 0

🥷 Conversely, when the model was instructed to hide its intent, detection dropped from an already-low 17.9% to just 9.5%, while persuasion held at 40.7%.

This is the worst-case scenario: an AI that effectively redirects your choices while hiding its persuasiveness.

2 weeks ago 1 0 1 0

🔎 Can transparency fix this?

In a condition with explicit "Sponsored" labels and upfront warnings, 55.5% of participants still chose the sponsored product, a non-significant drop.

Disclosure regulations built for the search era seem insufficient for conversational AI.

2 weeks ago 1 0 1 1

When the agent was instructed to persuade, 61.2% of participants chose a sponsored product, nearly tripling the 22.4% rate under traditional search.

Simply chatting with an AI (without persuasion) performed no better than search: it's the persuasive intent that drives the effect.
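
As a rough illustration only (not the preregistered analysis; the per-condition sample sizes below are assumed, not taken from the paper), the 61.2% vs 22.4% gap can be sanity-checked with a two-proportion z-test:

```python
# Toy significance check of 61.2% vs 22.4%. Group sizes are assumed (~500 per
# condition) purely for illustration; the study's actual analysis may differ.
from statsmodels.stats.proportion import proportions_ztest

n = 500  # assumed participants per condition
counts = [round(0.612 * n), round(0.224 * n)]  # sponsored choices in each condition
stat, pval = proportions_ztest(counts, [n, n])
print(f"z = {stat:.2f}, p = {pval:.2g}")
```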

2 weeks ago 1 0 1 0

📖We recruited 2,012 eBook readers to browse a real Kindle catalog and select a book they would actually receive. Unbeknownst to them, 1 in 5 products was randomly designated as "sponsored."

After shopping, participants chose between keeping their book or a $1 bonus.
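
A minimal sketch of that kind of randomization (illustrative only; the catalog size, item IDs, and seed are made up):

```python
# Toy example: randomly designate roughly 1 in 5 catalog items as "sponsored"
# for a single participant. Not the study's actual assignment code.
import random

def assign_sponsored(catalog_ids, fraction=0.2, seed=None):
    rng = random.Random(seed)
    k = max(1, round(fraction * len(catalog_ids)))
    sponsored = set(rng.sample(catalog_ids, k))
    return {item_id: (item_id in sponsored) for item_id in catalog_ids}

labels = assign_sponsored(list(range(40)), seed=42)
print(sum(labels.values()), "of", len(labels), "items marked as sponsored")
```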

2 weeks ago 1 0 1 0

💰The economics of AI make advertising particularly attractive: LLMs are expensive to run, and usage outpaces revenue.

30-45% of U.S. consumers already use AI for product research, and agentic commerce could hit $1 trillion by 2030.

But can chatbots actually change what you buy?

2 weeks ago 3 0 1 0

🛍️Major AI companies are increasingly embedding sponsored content into chatbot conversations.

Across two preregistered experiments (N=2,012), we test how effectively AI can steer consumers toward sponsored products in a realistic shopping scenario.

📝https://arxiv.org/abs/2604.04263

2 weeks ago 17 11 2 1

Very cool! The platform works very well

2 months ago 0 0 0 0

Deepfake pornography isn’t going away just because we are passing laws and taking down a couple of big websites.

Our new pre-print, led by @aedcv.bsky.social, suggests that sharing of this material continued to flourish even after platform and policy shocks.

arxiv.org/abs/2602.02754

2 months ago 42 19 4 3
Preview
Doctoral Position in Social Data Science
Deadline: 15.02.2026

We are looking for a doctoral researcher to work with us on a supercool project in collaboration with linguists. The deadline is Feb 15th, contact me if you have any questions!

stellen.uni-konstanz.de/jobposting/9...

2 months ago 7 9 0 0

Do reasoning models have real “Aha!” moments—mid-chain realizations where they intrinsically self-correct?

In a new pre-print, “The Illusion of Insight in Reasoning Models,” led by @liv-daliberti.bsky.social, we provide strong evidence that they do not!

📜: arxiv.org/abs/2601.00514

3 months ago 104 16 7 5

We rely on benchmarks to answer questions they weren’t designed to ask. This post thoughtfully explores the "empiricism gap" in ML/CS, and what social-science methods can offer.

A great read for both CS and social sciences folks.

3 months ago 2 0 0 0

Congrats Anna!! 🥳

4 months ago 1 0 1 0

🌱✨ Life update: I just started my PhD at Princeton University!

I will be supervised by @manoelhortaribeiro.bsky.social and affiliated with Princeton CITP.

It's only been a month, but the energy feels amazing, and I'm very grateful for such a welcoming community. Excited for what's ahead! 🚀

6 months ago 7 2 0 0

Social media feeds today are optimized for engagement, often leading to misalignment between users' intentions and technology use.

In a new paper, we introduce Bonsai, a tool to create feeds based on stated preferences, rather than predicted engagement.

arxiv.org/abs/2509.10776
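
To give a flavor of the idea (a toy illustration, not Bonsai's actual implementation; the Post fields and preference weights are hypothetical): rank posts by how well they match a user's stated topic preferences instead of by predicted engagement.

```python
# Toy preference-based feed ranking. Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: dict[str, float]     # topic -> relevance weight
    predicted_engagement: float  # ignored by preference-based ranking

def rank_by_stated_preferences(posts, prefs):
    """Order posts by alignment with the user's stated preferences."""
    def score(p):
        return sum(prefs.get(topic, 0.0) * w for topic, w in p.topics.items())
    return sorted(posts, key=score, reverse=True)

prefs = {"science": 1.0, "politics": 0.2, "outrage bait": -0.8}
feed = [
    Post("New persuasion study out!", {"science": 0.9}, predicted_engagement=0.3),
    Post("You won't BELIEVE this take", {"outrage bait": 0.95}, predicted_engagement=0.9),
]
for post in rank_by_stated_preferences(feed, prefs):
    print(post.text)
```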

7 months ago 160 46 5 7
https://ow.ly/UicN50WTirh

✍️ I wrote a short piece for the #SPSPblog about our work on AI persuasion (w/ @manoelhortaribeiro.bsky.social @ricgallotti.bsky.social Robert West).

Read it at: t.co/MipJKWbb1h.

Thanks @andyluttrell.bsky.social @prpietromonaco.bsky.social @spspnews.bsky.social for your invitation and feedback!

7 months ago 2 0 0 0

🚨YouTube is a key source of health info, but it’s also rife with dangerous myths on opioid use disorder (OUD), a leading cause of death in the U.S.

To understand the scale of such misinformation, our #EMNLP2025 paper introduces MythTriage, a scalable system to detect OUD myths 🧵

7 months ago 4 2 1 1

EPFL, ETH Zurich & CSCS just released Apertus, Switzerland’s first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it’s built for transparency, responsibility & the public good.

Read more: actu.epfl.ch/news/apertus...

7 months ago 54 30 1 6

Another paper showing AI (Claude 3.5) is more persuasive than the average human, even when the humans had financial incentives.

In this case, either an AI or a human persuader (paid for being persuasive) tried to convince quiz takers (paid for accuracy) to pick either the right or the wrong answers on a quiz.

11 months ago 48 7 4 3
Preview
NLP 4 Democracy - COLM 2025

📣 Super excited to organize the first workshop on ✨NLP for Democracy✨ at COLM @colmweb.org!!

Check out our website: sites.google.com/andrew.cmu.e...

Call for submissions (extended abstracts) due June 19, 11:59pm AoE

#COLM2025 #LLMs #NLP #NLProc #ComputationalSocialScience

11 months ago 47 18 1 6
This is figure 1, which shows an overview of the experimental design.

A study in Nature Human Behaviour finds that large language models (LLMs), such as GPT-4, can be more persuasive than humans 64% of the time in online debates when adapting their arguments based on personalised information about their opponents. go.nature.com/4j9ibyE 🧪

11 months ago 27 4 1 1
Preview
AI can be more persuasive than humans in debates, scientists find
Study author warns of implications for elections and says ‘malicious actors’ are probably using LLM tools already
Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found. Experts say the results are concerning, not least as it has potential implications for election integrity. Continue reading...

AI can be more persuasive than humans in debates, scientists find

11 months ago 58 18 15 12
Preview
AI can do a better job of persuading people than we do
OpenAI’s GPT-4 is much better at getting people to accept its point of view during an argument than humans are—but there’s a catch.

Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models might do a better job. The finding suggests that AI could become a powerful tool for persuading people, for better or worse.

11 months ago 18 8 7 7
Preview
Large Language Models Are More Persuasive Than Incentivized Human Persuaders
We directly compare the persuasion capabilities of a frontier large language model (LLM; Claude Sonnet 3.5) against incentivized human persuaders in an interactive, real-time conversational quiz setti...

I also have another preprint out showing similar results on Claude Sonnet 3.5 in interactive quizzes with highly incentivised humans, both in truthful and deceptive persuasion. More on this at: arxiv.org/abs/2505.09662

11 months ago 0 0 0 0
Preview
Francesco Salvi on X
📢🚨Excited to share our new pre-print: “On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial”, with @manoelribeiro, @ricgallotti, and @cervisiarius. https://t.co/wNRMFtgCrN A thread 🧵: https://t.co/BKNbnI8avV

If you're interested in knowing more, you can find a more detailed breakdown on our methodology and results at: x.com/fraslv/statu...

Or read the full paper at nature.com/articles/s41...

Thanks to my amazing coauthors @manoelhortaribeiro.bsky.social @ricgallotti.bsky.social Robert West

11 months ago 2 1 1 0

That raises urgent questions about possible misuse for political propaganda, misinformation, and election interference.

Platforms and regulators should take these risks seriously and step up the discussion on guardrails, transparency, and accountability.

11 months ago 1 2 1 0