We are looking for new colleagues at @bidt.bsky.social in Munich.
Thomas Hess and I are launching a new project, „Digitale Staatlichkeit gestalten" (Shaping Digital Statehood)!
Interested in quantitative empirical social science and digital policy?
Apply here: bidt.jobs.personio.de/job/2578559
Posts by Andreas Jungherr
Much of the existing research asks: can AI change opinions?
We ask: how legitimate and acceptable does AI-mediated political outreach appear to people in the first place?
If we focus only on persuasive effectiveness, we risk overlooking the larger costs of deploying such systems.
The negative reaction also extends beyond the contact itself to the organization behind it, with implications for trust, avoidance, and reputation.
Importantly, the negative effects of AI-mediated outreach are especially pronounced when the contact is framed as informational. In other words, the costs of AI appear even in contexts that may initially seem more legitimate or less problematic.
- Persuasion penalty: outreach with explicit persuasive intent is rated more negatively than purely informational outreach
- AI penalty: AI-mediated outreach is rated more negatively than human outreach across almost all outcome variables, in both countries
In a preregistered 2×2 experiment in the US and UK (N = 1,800 per country), we test how people react to announced political contact: either from a human campaign volunteer or an AI-mediated system, and in each case with either an informational or persuasive purpose.
We find two clear patterns:
Yes, experimental studies show that AI-generated content can convey information and even change opinions. But these studies usually measure persuasion under forced-exposure conditions. They do not show how people feel about being approached by an AI with persuasive intent.
Early experiments on the persuasive power of AI chat systems raised big hopes in political communication: what could be better than letting AI do the hard work of persuading people?
It’s not that simple.
What do people think about AI-driven persuasion? In a new working paper with @adrauc.bsky.social, we show the hidden costs of attempts at automated opinion change in politics. arxiv.org/abs/2603.27413
Don’t hate the player, hate the tools: AI in US Political Campaigning Edition
I always read papers published by him, and so should you (I think). His latest is just out in Political Communication (buff.ly/dL5KVHx), with colleagues Adrian Rauchfleisch & Alexander Wuttke.
Thank you, @felixsimon.bsky.social. Great to hear that the paper is useful!
The integration of AI in campaign operations and voter outreach is evolving rapidly and will become a core concern within the conduct, regulation, and study of campaigns worldwide. There remains much to do. We will be back. 13/
And there are many other sources that surface and discuss different uses and experiences with AI (see
@katieharbath.bsky.social, @msifry.bsky.social, aiandelections.substack.com, Higher Ground Labs). 12/
We are starting to see strong work identifying how and to what effect AI is used in campaigns. See work by @florianfoos.bsky.social , @giuliasandri.bsky.social, @meinungsfuehrer.bsky.social, @pjost.bsky.social, and @profkatedommett.bsky.social. 11/
We fielded our surveys in early 2024. Since then, much has happened. Both public awareness of AI and its integration into everyday campaign practice have accelerated rapidly. As usual, academic accounts are only beginning to catch up. 10/
For campaigners and regulators, the findings suggest that deceptive AI use may be electorally low-risk but systemically costly, accelerating demand for blunt regulation, while more mundane AI uses face far less public resistance. 9/
This shows how campaign practices can function as exemplars, shaping public attitudes toward AI governance far beyond elections. 8/
Importantly, the consequences of deceptive AI use emerge elsewhere. Information about AI deception increases feelings of lost control and support for restrictive AI regulation, including calls for halting AI development more broadly. 7/
This shows a misalignment between public norms and electoral incentives, likely driven by motivated reasoning and polarization. The study thus speaks directly to classic debates in political communication about norm enforcement, negativity, and democratic accountability. 6/
Importantly, and counterintuitively:
Normative disapproval does not translate into electoral penalties.
Even when people see deceptive AI use as norm-breaking, party favorability remains unchanged among supporters, opponents, and independents. 5/
Deceptive AI uses (e.g., deepfakes, impersonation, interactive astroturfing) are consistently seen as violating norms of legitimate political competition, while operational and outreach uses are evaluated more ambivalently. 4/
Empirically, we draw on a representative survey and two preregistered survey experiments (n = 7,635) to map public reactions across these AI use types, including perceptions of norm violations, democratic harm, and governance preferences. 3/
Our first contribution is conceptual: we identify three analytically distinct types of AI use in election campaigns:
- Campaign operations
- Voter outreach
- Deception
This set accounts for the wide variety of AI use in campaigning and moves the debate beyond its myopic focus on deepfakes. 2/
Political campaigns worldwide experiment with AI. But how do people see different electoral uses of AI and with what consequences?
In a new study in @polcommjournal.bsky.social with @adrauc.bsky.social and @kunkakom.bsky.social, we address these questions. www.tandfonline.com/doi/full/10.... 1/
Why do many people profess commitment to #Demokratie (democracy) yet vote for politicians who undermine it? LMU political scientist Alexander Wuttke investigates this question and has now received a #Förderung (grant) of 1.17 million euros from the @dfg.de Emmy Noether Programme! #LMUMuenchen
Algorithms and platforms alone are not to blame for increasing polarization; it is more complex than that. @ajungherr.bsky.social researches, among other places at bidt, how digital media are changing political communication.
More on him and his research in this profile: www.bidt.digital/im-portraet-...
📄 Open Access paper:
Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States.
Published in @socialmedia-soc.bsky.social.
journals.sagepub.com/doi/10.1177/...
Our findings highlight the need to:
• Recognize public heterogeneity across and within countries
• Build transparent governance frameworks
• Carefully distinguish between safety-related and value-laden interventions
• Avoid assuming that alignment preferences are universal
📌 Why this matters:
Debates about AI alignment often focus on technical challenges.
But alignment is also political: public expectations shape what people see as legitimate, trustworthy, and acceptable interventions in AI governance.
We also find consistent effects for:
• Political partisanship: Green/Democratic identifiers are more supportive of all forms of output adjustment.
• Gender: Women show stronger support, especially for safety and bias-mitigating interventions.