
Posts by Andreas Jungherr


We are looking for reinforcements at @bidt.bsky.social in Munich.

Thomas Hess and I are launching a new project, "Digitale Staatlichkeit gestalten" (Shaping Digital Statehood)!

Interested in quantitative empirical social science and digital policy?

Apply here: bidt.jobs.personio.de/job/2578559

1 day ago

Much of the existing research asks: can AI change opinions?

We ask: how legitimate and acceptable does AI-mediated political outreach appear to people in the first place?

If we focus only on persuasive effectiveness, we risk overlooking the larger costs of deploying such systems.

6 days ago

The negative reaction also extends beyond the contact itself to the organization behind it, with implications for trust, avoidance, and reputation.

6 days ago

Importantly, the negative effects of AI-mediated outreach are especially pronounced when the contact is framed as informational. In other words, the costs of AI appear even in contexts that may initially seem more legitimate or less problematic.

6 days ago

- Persuasion penalty: outreach with explicit persuasive intent is rated more negatively than purely informational outreach

- AI penalty: AI-mediated outreach is rated more negatively than human outreach across almost all outcome variables, in both countries

6 days ago

In a preregistered 2×2 experiment in the US and UK (N = 1,800 per country), we test how people react to announced political contact: either from a human campaign volunteer or an AI-mediated system, and in each case with either an informational or persuasive purpose.

We find two clear patterns:
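The factorial design described above can be laid out in a few lines of code. This is a minimal sketch, not the authors' analysis pipeline; the equal allocation of respondents across cells is my assumption (the thread states only the total N per country):

```python
from itertools import product

# The two crossed factors from the preregistered 2x2 design:
# sender (human volunteer vs. AI-mediated system) and
# purpose (informational vs. persuasive).
SENDERS = ("human volunteer", "AI-mediated system")
PURPOSES = ("informational", "persuasive")

def design_cells(n_per_country=1800):
    """Return the four experimental cells, assuming equal allocation."""
    cells = list(product(SENDERS, PURPOSES))
    per_cell = n_per_country // len(cells)  # 1800 // 4 = 450 (assumed split)
    return {cell: per_cell for cell in cells}

cells = design_cells()
# Four cells; under the equal-allocation assumption, 450 respondents each.
```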

6 days ago

Yes, experimental studies show that AI-generated content can convey information and even change opinions. But these studies usually measure persuasion under forced-exposure conditions. They do not show how people feel about being approached by an AI with persuasive intent.

6 days ago

Early experiments on the persuasive power of AI chat systems raised big hopes in political communication: what could be better than letting AI do the hard work of persuading people?

It’s not that simple.

6 days ago

What do people think about AI-driven persuasion? In a new working paper with @adrauc.bsky.social, we show the hidden costs of attempts at automated opinion change in politics. arxiv.org/abs/2603.27413

6 days ago

Don’t hate the player, hate the tools: AI in US Political Campaigning Edition

I always read his papers, and so should you (I think). His latest is just out in Political Communication (buff.ly/dL5KVHx), with colleagues Adrian Rauchfleisch & Alexander Wuttke.

1 month ago

Thank you, @felixsimon.bsky.social. Great to hear that the paper is useful!

1 month ago

The integration of AI in campaign operations and voter outreach is evolving rapidly and will become a core concern within the conduct, regulation, and study of campaigns worldwide. There remains much to do. We will be back. 13/

1 month ago
Preview
AI & Elections Clinic | TJ Pyche | Substack: The AI & Elections Clinic is designed to be the place that tracks, nudges, slows, and shows how artificial intelligence and elections interact in the years ahead.

And there are many other sources that surface and discuss different uses and experiences with AI (see
@katieharbath.bsky.social, @msifry.bsky.social, aiandelections.substack.com, Higher Ground Labs). 12/

1 month ago

We are starting to see strong work identifying how and to what effect AI is used in campaigns. See work by @florianfoos.bsky.social , @giuliasandri.bsky.social, @meinungsfuehrer.bsky.social, @pjost.bsky.social, and @profkatedommett.bsky.social. 11/

1 month ago

We fielded our surveys in early 2024. Since then, much has happened: both public awareness of AI and its integration into everyday campaign practice have accelerated rapidly. As usual, academic accounts are only beginning to catch up. 10/

1 month ago

For campaigners and regulators, the findings suggest that deceptive AI use may be electorally low-risk but systemically costly, accelerating demand for blunt regulation, while more mundane AI uses face far less public resistance. 9/

1 month ago

This shows how campaign practices can function as exemplars, shaping public attitudes toward AI governance far beyond elections. 8/

1 month ago

Importantly, the consequences of deceptive AI use emerge elsewhere. Information about AI deception increases feelings of lost control and support for restrictive AI regulation, including calls for halting AI development more broadly. 7/

1 month ago

This shows a misalignment between public norms and electoral incentives, likely driven by motivated reasoning and polarization. The study thus speaks directly to classic debates in political communication about norm enforcement, negativity, and democratic accountability. 6/

1 month ago

Importantly, and counterintuitively:
Normative disapproval does not translate into electoral penalties.
Even when people see deceptive AI use as norm-breaking, party favorability remains unchanged among supporters, opponents, and independents. 5/

1 month ago

Deceptive AI uses (e.g., deepfakes, impersonation, interactive astroturfing) are consistently seen as violating norms of legitimate political competition, while operational and outreach uses are evaluated more ambivalently. 4/

1 month ago

Empirically, we draw on a representative survey and two preregistered survey experiments (n = 7,635) to map public reactions across these AI use types, including perceptions of norm violations, democratic harm, and governance preferences. 3/

1 month ago

Our first contribution is conceptual: we identify three analytically distinct types of AI use in election campaigns:

- Campaign operations
- Voter outreach
- Deception

This set accounts for the wide variety of AI use in campaigning and moves the debate beyond its myopic focus on deepfakes. 2/

1 month ago

Political campaigns worldwide experiment with AI. But how do people see different electoral uses of AI and with what consequences?

In a new study in @polcommjournal.bsky.social with @adrauc.bsky.social and @kunkakom.bsky.social, we address these questions. www.tandfonline.com/doi/full/10.... 1/

1 month ago
Preview
AI and Democracy: Emmy Noether Funding for LMU Political Scientist. Alexander Wuttke receives funding from the DFG's Emmy Noether Programme.

Why do many people profess commitment to #Demokratie yet vote for politicians who undermine it? That is the question studied by LMU political scientist Alexander Wuttke, who has now received #Förderung of 1.17 million from the @dfg.de Emmy Noether Programme! #LMUMuenchen

2 months ago

Algorithms or platforms alone are not to blame for the growing polarization; it is more complex than that. @ajungherr.bsky.social researches, among other places at the bidt, how digital media are changing political communication.

More on him and his research in this portrait: www.bidt.digital/im-portraet-...

3 months ago

📄 Open Access paper:
Public Opinion on the Politics of AI Alignment: Cross-National Evidence on Expectations for AI Moderation From Germany and the United States.
Published in @socialmedia-soc.bsky.social.
journals.sagepub.com/doi/10.1177/...

4 months ago

Our findings highlight the need to:
• Recognize public heterogeneity across and within countries
• Build transparent governance frameworks
• Carefully distinguish between safety-related and value-laden interventions
• Avoid assuming that alignment preferences are universal

4 months ago

📌 Why this matters:
Debates about AI alignment often focus on technical challenges.
But alignment is also political: public expectations shape what people see as legitimate, trustworthy, and acceptable interventions in AI governance.

4 months ago

We also find consistent effects for:
• Political partisanship: Green/Democratic identifiers more supportive of all forms of output adjustments.
• Gender: Women show stronger support, especially for safety and bias-mitigating interventions.

4 months ago