
Posts by Jonas R. Kunst

Post image

New paper: we propose a theory to help explain variation in democratic backsliding. We look at the case of Russia, Israel, & the US. We posit that collective memories about democracy influence models of collective action.

advances.in/psychology/1...

Led by the brilliant Neil Lavie-Driver @advances.in

1 week ago 20 9 0 0
Preview
Weaponising the past: An extended SIMCA model for how social identity and collective memory shape variation in collective action responses to democratic backsliding - advances.in/psychology Explore a SIMCA-based framework on how identity and collective memory drive resistance or acquiescence to democratic backsliding in Russia, Israel, and the U.S.

New article! Why do some societies resist democratic backsliding while others remain indifferent or actively support it? Neil Lavie-Driver and @profsanderlinden.bsky.social extend the social identity model of collective action to explore the role of collective memory. advances.in/psychology/1...

1 week ago 7 4 1 0

Very important work!

1 week ago 0 0 0 0

Fascinating study on Russian public opinion from August 2022. Researchers caught a rare window for an independent survey before the platform was closed to academic polling. They found that 69% of respondents construed the war as undermining their social values. A vital look behind the propaganda.

1 week ago 3 0 0 0
Sage Journals: Discover world-class research Subscription and open access journals from Sage, the world's leading independent academic publisher.

Open access link to the article: doi.org/10.1177/1088...

3 weeks ago 0 1 0 0

This was a fruitful collaboration with Milan Obaidi, @antongollwitzer.bsky.social, @petterbb1969.bsky.social, @yhinrichs.bsky.social, Neha Saini, and @daniel-thilo.bsky.social. Thanks to UiO:Demokrati for funding our work and the AI:Democracy group.

3 weeks ago 2 1 1 0

Crucially, we emphasize that these technologies do not operate in a vacuum, but rather exploit established cognitive, social, and personality risk factors. To address these challenges, we conclude the review by outlining specific stage-based policy measures and directions for future research.

3 weeks ago 0 0 1 0
Post image

Following this is group integration, where individuals are absorbed into extremist networks, a process increasingly reinforced by generative AI and bot swarms. This trajectory ultimately sets the stage for violent extremist action.

3 weeks ago 0 0 1 0
Post image

Next, during reinforcement, algorithms create filter bubbles that leverage biases and strengthen extremist beliefs.

3 weeks ago 2 0 1 0
Post image

The framework maps the socio-technical architecture across four distinct stages. It begins with exposure, where recommender systems and virality metrics push users toward extreme content.

3 weeks ago 0 0 1 0

Our review synthesizes process models of radicalization with research on artificial intelligence and psychological mechanisms to propose a four-stage framework. We explore how algorithms and generative AI intersect with human vulnerabilities to drive individuals toward violent extremism.

3 weeks ago 0 0 1 0
Post image

How exactly does artificial intelligence drive individuals toward violent extremism?

I am excited to share our recent publication in Personality and Social Psychology Review, "Intelligent Systems, Vulnerable Minds: A Framework for Radicalization to Violence in the Age of AI."

3 weeks ago 6 2 1 1
OSF

Preprint openly accessible here: osf.io/preprints/ps...

4 weeks ago 1 0 0 0
PNAS Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...

You can read the full paper here: www.pnas.org/doi/10.1073/...

4 weeks ago 2 0 1 0
Post image

Crucially, only offensive intentions were consistently associated with macrolevel societal dysfunction, such as political terror and internal conflicts. This suggests that preventing radicalization requires tailored interventions rather than uniform strategies.

4 weeks ago 1 2 1 0
Post image

Individuals high in Machiavellianism and narcissism showed stronger inclinations toward defensive extremism. On the other hand, social dominance orientation and religious fundamentalism were more strongly linked to offensive extremism.

4 weeks ago 0 0 1 0

Our data show that defensive intentions are far more prevalent, exceeding offensive intentions in 56 of the 58 nations surveyed. Interestingly, these two forms appeal to different psychological profiles.

4 weeks ago 0 0 1 0

We found that the readiness for intergroup violence is not a single mindset. Instead, it is driven by two distinct psychological motivations: defensive extremism, which aims to protect a group from threats, and offensive extremism, which seeks to establish dominance.

4 weeks ago 0 0 1 0
Post image

What really drives someone to support violent extremism? I am thrilled to share our new article published today in PNAS. Together with an incredible team of collaborators, we conducted a preregistered study across 58 countries with over 18,000 participants. This is what we found.

4 weeks ago 30 16 1 1
Preview
Canadian politicians are being sold AI-powered civilian patrols A new app gets around bot bans by recruiting real people to post political messages generated by AI.

My latest story is out. Bots get banned on social media.
The workaround? Real humans posting AI-generated messages in a gamified system, coordinated by an AI command centre. And it's being offered to your local town councillor.

www.nationalobserver.com/2026/02/24/i...

1 month ago 79 49 8 16
Post image
1 month ago 0 0 0 0
Post image

Anthropic trains its models on the entire internet without paying for it. Also Anthropic:

1 month ago 4 4 1 0
Post image

"Eighty percent of Americans said voters have a responsibility to keep up with the news, but just 8% said they had a responsibility to pay for it." www.semafor.com/newsletter/0...

2 months ago 335 108 30 31
Post image

📝 New preprint on the threat of cyborg propaganda to democracy.

We discuss how the key divide in online influence is no longer bots vs humans. We are entering an era of 'cyborg propaganda':

Verified human identities disseminate centrally generated, AI-crafted narratives 🧵

osf.io/preprints/ps...

2 months ago 17 5 1 0
Preview
Job Opportunity at the University of Kent: Lecturer in Psychology The School of Psychology is seeking to appoint a Lecturer in Psychology and a Lecturer in Psychology focusing on Cognition and Neuroscience to join a collegial, supportive, and intellectually vibrant...

Kent Psychology is hiring 🎓We have two posts: 1) open area and 2) cog neuro. More details can be found here: jobs.kent.ac.uk/vacancy.aspx... Feel free to reach out with questions!

2 months ago 21 19 0 0

Jon Roozenbeek, @daniel-thilo.bsky.social, @jayvanbavel.bsky.social, @profsanderlinden.bsky.social , @rorywh.bsky.social, and Live Leonhardsen Wilhelmsen.

2 months ago 2 0 0 0
OSF

The preprint is available here:
doi.org/10.31234/osf...

Thankful for another fruitful collaboration with @kbierwiaczonek.bsky.social, Meeyoung Cha, @omidvebrahimi.bsky.social, Marc Fawcett-Atkinson, Asbjørn Følstad, @antongollwitzer.bsky.social, @nckobis.bsky.social, @garymarcus.bsky.social,

2 months ago 2 1 1 0

📉 The Result: A coordinated campaign that looks like spontaneous, organic public sentiment, effectively bypassing current bot detection filters.

This raises a massive regulatory paradox: How do you regulate coordinated inauthentic behavior when the "bot" is a real citizen exercising free speech?

2 months ago 3 0 2 0
The Operational Workflow of Cyborg Propaganda. Cyborg propaganda utilizes an AI multiplier to scale distinctiveness. A central coordination hub issues a single strategic directive (e.g., “Oppose the new tax bill”). This coordination hub may retrieve data from an AI system monitoring emerging narratives and shifts on social media. The hub may vary in terms of automation versus human involvement. To overcome recruitment bottlenecks, the system utilizes network harvesting, where users are incentivized to supply data on friends and neighbors, linking private social graphs with public voter registries. An AI-driven multiplier engine then processes the operative directive alongside individual user profiles, analyzing their posting history, syntax, and background characteristics of each participant. The system generates thousands of unique, context-aware message variations, effectively performing ‘style transfer’ to match each user’s voice. Simultaneously, the system may employ gamification or monetary rewards to incentivize user engagement. Verified human users then ratify and broadcast these posts or comments. The resulting information cascade creates a manufactured consensus that exhibits the linguistic diversity of a genuine grassroots movement, thereby bypassing duplicate-content filters and signaling authenticity to social peers. Crucially, the system operates as a closed feedback loop: The AI Monitor continuously tracks the performance of these posts or comments on social media, feeding engagement data back into the organizer directive to adjust strategy in real-time. Simultaneously, successful narratives are harvested to reinforce and fine-tune the AI Multiplier to produce increasingly persuasive content in subsequent cycles.


The Core Mechanism:

🤖 The Tech: A central "organizer" sends a directive. An AI multiplier generates thousands of unique, style-matched captions.

👨 The Human: Real, verified users review and click "post".
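The directive → multiplier → human-ratification pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not code from the paper: `Profile`, `make_variant`, and `multiplier_engine` are invented names, and a real system would use an LLM for the style-transfer step rather than string templates.

```python
# Hypothetical sketch of the cyborg-propaganda multiplier loop.
# Assumption: per-user "voice" is reduced to a tone label and a
# recurring catchphrase harvested from posting history.
from dataclasses import dataclass

@dataclass
class Profile:
    handle: str
    tone: str         # e.g. "casual" or "formal", inferred from posting history
    catchphrase: str  # recurring phrasing harvested from the user's posts

def make_variant(directive: str, profile: Profile) -> str:
    """Stand-in for AI style transfer: rephrase one central directive
    in each user's voice so duplicate-content filters see unique posts."""
    if profile.tone == "casual":
        return f"{profile.catchphrase} honestly, {directive.lower()}"
    return f"{profile.catchphrase} I believe we must {directive.lower()}"

def multiplier_engine(directive: str, profiles: list[Profile]) -> dict[str, str]:
    """Central hub fans one strategic directive out to per-user drafts;
    verified humans then review and post them (the 'ratify' step)."""
    return {p.handle: make_variant(directive, p) for p in profiles}

profiles = [
    Profile("alice", "casual", "ok so"),
    Profile("bob", "formal", "As a longtime resident,"),
]
drafts = multiplier_engine("Oppose the new tax bill", profiles)
```

Because every draft is unique yet carries the same directive, the output mimics the linguistic diversity of organic sentiment, which is exactly why duplicate-content filters miss it.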

2 months ago 7 3 1 0

In our new perspective paper (preprint), my colleagues and I explore how partisan apps and Generative AI are creating a new architecture of influence. This isn’t about fake accounts; it is about verified humans using AI-generated scripts.

2 months ago 3 1 1 0