New paper: we propose a theory to help explain variation in democratic backsliding, looking at the cases of Russia, Israel, and the US. We posit that collective memories about democracy shape models of collective action.
advances.in/psychology/1...
Led by the brilliant Neil Lavie-Driver @advances.in
Posts by Jonas R. Kunst
New article! Why do some societies resist democratic backsliding while others remain indifferent or actively support it? Neil Lavie-Driver and @profsanderlinden.bsky.social extend the social identity model of collective action to explore the role of collective memory. advances.in/psychology/1...
Very important work!
Fascinating study on Russian public opinion from August 2022. The researchers caught a rare window for an independent survey before academic polling was shut down. They found that 69% of respondents construed the war as undermining their social values. A vital look behind the propaganda.
This was a fruitful collaboration with Milan Obaidi, @antongollwitzer.bsky.social, @petterbb1969.bsky.social, @yhinrichs.bsky.social, Neha Saini, and @daniel-thilo.bsky.social. Thanks to UiO:Demokrati for funding our work and the AI:Democracy group.
Crucially, we emphasize that these technologies do not operate in a vacuum, but rather exploit established cognitive, social, and personality risk factors. To address these challenges, we conclude the review by outlining specific stage-based policy measures and directions for future research.
Following this is group integration, where individuals are absorbed into extremist networks, a process increasingly reinforced by generative AI and bot swarms. This trajectory ultimately sets the stage for violent extremist action.
Next, during reinforcement, algorithms create filter bubbles that leverage biases and strengthen extremist beliefs.
The framework maps the socio-technical architecture across four distinct stages. It begins with exposure, where recommender systems and virality metrics push users toward extreme content.
Our review synthesizes process models of radicalization with research on artificial intelligence and psychological mechanisms to propose a four-stage framework. We explore how algorithms and generative AI intersect with human vulnerabilities to drive individuals toward violent extremism.
How exactly does artificial intelligence drive individuals toward violent extremism?
I am excited to share our recent publication in Personality and Social Psychology Review, "Intelligent Systems, Vulnerable Minds: A Framework for Radicalization to Violence in the Age of AI."
Crucially, only offensive intentions were consistently associated with macrolevel societal dysfunction, such as political terror and internal conflicts. This suggests that preventing radicalization requires tailored interventions rather than uniform strategies.
Individuals high in Machiavellianism and narcissism showed stronger inclinations toward defensive extremism. In contrast, social dominance orientation and religious fundamentalism were more strongly linked to offensive extremism.
Our data show that defensive intentions are far more prevalent, exceeding offensive intentions in 56 of the 58 nations surveyed. Interestingly, the two forms appeal to different psychological profiles.
We found that the readiness for intergroup violence is not a single mindset. Instead, it is driven by two distinct psychological motivations: defensive extremism, which aims to protect a group from threats, and offensive extremism, which seeks to establish dominance.
What really drives someone to support violent extremism? I am thrilled to share our new article published today in PNAS. Together with an incredible team of collaborators, we conducted a preregistered study across 58 countries with over 18,000 participants. This is what we found.
My latest story is out. Bots get banned on social media.
The workaround? Real humans posting AI-generated messages in a gamified system, coordinated by an AI command centre. And it's being offered to your local town councillor.
www.nationalobserver.com/2026/02/24/i...
Anthropic trains its models on the entire internet without paying for it. Also Anthropic:
"Eighty percent of Americans said voters have a responsibility to keep up with the news, but just 8% said they had a responsibility to pay for it." www.semafor.com/newsletter/0...
📝 New preprint on the threat of cyborg propaganda to democracy.
We discuss how the key divide in online influence is no longer bots vs humans. We are entering an era of 'cyborg propaganda':
Verified human identities disseminate centrally generated, AI-crafted narratives 🧵
osf.io/preprints/ps...
Kent Psychology is hiring 🎓We have two posts: 1) open area and 2) cog neuro. More details can be found here: jobs.kent.ac.uk/vacancy.aspx... Feel free to reach out with questions!
Jon Roozenbeek, @daniel-thilo.bsky.social, @jayvanbavel.bsky.social, @profsanderlinden.bsky.social , @rorywh.bsky.social, and Live Leonhardsen Wilhelmsen.
The preprint is available here:
doi.org/10.31234/osf...
Thankful for another fruitful collaboration with @kbierwiaczonek.bsky.social, Meeyoung Cha, @omidvebrahimi.bsky.social, Marc Fawcett-Atkinson, Asbjørn Følstad, @antongollwitzer.bsky.social, @nckobis.bsky.social, @garymarcus.bsky.social,
📉 The Result: A coordinated campaign that looks like spontaneous, organic public sentiment, effectively bypassing current bot detection filters.
This raises a massive regulatory paradox: How do you regulate coordinated inauthentic behavior when the "bot" is a real citizen exercising free speech?
The Operational Workflow of Cyborg Propaganda. Cyborg propaganda uses an AI multiplier to scale distinctiveness. A central coordination hub issues a single strategic directive (e.g., “Oppose the new tax bill”). This hub may draw on an AI system that monitors emerging narratives and shifts on social media, and it can vary in its degree of automation versus human involvement. To overcome recruitment bottlenecks, the system uses network harvesting, incentivizing users to supply data on friends and neighbors and linking private social graphs with public voter registries.

An AI-driven multiplier engine then processes the directive alongside individual user profiles, analyzing each participant’s posting history, syntax, and background characteristics. The system generates thousands of unique, context-aware message variations, effectively performing ‘style transfer’ to match each user’s voice. Simultaneously, it may employ gamification or monetary rewards to incentivize user engagement. Verified human users then ratify and broadcast these posts or comments. The resulting information cascade creates a manufactured consensus that exhibits the linguistic diversity of a genuine grassroots movement, bypassing duplicate-content filters and signaling authenticity to social peers.

Crucially, the system operates as a closed feedback loop: the AI monitor continuously tracks the performance of these posts on social media, feeding engagement data back into the organizer directive to adjust strategy in real time. At the same time, successful narratives are harvested to fine-tune the AI multiplier, producing increasingly persuasive content in subsequent cycles.
The Core Mechanism:
🤖 The Tech: A central "organizer" sends a directive. An AI multiplier generates thousands of unique, style-matched captions.
👨 The Human: Real, verified users review and click "post".
In our new perspective paper (preprint), my colleagues and I explore how partisan apps and Generative AI are creating a new architecture of influence. This isn’t about fake accounts; it is about verified humans using AI-generated scripts.