
Posts by Digital Security Research Collaboratory Neu-Ulm

Disinformation isn't just a geopolitics problem; it affects local democracy too. That's why this talk was organised by municipal council factions in Ulm and Neu-Ulm.

Read the full article (German, paywall): swp.de/lokales/ulm/...

#TrustAndSafety #AI #Disinformation 5/5

1 month ago

Risius argues for proactive narrative-building. Canada's "Elbows Up" slogan and mosaic metaphor during the US tariff conflict called for social cohesion rather than correcting falsehoods one by one. 4/5

1 month ago

A key challenge: "borderline content." Misleading, but in a grey area that can never be fully deleted. Engagement rates spike near the line of what's clearly prohibited. 3/5

1 month ago

The disinfo playbook has shifted:

- 2016: State-run troll farms & bots
- 2020: Real people spreading falsehoods under their own names. Getting caught was no longer seen as a problem.
- 2024: Buying influencers turned out to be easier than running bots.

Next: AI-generated influencers. 2/5

1 month ago

Correcting fake news isn't enough. We need our own narratives.

Our colleague @risius.bsky.social spoke to local politicians in Ulm/Neu-Ulm about the evolution of disinformation and what makes societies more resilient.

📰 Covered by Südwest Presse 1/5

1 month ago
How ‘looksmaxxing’ self-improvement apps are marketing misogyny to young men
Apps that promise young men ‘ascension’ to greater attractiveness can be a funnel to toxic incel ideology.

As the authors argue, these apps may function as a funnel into extremist incel worldviews while remaining openly accessible in app stores and circulating through viral platform cultures.

Read: doi.org/10.64628/AA.... 5/5

1 month ago

The article highlights three key mechanisms behind this mainstreaming:

- quantification
- gamification
- reframing

It also points to the particular risks this may pose for young and vulnerable users. 4/5

1 month ago

Their analysis suggests these apps go beyond ordinary appearance advice. By scoring faces, assigning ranks and promoting ideas like “ascension”, they can repackage core elements of incel ideology into a gamified, monetised self-optimisation format. 3/5

1 month ago

In a new article for @aunz.theconversation.com - @cvdavid.bsky.social, @risius.bsky.social and Daline Ostermaier examine how these apps can make misogynistic incel- and blackpill-linked ideas more accessible to wider audiences. 2/5

1 month ago

AI-powered “looksmaxxing” apps are increasingly appearing in mainstream platform environments. Marketed as self-improvement tools, they promise facial ratings, attractiveness scores and advice on how to “optimise” appearance. 1/5

1 month ago
Age verification online can be done safely and privately. Here’s how
It’s possible to provide truly anonymous age checks online – but it takes investment.

As age assurance becomes more widespread, the key challenge will be aligning youth protection goals with privacy-preserving technical and institutional architectures.

📄 Read the article:
doi.org/10.64628/AA.... 5/5

1 month ago

Their analysis suggests that modern cryptographic approaches can, in principle:

• verify age with high assurance
• avoid disclosing identity or date of birth
• prevent cross-platform tracking

At the same time, important governance questions remain. 4/5
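The three properties above can be illustrated with a toy token scheme. This is purely an illustration, not the cryptography the article proposes, and all names (`issue_token`, `verify_token`, the keys) are hypothetical. A real system would use public-key signatures or zero-knowledge proofs so the platform never holds an issuer secret; here a shared-key HMAC stands in to keep the sketch self-contained:

```python
import hmac
import hashlib
import json

ISSUER_KEY = b"issuer-signing-key-demo"      # held by the trusted issuer
PSEUDONYM_KEY = b"pseudonym-derivation-key"  # held only by the issuer

def issue_token(user_id: str, birth_year: int, platform: str,
                now_year: int = 2025) -> dict:
    """Issuer checks the real date of birth, then emits only the predicate."""
    claim = {
        # Per-platform pseudonym: the same user gets unlinkable IDs on
        # different platforms, preventing cross-platform tracking.
        "pseudonym": hmac.new(PSEUDONYM_KEY,
                              f"{user_id}|{platform}".encode(),
                              hashlib.sha256).hexdigest(),
        "platform": platform,
        "over_18": (now_year - birth_year) >= 18,  # predicate only, no DOB
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_token(token: dict, platform: str) -> bool:
    """Platform checks the attestation; it never sees a name, user ID,
    or date of birth -- only the signed 'over 18' predicate."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        token["sig"],
        hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest())
    return (ok_sig
            and token["claim"]["platform"] == platform
            and token["claim"]["over_18"])

t_a = issue_token("alice", 1990, "platform-A")
t_b = issue_token("alice", 1990, "platform-B")
assert verify_token(t_a, "platform-A")      # age verified with assurance
assert "birth_year" not in json.dumps(t_a)  # DOB never leaves the issuer
assert t_a["claim"]["pseudonym"] != t_b["claim"]["pseudonym"]  # unlinkable
```

The governance questions the post flags show up even in this sketch: someone must decide who may act as issuer, and the issuer still learns which platforms request pseudonyms.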

1 month ago

In a recent article for The Conversation, one of our colleagues, Marten Risius, together with Johannes Sedlmeir (University of Münster), discusses why privacy-preserving cryptographic age assurance may offer a fundamentally different design path. 3/5

1 month ago

Much of the current debate focuses on approaches such as:

• facial age estimation
• biometric scans
• ID uploads to private providers

These methods raise well-known concerns around privacy, accuracy, and governance. 2/5

1 month ago

🔐 Age verification is moving into the platform mainstream, with services like Discord beginning broader rollouts. The policy objective is widely shared: better protection of minors online. The implementation path, however, remains highly contested. 1/5

1 month ago
Conceptualizing Information Warfare in the Digital Age: Implications for the IS Discipline
This paper presents a conceptualization of information warfare (IW) for information systems (IS) research, positioning it as a sociotechnical construct that integrates cognitive and psychological, cyb...

This work was presented by our colleague Deinera Jechle at the ACIS Conference in Australia (December). The paper was co-authored by Prof. Heiko Gewald and Prof. @risius.bsky.social. You can find the full paper at the following link: aisel.aisnet.org/acis2025/59/ 4/4

2 months ago

By synthesizing how information warfare is conceptualized across disciplines, this research offers a sociotechnical lens that connects existing work and opens new directions for addressing strategic manipulation, societal resilience, and cross-sector governance. 3/4

2 months ago

Using a hermeneutic literature review, the authors show that IW is not a single or isolated phenomenon but a sociotechnical construct that integrates psychological, cognitive, and cyber warfare. 2/4

2 months ago

Our colleagues Deinera Jechle and Dr. @afrenzel.bsky.social published a new paper on how phenomena often studied separately (e.g., cyberattacks, disinformation, propaganda) are frequently coordinated as part of information warfare (IW). 1/4

2 months ago

Understanding these dynamics helps counter-extremism practitioners design better interventions at the critical threshold between passive consumption and active participation. Full paper coming soon to the AIS eLibrary! 7/7

2 months ago

Key insight: Community goals shape socialization. Incels prioritize protecting their emotional "safe space"; Stormfront prioritizes recruitment for political goals. Same suspicion of outsiders, different strategies. 6/7

2 months ago

On Stormfront, onboarding is "mission-driven": newcomers introduce themselves, are judged on racial criteria and ideological fit, then guided through extensive self-study materials. Suitable recruits are welcomed warmly. 5/7

2 months ago

On incels[dot]is, newcomers face a "trial by fire": constant hostility tests commitment. New users are labeled "GrAYcels" and must endure verbal abuse until they reach 500 posts. Hostility serves as both defense mechanism and selection tool. 4/7

2 months ago

Using virtual ethnographic techniques, they compared two ideologically distinct forums: a misogynistic incel community and a white supremacist forum. Both are suspicious of newcomers, but their onboarding strategies differ dramatically. 3/7

2 months ago

Online radicalization is a growing concern: 92% of terror convicts in England and Wales between 2019 and 2021 radicalized at least partly online. But we know little about how extremist forums actually socialize newcomers. 2/7

2 months ago

How do extremist online communities decide who gets in?

Our colleagues @cvdavid.bsky.social and @risius.bsky.social researched gatekeeping practices in extremist forums. Christopher presented their findings at #ACIS2025 in Australia 🧵 1/7

2 months ago
Fachtagung 2025 | Herausforderungen und Risiken im digitalen Raum – Prof. Dr. Marten Risius (YouTube video by Forum Bildung Digitalisierung)

Social media isn't broken; it's facing a critical stress test. We're working on solutions for information integrity in an age of AI-generated manipulation.

Thank you @forumbildig.bsky.social for the invitation!

🎥 Full keynote (in German): youtube.com/watch?v=i1ul...

#TrustAndSafety #AI 4/4

4 months ago

Traditional fact-checking struggles with hyper-personalized realities. The new gatekeepers aren't editorial boards; they're algorithms that prioritize engagement over truth.

This keynote offers insight into our research on digital abuse mechanics and countermeasures. 3/4

4 months ago

Four manipulation shifts:

- Synthetic influencers: AI-generated personas pushing narratives
- Borderline content: Outrage-optimized but policy-compliant
- Algorithmic amplification: Engagement over quality
- LLM data poisoning: Fake sites built for AI scraping

2/4

4 months ago

“Who controls social media, controls reality.”

Our colleague Marten Risius gave a keynote at #DimensionDigitalisierung in Berlin on our Trust & Safety research at DSRC – from troll farms to AI-generated influencers with fabricated personas. 1/4

📸 Photo: Phil Dera / CC BY 4.0

4 months ago