
Posts by Morgan Wack


📢 Call for Papers: First Annual Digital Publics Conference

📅 21–23 October 2026 | Digital Society Initiative, University of Zurich

👉 To submit: forms.cloud.microsoft/e/BxVnAMyxNh

❗Deadline for abstract submissions is 31 May 2026❗ Acceptance notifications will be sent by early July.

4 days ago 10 7 1 0
OSF

👉 The preprint is available at osf.io/preprints/so....

w/@evavogel.bsky.social, @christianpipal.bsky.social, @pwarren.bsky.social

Feedback welcome!

2 weeks ago 2 1 0 0

🧩 Fact-checkers have long struggled with reach. Our findings suggest they have an unrecognized second audience. A correction that enters a model's training data shapes the response delivered to every user who queries that topic.

2 weeks ago 2 0 1 0

🔍 A single published fact-check appears to make a difference. Debunked narratives were correctly rejected 93% of the time, compared to just 76% for unchecked narratives, with models echoing the specific vocabulary of the corrections they absorbed during training.

2 weeks ago 2 0 1 0

📊 While models correctly rejected 81% of fabrications and directly repeated disinformation only 3% of the time, their responses left users unable to distinguish truth from fiction 16% of the time.

2 weeks ago 1 0 1 0

To test the resilience of these models, we audited four frontier models' responses to narratives spread by state-backed influence operations, documenting both how this "data void" vulnerability operates in practice and whether fact-checks serve as a potential countermeasure.
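In rough outline, an audit like this loops over narratives, queries each model, and labels every response. The sketch below is purely illustrative: `query_model` is a placeholder for any model API call, and the keyword heuristic stands in for the far more careful coding a real study would use.

```python
from collections import Counter

def classify_response(response_text):
    """Crude keyword heuristic assigning one of three outcome labels.
    A real audit would rely on human coding or a stronger classifier."""
    text = response_text.lower()
    if any(k in text for k in ("false", "debunked", "no evidence", "fabricated")):
        return "rejected"   # model pushes back on the narrative
    if any(k in text for k in ("confirmed", "this is true", "indeed")):
        return "repeated"   # model echoes the disinformation
    return "ambiguous"      # user cannot tell truth from fiction

def audit(narratives, query_model):
    """Tally outcome shares across a set of disinformation narratives.
    `query_model` is any callable mapping a prompt string to a response string."""
    counts = Counter(classify_response(query_model(n)) for n in narratives)
    total = len(narratives)
    return {label: counts[label] / total
            for label in ("rejected", "repeated", "ambiguous")}
```

Plugging in a real API client for `query_model` and the study's narrative list would yield the kind of rejected/repeated/ambiguous shares reported in the thread.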

2 weeks ago 2 0 1 0

Millions of people now rely on LLMs to evaluate online claims. However, modern LLMs were trained on an open internet that state-sponsored influence operations were designed to pollute. When credible coverage of a topic is thin, fabricated content fills the void, and LLMs absorb falsehoods unopposed.

2 weeks ago 1 0 1 0
OSF

Looking for something to read after the long week? Check out our new preprint "Fact-Checks Can Help Inoculate LLMs Against Disinformation"! osf.io/preprints/so...

Read a brief summary below ->

/1

2 weeks ago 3 2 1 0
Preview
Researchers waste 80% of LLM annotation costs by classifying one text at a time Large language models (LLMs) are increasingly being used for text classification across the social sciences, yet researchers overwhelmingly classify one text per variable per prompt. Coding 100,000 te...

📊 New preprint with @evavogel.bsky.social @morganwack.bsky.social @esserfrank.bsky.social: we tested whether you can batch-classify multiple texts in a single LLM prompt without losing coding accuracy.

Short answer: yes, up to about 100 items per prompt.

Paper: arxiv.org/abs/2604.03684
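As a rough illustration of what batch classification looks like in practice: pack many texts into one prompt, then map the numbered answers back onto the inputs. The prompt wording, label set, and answer format below are my own placeholders, not the paper's actual protocol.

```python
def build_batch_prompt(texts, labels):
    """Pack many texts into one classification prompt instead of one per call."""
    header = [
        f"Classify each numbered text as one of: {', '.join(labels)}.",
        "Reply with one line per text in the form '<number>: <label>'.",
        "",
    ]
    body = [f"{i}. {t}" for i, t in enumerate(texts, start=1)]
    return "\n".join(header + body)

def parse_batch_response(response, n_texts):
    """Map the model's numbered answer lines back onto the input order."""
    answers = {}
    for line in response.splitlines():
        num, sep, label = line.partition(":")
        if sep and num.strip().isdigit():
            answers[int(num.strip())] = label.strip()
    # None marks items the model skipped, so they can be retried individually
    return [answers.get(i) for i in range(1, n_texts + 1)]
```

One API call then covers the whole batch, so per-item cost falls roughly in proportion to batch size, with the risk (per the preprint's ~100-item ceiling) that very long batches degrade accuracy or drop items.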

2 weeks ago 7 3 1 0

Today, we presented the main results of the Mental Health Days Study 2025 (N = 8,177).

Results

> In May 2025, Austria implemented a nationwide smartphone ban at schools
> Compared to 2024, smartphone use went down by 30 mins
> Life satisfaction went up (5.36 to 5.52)
> Depression rates fell (15% to 12%)

3 months ago 29 10 2 0

Interesting unpacking of deepfakes:

- darkfakes
- glowfakes
- foefakes
- fanfakes

“A blanket approach to ‘fighting deepfakes’ risks treating satirical content the same as malicious attacks”

@morganwack.bsky.social & co in @techpolicypress.bsky.social

www.techpolicy.press/scrutinizing...

5 months ago 41 10 2 2

We've been following deepfakes for the last 7 years. This article aims to shed additional light on the topic by:

1) creating a conceptual typology of deepfakes

2) coining new concepts like 'glowfakes' and 'fanfakes'

3) & analyzing deepfakes from the 2024 elections

@grailcenter.bsky.social

5 months ago 18 14 0 1
Preview
Scrutinizing the Many Faces of Political Deepfakes | TechPolicy.Press Morgan Wack, Christina Walker, Alena Birrer, Kaylyn Jackson Schiff, Daniel Schiff, and JP Messina systematically analyzed political deepfakes.

'Darkfakes,' 'Foefakes,' 'Fanfakes,' and 'Glowfakes': Morgan Wack, Christina Walker, Alena Birrer, Kaylyn Jackson Schiff, Daniel Schiff, and JP Messina systematically analyzed political deepfakes and developed a classification that categorizes them along key dimensions.

5 months ago 27 12 2 1
Preview
The 2020 US election shows how state election policies can fuel conspiracy theories about voting | USAPP States that allowed pre-Election Day processing saw a reduction of over a third in expected misinformation compared to states with restrictive rules.

Recently published work from colleagues Morgan Wack (postdoc at University of Zurich) & Joey Schafer (UW PhD candidate) showing how state election policies that delayed vote counting fueled rumoring and conspiracy theorizing around the 2020 election: blogs.lse.ac.uk/usappblog/20...

6 months ago 60 17 2 0
Preview
The 2020 US election shows how state election policies can fuel conspiracy theories about voting | USAPP States that allowed pre-Election Day processing saw a reduction of over a third in expected misinformation compared to states with restrictive rules.

The 2020 US election shows how state election policies can fuel conspiracy theories about voting, write @morganwack.bsky.social of @ikmz.bsky.social and @schafer.bsky.social of @uwnews.uw.edu

blogs.lse.ac.uk/usappblog/20...

6 months ago 4 2 0 0

How often do you see papers that suggest easy policies that could reduce electoral misinformation? Here's one I worked on with a great team out of UW and led by @morganwack.bsky.social and @schafer.bsky.social

9 months ago 33 15 3 0

Thrilled to finally see this paper out in print several years after @schafer.bsky.social and I started this project alongside @ikennedy.bsky.social, @beeeeeers.bsky.social, @emmaspiro.bsky.social & @katestarbird.bsky.social! Unfortunately the detrimental policies we discuss remain relevant.

9 months ago 22 4 0 0
Preview
<em>Policy Studies Journal</em> | PSO Public Policy Journal | Wiley Online Library Can state election policies affect the spread of misinformation? This paper studies the role played by ballot processing policies, which determine when ballots can be examined and organized, in the o...

Legislating Uncertainty: New paper about the 2020 election, showing how laws in certain states (specifically laws that delayed the counting of mail-in ballots) increased uncertainty about election results and contributed to rumoring about election integrity: onlinelibrary.wiley.com/doi/10.1111/...

9 months ago 44 17 4 3

Proud to have co-led this paper with @morganwack.bsky.social (and other coauthors @ikennedy.bsky.social @beeeeeers.bsky.social @emmaspiro.bsky.social @katestarbird.bsky.social) looking at the impacts of state-level election laws on uncertainty and election integrity rumors!

9 months ago 16 5 0 1
Preview
Russian propaganda campaign used AI to scale output without sacrificing credibility, study finds A pro-Kremlin influence campaign used AI to boost disinformation output without undermining credibility, according to new research.

Russian propaganda campaign used AI to scale output without sacrificing credibility, study finds

1 year ago 8 3 0 0
Screenshot from DCWeekly in October 2023, accessed through the Internet Archive.

A study of a propaganda site with ties to Russia shows that using AI allows propagandists to dial up the volume of their content without sacrificing persuasiveness. The authors call for action to combat the threat. In PNAS Nexus: academic.oup.com/pnasnexus/ar...

1 year ago 2 1 0 0
Violin plot of NLI-derived topic scores for June (prior to AI adoption) and October (after AI adoption) of 2023

A study of a Russian-backed propaganda outlet finds that AI is already being used to enhance messaging and expand disinformation campaigns, raising concerns about its growing impact on global influence operations.

In @sciencex.bsky.social: phys.org/news/2025-04...

1 year ago 4 2 0 0
Preview
Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign Abstract. Can AI bolster state-backed propaganda campaigns, in practice? Growing use of AI and large language models has drawn attention to the potential f

Here is the link to the full (open-access) paper! 🔗
academic.oup.com/pnasnexus/ar...

We welcome feedback & potential collaboration focused on how to counter emerging AI-driven disinformation campaigns!

1 year ago 0 0 0 0

Finding Three 📝: Even with the shift to AI, the persuasive potential and credibility of the articles persisted. This suggests that even while rapidly scaling article production, the website did not need to sacrifice its perceived authenticity or potential impact. 6/

1 year ago 0 0 1 0

Finding Two 📊: AI use corresponded with greater topic breadth. By rewriting stories, the website covered more diverse subjects (from gun crime to the invasion of Ukraine). Prompt leaks also suggest AI was used to rate potential source materials by their alignment with campaign goals. 5/

1 year ago 2 0 1 0

Finding One 📈: AI use significantly increased the quantity of disinformation. This aligns with the idea that generative models reduce the cost/time of writing, editing, and curating. Once the site adopted LLM tools, weekly post counts soared. 4/

1 year ago 0 0 1 0

We focus on a site identified by the Clemson Forensics Hub that presented itself as a genuine U.S. news outlet but was actually part of a Russian-affiliated influence operation. By pinpointing the site's transition from human editing to LLM-edited content, we show: 3/

1 year ago 0 0 1 0

There have been growing concerns about the use of large language models (LLMs) in the production of disinformation, but real-world evidence has been difficult to track. Our paper provides a direct look at a Russian-linked campaign which used AI tools to target Americans. 2/

1 year ago 0 0 1 0

🚨 Excited to see our new paper out at @pnasnexus.org w/@pwarren.bsky.social, Darren Linvill, & Carl Ehrett!

Using data from a Russia-backed influence operation running puppet website DCWeekly, we show how LLMs are being used to scale global disinfo campaigns: 1/ 🧵
academic.oup.com/pnasnexus/ar...

1 year ago 6 2 2 0