
Posts by Catherine King


One of my favorite papers got published 🤓 It covers a lot of ground, and it’s the best summary of my views on misinformation and what to do about it. Give it a read :)

🔓 osf.io/preprints/ps...
👉 doi.org/10.1177/1461...

2 weeks ago 126 45 1 6

🧵 New report just dropped 🚨 "Fractured Reality: How Democracy Can Win the Global Struggle Over the Information Space" — from the EU Joint Research Centre, led by Mario Scharfbillig and me. A landmark read for anyone working on disinformation, platforms & democracy. 👇

1/10

1 week ago 369 205 7 23

Check out our latest work on how perceived fairness, effectiveness, and intrusiveness influence public support for misinformation interventions!

3 weeks ago 1 1 0 0

The Coalition for Independent Technology Research has filed a lawsuit with @knightcolumbia.org and @protectdemocracy.org challenging a US State Department policy to revoke or deny visas of non-citizen researchers because of their work studying the societal impacts of technology. (1/4)

1 month ago 30 16 1 4

🚨New WP "@Grok is this true?"
We analyze 1.6M fact-check requests on X (Grok & Perplexity)
📌Usage is polarized; Grok users more likely to be Reps
📌BUT Rep posts rated as false more often, even by Grok
📌Bot agreement with fact-checks is OK but not great; APIs match fact-checkers
osf.io/preprints/ps...

2 months ago 119 48 2 3
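The post above reports that bot agreement with professional fact-checks is "OK but not great." One standard way to quantify that kind of agreement is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch (the labels and data here are hypothetical illustrations, not values from the paper):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's label marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical verdicts: a chatbot vs. a professional fact-checker.
bot = ["false", "false", "true", "true"]
checker = ["false", "true", "true", "true"]
print(cohen_kappa(bot, checker))  # 0.5: moderate agreement, "OK but not great"
```

Values near 1 indicate near-perfect agreement, values near 0 indicate agreement no better than chance.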

Gosh, post 2/9 of the thread below had a link that required authentication, here is the direct link to the piece: www.science.org/doi/10.1126/...

2 months ago 17 7 0 2

Grok fact-checks our paper on Grok fact-checking - and it approves!

2 months ago 28 7 1 0
Teaching People to Counter Misinformation, Not Just Spot It - Center for Informed Democracy & Social-cybersecurity (IDeaS) - Carnegie Mellon University Spotlight on new work from IDeaS researchers

We also found that willingness to intervene depended strongly on who posted the misinformation, with participants indicating they were much more likely to counter close contacts.

See the paper and blog post for more details: www.cmu.edu/ideas-social...

2 months ago 0 0 0 0

In an experiment with government analysts, a short interactive training increased reported willingness to engage in countering actions (e.g., commenting a correction, messaging the poster) when encountering misinformation on social media.

2 months ago 0 0 1 0
Promoting Social Corrections: A Media Literacy Intervention for Misinformation on Social Media This paper examines a new approach to traditional media literacy training: encouraging social media users to engage in social corrections or other countermeasures online when they encounter misinforma...

Read my latest paper on media literacy, presented at last year's SBP-BRiMS Conference!

Instead of studying whether training helps people detect misinformation, we investigate whether it increases willingness to actively intervene when people see it.
doi.org/10.1007/978-...

2 months ago 1 1 1 0

Finding #3: Support for user-driven responses to misinformation is widespread across the political spectrum. This suggests counter-misinformation efforts that empower everyday users may have broad public legitimacy. 4/4

3 months ago 0 0 0 0

Finding #2: Relationships matter. People say they’re much more likely to correct misinformation when it’s posted by someone close (friends and family) than by acquaintances or strangers. This suggests that social proximity may shape the use of interventions. 3/4

3 months ago 0 0 1 0

Finding #1: There’s a large gap between beliefs and actions. People strongly value fighting misinformation, but report doing less themselves than what they think others should do. Good intentions don’t always translate into action. 2/4

3 months ago 0 0 1 0
Understanding User Behavior in the Fight Against Social Media Misinformation - Center for Informed Democracy & Social-cybersecurity (IDeaS) - Carnegie Mellon University Spotlight on new work from IDeaS researchers

I wanted to highlight one of my favorite papers that I co-authored last year with Samantha Phillips. Surveying 1,000+ U.S. social media users, we examined how beliefs and relationships shape whether people ignore, report, or correct misinformation. www.nature.com/articles/s41... 1/4

3 months ago 3 0 1 0

🚨New WP🚨
We examine news sharing on 7 platforms:
1)Right-leaning platforms=lower quality news
2)Echo-platforms: Right-leaning news gets more engagement on right-leaning platforms, vice-versa for left-leaning
3)But low-quality news gets more engagement EVERYWHERE, even BlueSky!
osf.io/preprints/ps...

1 year ago 171 79 7 3
What our first measurement says about disinformation on major platforms in Europe - Science Feedback Science Feedback and partners have released a first measurement of Structural Indicators across six Very Large Online Platforms (VLOPs) in four EU member states (France, Spain, Poland, Slovakia). Belo...

🚨 New report out: the first cross-platform, cross-country baseline on misinformation in Europe

Based on a large-scale data analysis: ~2.6M posts (24B views) collected on Facebook, Instagram, LinkedIn, TikTok, X & YouTube

science.feedback.org/first-measurement-disinformation-major-platforms-europe

6 months ago 26 16 1 3
Proceedings of the ICWSM Workshops

Mapping the Scientific Literature on Misinformation Interventions: A Bibliometric Review (COMPASS workshop)
workshop-proceedings.icwsm.org/abstract.php...
@kingcatherine.bsky.social

9 months ago 2 1 1 0

Excited to have two workshop papers and one main conference paper I've been involved in presented at @icwsm.bsky.social! Thanks @kingcatherine.bsky.social and @evanup.bsky.social for letting me tag along. Details below.

9 months ago 9 3 1 0

Psychological inoculation is a very popular intervention against online misinfo, but it hasn't been tested using real-world outcomes in realistic scenarios.

In a new paper just published in PNAS Nexus, this is what we did: academic.oup.com/pnasnexus/ar...

Short version: It didn't really work.

10 months ago 202 68 5 3
Judge: 'Probable cause' to hold U.S. in contempt over Alien Enemies Act deportations The government sent several planeloads of alleged gang members to El Salvador, including 137 people under the act, the White House said at the time.

JUST IN: Judge finds 'probable cause' to hold U.S. in contempt over Alien Enemies Act deportations

1 year ago 37689 8119 1225 559
Modeling the amplification of epidemic spread by individuals exposed to misinformation on social media - npj Complexity

New paper: Modeling the amplification of epidemic spread by individuals exposed to misinformation on social media 🧪

Simulations informed by social media data yield a worst-case bound on additional infections due to exposure to online vaccine misinfo. It's not good.

www.nature.com/articles/s44...

1 year ago 63 19 5 5
A path forward on online misinformation mitigation based on current user behavior - Scientific Reports

This study examines individual-level interventions against #misinformation on social media, showing that encouraging people to respond to #misinformation can reduce its spread and prevent belief in it.

www.nature.com/articles/s41...

1 year ago 1 1 0 0
A path forward on online misinformation mitigation based on current user behavior - Scientific Reports

Needed: ACTION!

"...participants believe individuals should expend more effort responding to misinformation on social media than those individuals report actually doing..."

Study: Online misinformation mitigation based on current user behavior www.nature.com/articles/s41...

1 year ago 30 9 3 1
IU's Observatory on Social Media defends citizens from online manipulation – the opposite of censorship When thousands of fake accounts controlled by an unknown actor flood social media with some story, and platform algorithms amplify these messages, real...

IU's Observatory on Social Media defends citizens from online manipulation – the opposite of censorship
osome.iu.edu/research/blo...

1 year ago 106 51 0 11

🚨New WP🚨
Remember Musk+Zuck+Trump+Jordan etc crying fact-checker bias b/c Reps were flagged more than Dems? We analyzed Community Notes on Musk's X and guess what: posts flagged as "misleading" are 67% more likely to be written by Reps! The issue is Reps, not fact-checkers...
osf.io/preprints/ps...

1 year ago 455 121 6 9
Americans Expect Social Media Content Moderation Meta is ending fact-checking on Facebook and Instagram, but a new BU poll finds the public backs independent verification of social media content

After Meta ended its fact-checking partnerships, Mark Zuckerberg said the program had never been broadly accepted. According to a new poll by @bostonu.bsky.social, nearly 2 in 3 US adults agree that fact-checkers *should* verify claims on social media. A 🧵 (1/n). www.bu.edu/articles/202...

1 year ago 199 61 9 6

🚨OpEd+data: Meta is out of step with public opinion🚨
Zuck cut moderation b/c he said people no longer want it. But he's wrong!
We polled 1k Americans and most people, including majority of Reps:
i) want content moderation
ii) don't want Community Notes w/o fact-checkers
thehill.com/opinion/tech...

1 year ago 234 80 6 7

New meta-analysis shows media literacy & psychological inoculation interventions significantly & substantially improve (a) misinformation resilience (d = 0.60) and (b) misinformation discernment (d = 0.76), and (c) decrease sharing (d = 1.04).

Great news for science!

journals.sagepub.com/doi/epdf/10....

1 year ago 101 34 3 4
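For readers unfamiliar with the effect sizes cited above: Cohen's d is the standardized difference between treatment and control group means, scaled by the pooled standard deviation. This is the standard textbook definition, not a formula taken from the meta-analysis itself:

```latex
d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

By the common convention, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, so the reported effects (0.60 to 1.04) range from medium to large.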
Meta’s new hate speech rules allow users to call LGBTQ people mentally ill Changes to its hate speech guidelines were among broader policy shifts Meta made to its moderation practices.

Removing the fact-checking program from Meta platforms wasn't sufficient: you can now freely claim that women are "household objects" and that members of the LGBTQ community are "mentally ill". What a win for free speech. Zuckerberg is revealing his true colors.

www.nbcnews.com/tech/social-...

1 year ago 72 24 4 1