
Posts by Harry Yan

Large Language Models Require Curated Context for Reliable Political Fact-Checking -- Even with Reasoning and Web Search
Large language models (LLMs) have raised hopes for automated end-to-end fact-checking, but prior studies report mixed results. As mainstream chatbots increasingly ship with reasoning capabilities and ...

Check out the paper for all the details: arxiv.org/abs/2511.18749

Thanks to my collaborators @yang3kc.bsky.social, @harryyan.bsky.social, and @fil.bsky.social.

4 months ago 11 2 0 0

In collaboration with @ryanmoore.bsky.social, @fangjingtu.bsky.social, and Dr. Jeff Hancock, and supported by the Stanford Social Media Lab and @stanfordcyber.bsky.social.

1 year ago 1 1 0 0

🌐 Big picture:
This study shows we should focus on building what we call digital strength:
a holistic skill set for navigating AI-mediated information environments,
focused not just on detection skills
but also on cultivating open-minded thinking and evidentiary judgment. (10/10)

1 year ago 2 1 0 0

🎯 Policy and design takeaway:
It’s not enough to teach people how to spot AI.

We also need to help them know when to trust authentic content.
Effective interventions must combine GenAI literacy, cognitive reflection training, and demographic targeting. (9/)

1 year ago 0 0 0 0

💡 But there’s hope.
Two factors helped:
🧠 Actively Open-Minded Thinking (AOT):
A cognitive tendency to consider evidence that challenges one’s prior beliefs.
📚 GenAI knowledge:
Factual understanding of generative AI.
AOT especially helped restore trust in real images, not just spot synthetic ones. (8/)

1 year ago 0 0 0 0

👥 Who’s most vulnerable?

Older adults: more likely to doubt authentic images

Women: showed a larger accuracy gap than men

Partisans: more likely to doubt real images that conflict with their beliefs

#GenAI is amplifying existing digital and partisan divides. (7/)

1 year ago 2 0 0 0

📉 Why does this matter?
Because trust in authentic political imagery is eroding.
This isn’t just about deception—it’s about undermining visual evidence itself, leading to a "liar’s dividend":
real images get dismissed as fake. (6/)

1 year ago 0 1 0 0

📊 Key finding:
Participants over-attributed AI generation, labeling nearly 60% of all images as synthetic—even though only half were.
This "AI attribution bias" leads to:
✅ Higher accuracy detecting synthetic images
❌ Lower accuracy recognizing authentic images (5/)
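A minimal sketch (mine, not the paper's analysis) of why an attribution bias produces exactly this asymmetry: under a simple response-bias model in which a rater labels any image "AI-generated" with some fixed probability, accuracy on synthetic images equals that probability and accuracy on authentic images equals its complement. The 0.60 rate echoes the figure above; the model itself is an assumption.

```python
# Illustrative response-bias sketch (hypothetical model, not the study's analysis).
# A rater who, absent other cues, calls any image "AI-generated" with probability p
# is correct on synthetic images with probability p and on authentic images with 1 - p.

def accuracy_by_type(p_label_synthetic: float) -> dict:
    """Per-type accuracy for a rater with a fixed label-as-AI rate."""
    return {
        "synthetic images (correctly flagged)": p_label_synthetic,
        "authentic images (correctly trusted)": 1 - p_label_synthetic,
    }

for bias in (0.50, 0.60):  # unbiased vs. the ~60% over-attribution reported in the thread
    print(f"label-as-AI rate = {bias:.2f} -> {accuracy_by_type(bias)}")
```

With a 0.60 label-as-AI rate the split is 60% correct on synthetic vs. 40% on authentic, which is the same directional pattern as the finding above, even before any real detection skill enters the picture.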

1 year ago 1 3 0 0

👁️ We ran a large pre-registered experiment with 1,800 U.S. adults.
Participants evaluated political images balanced by party lean (pro-Dem vs. pro-Rep) and image type (authentic vs. AI-generated), using actual images that circulated online during the election. (4/)

1 year ago 0 1 0 0

The answer is... not exactly.

⚠️ BUT our study shows a different threat:
People have become suspicious of real images too.
Authentic visual evidence is no longer taken for granted. (3/)

1 year ago 0 1 0 0

🗳️ During the 2024 U.S. presidential election, many AI-generated political images (#GenAI) appeared on social media.
But did voters mistake them for authentic imagery? (2/)

1 year ago 0 1 0 0

“Detecting Synthetic, Doubting Authentic: AI Attribution Bias for Political Imagery”
📍 Full preprint: osf.io/preprints/os...
🧵 Here’s what we found about how #GenAI is reshaping trust in political visuals during elections: (1/)

1 year ago 7 4 10 1
IU's Observatory on Social Media defends citizens from online manipulation – the opposite of censorship
When thousands of fake accounts controlled by an unknown actor flood social media with some story, and platform algorithms amplify these messages, real...

IU's Observatory on Social Media defends citizens from online manipulation – the opposite of censorship
osome.iu.edu/research/blo...

1 year ago 106 51 0 11

One downside of submitting articles to multiple divisions is ending up with a lot more reviews to handle... Looking forward to seeing everyone in Denver next year! #ICA

1 year ago 1 0 0 0

An interesting paper about AI fact-checking from @matthewdeverna.com, @harryyan.bsky.social, @yang3kc.bsky.social, and @fil.bsky.social.

1 year ago 16 5 0 0