Posts by Stanford Tech Impact and Policy Center
A new experimental study published in the Journal of Risk Research by @freiling.bsky.social of the University of Utah and TIP Center Postdoctoral Scholar Anja Stevic explores how people make decisions about sharing online, including the role emotions can play.
Among the key takeaways from their experiment with 500+ #SocialMedia users, the researchers found that anxiety actually makes people more likely to share on social media.
Why? Sharing can act as a coping mechanism, as anxiety pushes people to seek out connection.
Read the research summary ⤵️
What happens when #AI shapes human disagreements?
Join us on Tuesday, April 21 for a seminar with @jonathanstray.bsky.social of the Center for Human-Compatible AI at @ucberkeleyofficial.bsky.social to explore research offering a glimpse into the turbulent future of AI-mediated conflict.
RSVP ⤵️
Applying #SelfAffirmationTheory, TIP Center Postdoctoral Scholar Anthony Chen and Catalina Toma of the University of Wisconsin-Madison examined what happens when users engage with their own Instagram self-presentation before viewing the profile of a more successful peer.
The answer? The first activity can buffer against the harm typically caused by the second.
Read the research summary ⤵️
Is #SocialMedia good or bad for your wellbeing? A decade of research has produced conflicting answers.
A new TIP Center study suggests one reason for the confusion: social media's effects on wellbeing may depend less on how much time you spend online & more on what you do & in what order you do it.
Join us on April 14 for a seminar with David Figlio of @urochester.bsky.social, who will share findings from his study examining the causal effects of #SchoolPhoneBans on student test scores, suspensions, and absences.
Register now to attend in person at Stanford or online!
🎟️ stanford.io/3NN5AIc
CA State Assembly Member @asmlowenthal.bsky.social dropped by for a visit with the TIP Center team.
We had a thoughtful conversation with him about our recent work as the lead academic partner for the evaluation of the Australian #SocialMedia ban for kids under the age of 16.
As U.S. states consider whether to institute age-based social media restrictions, as Assembly Member Lowenthal's A.B. 1709 would do in California, the need for robust evidence on the effects of these policies is greater than ever.
We shared some of our recent research and findings related to online harms and the impacts of #SocialMediaDelay.
We are grateful for the opportunity to discuss our Australia evaluation work with Assembly Member Lowenthal, and are always encouraged to see policymakers engaged with research evidence.
Learn more about the bill: bit.ly/4dSFSN5
How can #AI help us become more efficient—and in what ways can it actually slow us down?
TIP Center Director Jeff Hancock joined TBS CROSS DIG with Bloomberg to discuss AI #workslop, the AI shadow economy, how to effectively use AI in the workplace, and more.
Watch the interview ⤵️
Coming up today at 12PM PT — join us at Stanford or online!
Deadline coming up! Always one of the best conferences of the year.
In March, TIP Center Director Jeff Hancock traveled to London to share expertise at a special evidence session hosted by the UK Parliament’s Science, Innovation, and Technology Committee exploring whether the UK Government should ban access to #SocialMedia for children under the age of 16.
Jeff participated in a panel discussing Australia’s social media ban for children under the age of 16, which took effect in December 2025 and for which the Stanford Social Media Lab is serving as the lead academic partner in evaluating its impact.
Jeff and fellow panelists, including Australia’s eSafety Commissioner Julie Inman Grant, provided testimony on Australia’s experience of the ban so far and how it will be evaluated.
Read the transcript from the panel and watch the full committee meeting. ⤵️
Join us on Tuesday, April 7 as our #SpringSeminarSeries opens with a talk by Robbie Torney of Common Sense Media!
Robbie will examine what we've learned about the gap between current #AI design and kid and teen safety—and the implications for AI development, deployment, and policy.
RSVP ⤵️
One month left to submit your proposal to the 5th Annual #TSRConf!
Join leading researchers, practitioners, policymakers, and platform leaders shaping the future of online #TrustAndSafety and #DigitalGovernance.
📩 Submit your proposal and be part of the conversation!
🔗 https://bit.ly/4agUJ0f
Check out our latest work on how perceived fairness, effectiveness, and intrusiveness influence public support for misinformation interventions!
Out now in the Spring 2026 issue of #JOTS, a new study by @kingcatherine.bsky.social, Samantha Phillips, and Kathleen Carley surveyed active American #SocialMedia users to understand what drives public support for #misinformation interventions.
#TrustAndSafety
Key findings include:
➤ Support is shaped by perceived fairness, effectiveness, and intrusiveness, with fairness being the most important factor overall.
➤ Interventions that preserve user agency and transparency, like content labeling and fact-checking ads, were more popular than content or account removal.
➤ Democrats and women showed greater support for interventions than Republicans and men, though fairness was the strongest predictor across all groups, especially for Republicans and men.
Read the full article: tsjournal.org/index.php/jo...
Authored by Fatmaelzahraa Eltaher, Rahul Krishna Gajula, Luis Miralles-Pechuán, Patrick Crotty, Juan Martínez-Otero, Christina Thorpe, and Susan Mckeever
Key findings include:
➤ Between 7.83 percent and 15 percent of videos recommended to accounts set as 13-year-olds were classified as harmful, which is roughly double the rate shown to 18-year-old accounts under the same conditions.
➤ Minor accounts encountered their first harmful video after about 3 minutes of passive scrolling on YouTube Shorts and about 3 minutes 49 seconds on TikTok's "For You" feed, with no searching required.
➤ The most common harmful exposure was low-severity but recurring material, such as mature themes and hate-related content, which the authors warn may normalize harm through repetition.
Read the full article: tsjournal.org/index.php/jo...
#JOTS #TrustAndSafety #SocialMedia #SocialMediaSafety