
Posts by Stanford Tech Impact and Policy Center

OSF

Explore the full article ⤵️

5 hours ago
Anxious but Posting? The Psychology of Sharing Online: Why feeling uneasy and expecting social rewards can make us more likely to share about political topics online

Among the key takeaways from their experiment with 500+ #SocialMedia users, the researchers found that anxiety actually makes people more likely to share on social media.

Why? Sharing can act as a coping mechanism as anxiety pushes people to seek out connection.

Read the research summary ⤵️

5 hours ago

A new experimental study published in the Journal of Risk Research by @freiling.bsky.social of the University of Utah and TIP Center Postdoctoral Scholar Anja Stevic explores how people make decisions about sharing online, including the role emotions can play.

5 hours ago
Jonathan Stray | AI Can Make Conflict Worse or Better

What happens when #AI shapes human disagreements?

Join us on Tuesday, April 21 for a seminar with @jonathanstray.bsky.social of the Center for Human-Compatible AI at @ucberkeleyofficial.bsky.social to explore research offering a glimpse into the turbulent future of AI-mediated conflict.

RSVP ⤵️

3 days ago
The combined well-being effects of social media activities: how self-affirmation can buffer against upward social comparisons on Instagram
Abstract. Social media activities do not occur in isolation, and it is possible that the well-being effects of an initial activity modify the effects of a…

Read the full journal article via @hcr-journal.bsky.social ⤵️

1 week ago
When Self-Affirmation Meets Upward Social Comparison: The emotional experience of social media may depend on what you do first

The answer? The first activity can buffer against the harm typically caused by the second.

Read the research summary ⤵️

1 week ago

Applying #SelfAffirmationTheory, TIP Center Postdoctoral Scholar Anthony Chen and Catalina Toma of the University of Wisconsin-Madison examined what happens when users engage with their own Instagram self-presentation before viewing the profile of a more successful peer.

1 week ago

Is #SocialMedia good or bad for your wellbeing? A decade of research has produced conflicting answers.

A new TIP Center study suggests one reason for the confusion: social media's effects on wellbeing may depend less on how much time you spend online & more on what you do & in what order you do it.

1 week ago

Join us on April 14 for a seminar with David Figlio of @urochester.bsky.social, who will share findings from his study examining the causal effects of #SchoolPhoneBans on student test scores, suspensions, and absences.

Register now to attend in person at Stanford or online!

🎟️ stanford.io/3NN5AIc

1 week ago
Bill Text - AB-1709: Covered platforms: account creation: age restriction.

We are grateful for the opportunity to discuss our Australia evaluation work with Assembly Member Lowenthal, and are always encouraged to see policymakers engaged with research evidence.

Learn more about the bill: bit.ly/4dSFSN5

1 week ago

As U.S. states consider whether to institute age-based social media restrictions, as Assembly Member Lowenthal's A.B. 1709 would do in California, robust evidence on the effects of these policies is more important than ever.

1 week ago

We shared some of our recent research and findings related to online harms and the impacts of #SocialMediaDelay.

1 week ago

CA State Assembly Member @asmlowenthal.bsky.social dropped by for a visit with the TIP Center team.

We had a thoughtful conversation with him about our recent work as the lead academic partner for the evaluation of the Australian #SocialMedia ban for kids under the age of 16.

1 week ago
[The More You Use AI, the More "Waste" You Create] The "Workslop" Pitfall of ChatGPT and Gemini / ¥1.4 Billion in Annual Losses from Correction Costs / Stanford Professor: "Raise Productivity with a 'Pilot Mindset'" [1on1]
YouTube video by TBS CROSS DIG with Bloomberg

How can #AI help us become more efficient—and in what ways can it actually slow us down?

TIP Center Director Jeff Hancock joined TBS CROSS DIG with Bloomberg to discuss AI #workslop, the AI shadow economy, how to effectively use AI in the workplace, and more.

Watch the interview ⤵️

1 week ago

Coming up today at 12PM PT — join us at Stanford or online!

1 week ago

Deadline coming up! Always one of the best conferences of the year.

1 week ago

Jeff and fellow panelists, including Australia’s eSafety Commissioner Julie Inman Grant, provided testimony on Australia’s experience of the ban so far and how it will be evaluated.

Read the transcript from the panel and watch the full committee meeting. ⤵️

2 weeks ago

Jeff participated in a panel discussing Australia's social media ban for children under the age of 16, which took effect in December 2025; the Stanford Social Media Lab is serving as the lead academic partner evaluating its impact.

2 weeks ago

In March, TIP Center Director Jeff Hancock traveled to London to share expertise at a special evidence session hosted by the UK Parliament’s Science, Innovation, and Technology Committee exploring whether the UK Government should ban access to #SocialMedia for children under the age of 16.

2 weeks ago
Robbie Torney | What Three Years of AI Risk Assessments Teach Us About Safety by Design for Kids and Teens

Join us on Tuesday, April 7 as our #SpringSeminarSeries opens with a talk by Robbie Torney of Common Sense Media!

Robbie will examine what we've learned about the gap between current #AI design and kid and teen safety—and the implications for AI development, deployment, and policy.

RSVP ⤵️

2 weeks ago

One month left to submit your proposal to the 5th Annual #TSRConf!

Join leading researchers, practitioners, policymakers, and platform leaders shaping the future of online #TrustAndSafety and #DigitalGovernance.

📩 Submit your proposal and be part of the conversation!

🔗 https://bit.ly/4agUJ0f

3 weeks ago

Check out our latest work on how perceived fairness, effectiveness, and intrusiveness influence public support for misinformation interventions!

3 weeks ago
Public Support for Misinformation Interventions Depends On Perceived Fairness, Effectiveness, and Intrusiveness | Journal of Online Trust and Safety

➤ Democrats and women showed greater support for interventions than Republicans and men, though fairness was the strongest predictor across all groups, especially for Republicans and men.

Read the full article: tsjournal.org/index.php/jo...

3 weeks ago

➤ Interventions that preserve user agency and transparency, like content labeling and fact-checking ads, were more popular than content or account removal.

3 weeks ago

Key findings include:

➤ Support is shaped by perceived fairness, effectiveness, and intrusiveness, with fairness being the most important factor overall.

3 weeks ago

Out now in the Spring 2026 issue of #JOTS: a new study by @kingcatherine.bsky.social, Samantha Phillips, and Kathleen Carley surveys active American #SocialMedia users to understand what drives public support for #misinformation interventions.

#TrustAndSafety

3 weeks ago

Authored by Fatmaelzahraa Eltaher, Rahul Krishna Gajula, Luis Miralles-Pechuán, Patrick Crotty, Juan Martínez-Otero, Christina Thorpe, and Susan Mckeever

#JOTS #TrustAndSafety #SocialMedia #SocialMediaSafety

3 weeks ago
Protecting Young Users on Social Media: Evaluating the Effectiveness of Content Moderation and Legal Safeguards on Video-Sharing Platforms | Journal of Online Trust and Safety

➤ The most common harmful exposure was low-severity but recurring material, such as mature themes and hate-related content, which the authors warn may normalize harm through repetition.

Read the full article: tsjournal.org/index.php/jo...

3 weeks ago

➤ Minor accounts encountered their first harmful video after roughly 3 minutes of passive scrolling on YouTube Shorts and roughly 3 minutes 49 seconds on TikTok's "For You" feed, with no searching required.

3 weeks ago

Key findings include:

➤ Between 7.83 percent and 15 percent of videos recommended to accounts set as 13-year-olds were classified as harmful, which is roughly double the rate shown to 18-year-old accounts under the same conditions.

3 weeks ago