The cable, sent to all U.S. missions on December 2, orders U.S. consular officers to review resumes or LinkedIn profiles of H-1B applicants - and family members who would be traveling with them - to see if they have worked in areas that include activities such as misinformation, disinformation, content moderation, fact-checking, compliance and online safety, among others.
Holy shit.
Reuters reporting that new admin instructions on visas say that if you worked in trust & safety, content moderation, fact checking, or online safety at a platform, you *and your loved ones* are ineligible for an H-1B visa.
www.reuters.com/world/us/tru...
4 months ago
New Directions in Social Algorithms Research on October 16-17, 2025 at Yale University
As social media algorithms increasingly mediate social experiences, there has been a rapid increase in research on the effects of how these algorithms are configured, alternatives to engagement-centri...
📣 Yale workshop, Oct 16-17! 📣 How could/should content ranking work? What's new in content moderation? How can platforms promote civility? Hosted by Yale's Institute for Foundations of Data Science (FDS). Great speakers! Submit posters by 9/22! Spread the word! yalefds.swoogo.com/socialalgori...
7 months ago
New in TiCS w @dgrand.bsky.social @gordpennycook.bsky.social
It’s been ~10yrs since misinfo research exploded but our paradigms are stuck in the post-2016 “fake news” model
Time for new approaches:
- True/False → Content that misleads
- Belief → Behavior
- Eval interventions in ambiguous settings
8 months ago
🚨In PNAS🚨
The right often accuses fact-checkers of political bias
But we analyzed Community Notes on Musk's X and found posts flagged as "misleading" are 2.3x more likely to be written by Reps than Dems!
The issue is Reps sharing misinformation, not fact-checker bias...
www.pnas.org/doi/10.1073/...
10 months ago
Huge thank you to collaborators @mmosleh.bsky.social @eckles.bsky.social @dgrand.bsky.social
Comments, feedback, & suggestions appreciated as always!
1 year ago
Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Fi...
Caveats:
-Engagement ≠ belief updating (tho it’s an important first step)
-Social corrections can have other negative effects (eg downstream lower quality reposting)
dl.acm.org/doi/abs/10.1...
-Hard to measure (presumably positive) third-party effects of social corrections on observers in field
1 year ago
Our results demonstrate social media’s ability to foster engagement w corrections via minimal social relationships
Ppl are more likely to engage w those who have followed & engaged w them first
1 year ago
A second survey exp found that minimal social connections foster a general norm of responding, such that ppl feel more obligated to respond - and think others expect them to respond more - to ppl who follow them, even outside the context of misinfo correction
1 year ago
Exploratory analyses also show that in both survey & field exps, extreme partisanship moderates the effects of social connection on engagement - social connection increases engagement for co-partisans, but decreases engagement for politically extreme counter-partisans
1 year ago
We next conducted a follow-up survey on MTurk to replicate effects in a more controlled setting (eg eliminate blocking of counter-partisan bots) & obtained similar results
1 year ago
To account for this we (i) compare unaffected conditions (all but social counter-partisan) & (ii) perform principal stratification (weighting obs in unaffected conditions by p(successful treatment delivery) had they been in the social counter-partisan condition)
1 year ago
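A minimal sketch of the weighting step described above, with made-up numbers (the condition names, delivery probabilities, and engagement values are hypothetical, not the paper's data): each observation in the unaffected conditions is weighted by its estimated probability of successful treatment delivery, so the comparison is made on the stratum of users who would have received the correction either way.

```python
# Hypothetical data: each user carries an estimated probability that
# the correction would have been delivered (i.e., that they would NOT
# have blocked the bot) had they been in the social counter-partisan
# condition. In practice this would come from a fitted model.
users = [
    {"condition": "baseline", "engaged": 1, "p_delivery": 0.9},
    {"condition": "baseline", "engaged": 0, "p_delivery": 0.4},
    {"condition": "social_copartisan", "engaged": 1, "p_delivery": 0.8},
    {"condition": "social_copartisan", "engaged": 0, "p_delivery": 0.6},
]

def weighted_engagement(users, condition):
    """Engagement rate in one condition, weighting each observation by
    its estimated probability of successful treatment delivery."""
    rows = [u for u in users if u["condition"] == condition]
    total_w = sum(u["p_delivery"] for u in rows)
    return sum(u["engaged"] * u["p_delivery"] for u in rows) / total_w

rate = weighted_engagement(users, "baseline")  # 0.9 / (0.9 + 0.4)
```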
Blocking of counter-partisan accounts drives political assortment on Twitter
Abstract. There is strong political assortment of Americans on social media networks. This is typically attributed to preferential tie formation (i.e. homo
Users were also more likely to block our bots in the social counter-partisan condition (consistent w our @pnasnexus.org paper on greater blocking of counter-partisans). But this resulted in differential treatment delivery - we could not send corrections to users who blocked our bots shorturl.at/eG5bs
1 year ago
PNAS
Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...
Maybe users simply did not notice or believe the partisanship manip? Prob not: we looked at the follow-back rates in the social condition, & partisanship had a strong effect (consistent w our @pnas.org paper on greater follow-back of copartisans)
shorturl.at/B58Xh
1 year ago
We sent corrections to 1,586 users & measured p(engage w correction):
(i) Among users in the co-partisan condition, social connection had a sig positive effect on engagement
(ii) Among users in the baseline (non-social) condition, no evidence of effect of shared partisanship on engagement
1 year ago
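For illustration, a toy version of those condition-level comparisons, with hypothetical counts (none of these numbers are from the paper): engagement rates per cell, then the two contrasts described above.

```python
# Hypothetical engagement counts among corrected users, broken out by
# the 2 (bot partisanship) x 2 (social connection) conditions.
engaged = {
    ("co", "social"): 60, ("co", "baseline"): 35,
    ("counter", "social"): 38, ("counter", "baseline"): 36,
}
n_per_cell = 200  # assumed equal cell sizes for simplicity

def rate(partisanship, social):
    """Share of users in one cell who engaged with the correction."""
    return engaged[(partisanship, social)] / n_per_cell

# (i) Effect of social connection among co-partisans:
social_effect_co = rate("co", "social") - rate("co", "baseline")
# (ii) Effect of shared partisanship in the non-social baseline:
partisan_effect_baseline = rate("co", "baseline") - rate("counter", "baseline")
```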
Each user was then socially corrected by their randomly assigned bot. Social corrections were done via public reply to the tweet containing the debunked URL and included a link to the fact-check on @snopes.com
1 year ago
We created human-looking bots & corrected users who shared debunked URLs
We randomized whether our bots
(i) were co-partisan or counter-partisan for the to-be-corrected user
(ii) followed the user & liked some of their tweets before correcting them (creating a minimal social connection)
1 year ago
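The 2 (bot partisanship) x 2 (minimal social connection) randomization could be sketched like this; the helper name and seed are illustrative, not the study's actual code.

```python
import random

# The four cells of the 2x2 design: bot partisanship relative to the
# to-be-corrected user, crossed with whether the bot first followed
# the user and liked some of their tweets.
CONDITIONS = [
    (partisan, social)
    for partisan in ("co-partisan", "counter-partisan")
    for social in ("social", "non-social")
]

def assign_conditions(user_ids, seed=42):
    """Independently randomize each user into one cell of the design."""
    rng = random.Random(seed)
    return {uid: rng.choice(CONDITIONS) for uid in user_ids}

assignments = assign_conditions(range(8))
```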
Social corrections, where users correct one another on social media, have been found to be effective in survey settings shorturl.at/SYomc
But in the field, social corrections are often ignored shorturl.at/jcxPd
We ask what *causes* greater engagement on Twitter (X)
1 year ago
🚨New in @plosone.org🚨
Corrections of misinfo are often ignored. What can drive engagement?
Twitter field exp & survey followups find
-Social ties matter: users more likely to engage w corrections from accounts who followed user
-Shared partisanship had smaller effects on engagement
shorturl.at/0Ycdp
1 year ago
Fact-checker Warnings Are Surprisingly Effective Even For Skeptics | SPSP
Even when people distrust fact-checkers, they’re still influenced by warning labels on false news.
Nice blog post by @cameronmartel.bsky.social on our NHB paper showing fact-checker warnings work even for people who distrust fact-checkers. Particularly relevant re Meta's rollback of fact-checking based (among other things) on the claim that fact-checkers lost the public's trust spsp.org/news/charact...
1 year ago
New WP!
The illusory truth effect (repetition -> belief) is core to psych of beliefs, & thought to be a deep bias impacting misinfo, persuasion & advertising
Why would cognition include such a flaw? We argue it is a rational adaptation to high-quality info environments 🧵1/
1 year ago
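A toy Bayesian version of that argument: treat each assertion of a claim as an independent signal, where true claims get asserted more often than false ones (a high-quality information environment). Then repetition rationally increases belief. The likelihood values below are made up purely for illustration.

```python
def posterior_true(prior, p_assert_true, p_assert_false, n_repetitions):
    """Posterior probability a claim is true after hearing it asserted
    n times, with each assertion an independent signal."""
    odds = prior / (1 - prior)
    # Each repetition multiplies the odds by the likelihood ratio.
    post_odds = odds * (p_assert_true / p_assert_false) ** n_repetitions
    return post_odds / (1 + post_odds)

# With a flat prior, one exposure nudges belief up; three exposures
# push it further - repetition acts as accumulating evidence.
p1 = posterior_true(0.5, 0.6, 0.4, 1)  # 0.6
p3 = posterior_true(0.5, 0.6, 0.4, 3)
```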
🚨New WP🚨
Remember Musk+Zuck+Trump+Jordan etc crying fact-checker bias b/c Reps were flagged more than Dems? We analyzed Community Notes on Musk's X and guess what: posts flagged as "misleading" are 67% more likely to be written by Reps! The issue is Reps, not fact-checkers...
osf.io/preprints/ps...
1 year ago
Three new articles discuss Meta’s decision to drop fact-checkers and shift to a "Community Notes" model, sparking concerns about misinformation.
What's at stake? Here are three smart takes from experts.
A short thread: 🧵
1 year ago
APA PsycNet
HUGE thank yous to project co-lead @mmosleh.bsky.social (at @oiioxford.bsky.social) & @dgrand.bsky.social
Thoughts, comments, & feedback welcome and appreciated as always!!
Paper here: dx.doi.org/10.1037/xge0...
Preprint here: osf.io/preprints/ps...
1 year ago
Overall our results demonstrate the complex underpinnings of online partisan assortment
Partisans pref connect w like-minded others not only bc of recommendation algos - but bc of distinct info & social prefs
Party assortment is an enduring & important feature of social networks
1 year ago
We also found:
-Information & making friends were most mentioned as follow-back reasons
-Curiosity also oft mentioned, esp for counter-partisan follow-back
-Not wanting info (esp from counter-partisans) & id’ing account as a stranger were most mentioned rzns for ignoring accounts
1 year ago
We found:
-50% of ppl in co-partisan condition who followed-back account mentioned same partisanship as motivation
-58% of ppl in counter-partisan condition who ignored account mentioned diff partisanship as motivation
1 year ago
Ppl also wrote free-responses as to why they made their decision to follow-back or ignore accounts
We conducted exploratory text analyses using GPT4 on these explanations, filtering for answers longer than a few words & considered overall ‘coherent’ (n=515)
1 year ago
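A minimal sketch of the length filter described above (the word threshold and helper name are assumptions; the 'coherent'-response screen via GPT-4 is a separate step not shown here).

```python
def keep_for_coding(responses, min_words=4):
    """Keep only free-text explanations long enough to code - a
    stand-in for the 'longer than a few words' filter."""
    return [r for r in responses if len(r.split()) >= min_words]

# Hypothetical free-responses about following back or ignoring:
responses = ["ok", "same party as me", "I liked their posts and wanted news"]
kept = keep_for_coding(responses)
```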