What happens when #AI shapes human disagreements?
Join us on Tuesday, April 21 for a seminar with @jonathanstray.bsky.social of the Center for Human-Compatible AI at @ucberkeleyofficial.bsky.social to explore research offering a glimpse into the turbulent future of AI-mediated conflict.
RSVP ⤵️
Posts by Jonathan Stray
I agree there's a real argument AI will lead to homogenization. It's also striking that, until very recently, everybody was wringing their hands about "fragmentation" and "filter bubbles." Without a theory on how much diversity vs. commonality we want in media, we will remain confused and reactive.
Writing about attested.network, an open spec for decentralized proof of payments on ATProtocol. It builds on what we learned making atprotofans.com. Draft is up and feedback is welcome.
That may be true! The question I'm addressing with this data is whether the level changed, on average. Could have gone up for you.
Stray's analysis reveals the core measurement problem: external researchers can only count hateful posts produced, while X claims to reduce hateful impressions through algorithmic suppression. Without internal data, we're arguing over different metrics entirely.
Excited to see my #AtmosphereConf talk up on a legacy platform:
Did "hate" go up on X/Twitter after they changed their content moderation rules post-Musk? What is "hate" anyway, and what data do we actually have? I've got all the charts and analysis I know you crave!
New by me for @techpolicypress.bsky.social
www.techpolicy.press/what-are-the...
As far as I can tell, the new X algorithm just completely ignores who you follow when selecting out-of-network posts in the For You feed. This might be why it feels, to some people at least, less personal or cozy.
www.greenearth.social/p/making-soc...
But let's be honest: there's a large cadre that basically cheers on chasing off any lib/centrist/academic who's the punchbag of the day. There's a culture of saying "fuck off back to X, then". And the anti-bedtime leftists set too much of the culture.
Millions of dollars per year, when you include salaries.
I guess I just thought it's because the field today is overwhelmingly staffed by lefties. There was well-developed fascist sociology in Mussolini's Italy. E.g. Corrado Gini of the Gini coefficient.
Can feed algorithms build community? We analyzed the old and new Twitter source code to figure out why it seems like a less friendly place than it used to be. We're designing GreenEarth to connect people instead.
www.greenearth.social/p/making-soc...
Game theory does not have a way to distinguish "conflict" from "competition." I think the difference is whether people are using destructive moves (e.g. murder) to win the game. Such moves are defections in the meta-game of peace and security.
It now seems likely that machines will soon become much smarter than humans. But will our superintelligent machines finally bring an end to war?
Here's where current AI fails, and what we can do about it.
www.betterconflictbulletin.org/p/conflict-s...
Folks have been reaching out about how to support Graze beyond our core product. We think that support needs to go beyond Graze.
That's why we've teamed up with our direct competitors @skyfeed.app and @blueskyfeedcreator.com to jointly support all our work:
Hey @graze.social @skylight.social @pfrazee.com wanted to make sure you'd seen this.
Hey #atmosphereconf we just proved that the right feed algorithms can reduce political polarization. And now we're building open-source recommender infrastructure that you can use for your ATProto app.
rankingchallenge.substack.com/p/its-possib...
What’s the zeitgeist experiment?
A peer-reviewed competition, 5 winning algorithms, 9,386 users, three platforms, 6 months, and 43 authors -- that's what it took to prove that social media doesn't have to work the way it does now. Here's the paper.
rankingchallenge.substack.com/p/its-possib...
/FIN
We often get asked "will platforms actually adopt this?" and "can I try it?" So now we're building open source, AI-powered recommender infrastructure right here for Bluesky (or any ATProto app) where users can already pick their own algorithms.
greenearthsocial.substack.com/p/introducin...
8/
Unlike one-shot depolarization interventions, algorithm changes are structural. At 6 months we likely captured equilibrium effects, meaning sustainable results. But we couldn't measure network effects or changes to creator incentives, both of which would plausibly amplify the results.
7/
The tradeoff: users reported slightly worse experiences (-0.04 SD) on the Neely Index. Maybe our research-grade algorithms were unpolished. Or maybe it's because seeing depolarizing content, like news, can be kind of a downer. This is an honest tension worth taking seriously.
6/
Active time and engagements fell on Facebook and Reddit, but *increased* on X/Twitter. In any case the changes were small. Reducing polarization doesn't necessarily mean reducing engagement. Not every platform is on the frontier of good for society and good for business!
5/
The two strongest algorithms: Uprank Bridging + Downrank Toxic, and Add News. Notably, upranking constructive content alone wasn't enough; we also needed to reduce the prominence of outrage and toxicity. But merely adding diverse news, without any re-ranking, also worked!
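A minimal sketch of what a combined "Uprank Bridging + Downrank Toxic" scoring rule could look like. The weights and classifier inputs here are invented for illustration; this is not the competition's actual formula.

```python
def adjusted_score(base: float, bridging: float, toxicity: float,
                   w_bridge: float = 1.0, w_tox: float = 2.0) -> float:
    """Combine a platform's base ranking score with a boost for
    bridging/constructive content and a penalty for toxicity.
    `bridging` and `toxicity` are assumed classifier outputs in [0, 1]."""
    return base + w_bridge * bridging - w_tox * toxicity
```

With a larger toxicity weight than bridging weight, a highly toxic post gets demoted even if a classifier also rates it constructive, which matches the finding that upranking alone wasn't enough.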
4/
The result: affective polarization dropped by an average of 0.03 SD (p<0.05), a 1.5 point shift on the 100-point feeling thermometer. By comparison, US polarization has risen ~0.6 points/year, so this is like reversing roughly 2.5 years of increase.
3/
We ran an international competition to design new social media algorithms, then used a browser extension to intercept users' feeds and re-ranked them in real time. Each algorithm could re-order, add, and/or delete posts and comments on Facebook, X/Twitter, and Reddit.
2/
Could social media make us less polarized instead of more?
We tested 5 algorithms on 3 platforms with 10,000 people for 6 months during the 2024 election, and found that the answer is yes.
🧵
Have you tried vibe coding this? I'm serious.
On ArXiv soon!