On this episode of the Tech Policy Press podcast, contributing editor Dean Jackson discusses the evolution of trust and safety with legal scholars Danielle Keats Citron and Ari Ezra Waldman and Jeff Allen, chief research officer at the Integrity Institute. Listen:
Posts by Jeff Allen
A podcast in which @daniellecitron.bsky.social, @ariezra.bsky.social, @jeffallen.bsky.social and I discuss the state of online trust and safety. For @techpolicypress.bsky.social
www.techpolicy.press/considering-...
💬 Can AI break the engagement trap?
In August, our co-founder @jeffallen.bsky.social reflected on the evolution of Trust & Safety, and why regulatory pressure—and transparency—still matter most.
▶️ Read the full conversation on the @socialcohesiontech.bsky.social Substack: bit.ly/4neJfi5
Which also isn't great for the platforms. Hank Green quote: "If platforms don't prioritize high quality content, they dig their own graves. And Facebook has a lot of shovels."
Long term, it's not in a platform's interest to push mildly engaging slop. Keeping an eye on FB's Widely Viewed Content Report (WVCR) in the meantime...
Great article by @issielapowsky.bsky.social about the impact AI content is already having. Even if the "dead internet theory" isn't 100% true, if platforms don't properly disincentivize AI content, it can still hurt online publishing and content creators.
www.bloomberg.com/news/article...
🚨 NEW REPORT – Better Feeds: Algorithms That Put People First 🚨
As policymakers around the globe grapple with addressing concerns about algorithms, KGI’s new report offers guidance on improving the design of algorithmic systems that shape billions of users’ digital lives.
bit.ly/3QzxVzq
Excited to share 3 new resources designed to advance transparency & improve risk assessment practices in the platform ecosystem. These reports analyze current transparency efforts & provide guidance on how to obtain meaningful transparency from social platforms.
Read here: integrityinstitute.o...
New from me: Meta decided to stop working with U.S. fact-checkers at the same time as it’s revamping a program to pay bonuses to creators with high engagement numbers, potentially pouring accelerant on the kind of false posts the company once policed: www.propublica.org/article/face...
Haha!
@integrity-inst.bsky.social
But we haven’t really turned on the microphone yet here 🙈
👀🔎 Last week Big Tech companies published their first reports of how they assess the systemic risks of their platforms to users under the EU’s Digital Services Act. Here’s what we’ve found so far: 🧵
We're assembling a starter pack of Tech Policy Press contributors on Bluesky! Follow the folks in our community for perspectives on a range of issues at the intersection of technology and democracy!
Platform design choices impact the privacy, safety, and security of minors online. Independent research shows that content-agnostic design changes can support prosocial interactions and user well-being.
Learn more in KGI’s recent submission to the European Commission: shorturl.at/B00Jz
With @mattmotyl.bsky.social , Jenn Louie, Spencer Gurley, and Sofia Bonilla.
www.linkedin.com/in/jennslouie
www.linkedin.com/in/spencergu...
www.linkedin.com/in/sofia-g-b...
New piece in @techpolicypress.bsky.social from me and others at the Integrity Institute!
Want safer social media? Then we need more transparency from the platforms about the scale, cause, and nature of harms. Transparency is a key way to change the incentives of the companies.
www.techpolicy.press/making-socia...
Check out the Safe by Default report from Panoptykon, which discusses concrete changes platforms can make to move from engagement based design and ranking towards more human centric recommender systems.
panoptykon.org/sites/defaul...
Really excited to see the Non-Engagement Con paper come out! This represents broad consensus views about ranking and recommendations from a lot of experts on all kinds of ranking and recommendation systems.
arxiv.org/abs/2402.06831
Searching for sensitive topics on Instagram often surfaces harmful content, but doing the same search on Google doesn't. Why? One ranks search results based on engagement, and one ranks them based on quality and relevance. Interesting analysis here from the Integrity Institute's Jeff Allen 👇
Why is Instagram search so much more harmful than Google search?
New article from me on Instagram's decision to completely block searches for sensitive topics, on the Integrity Institute blog.
integrityinstitute.org/blog/why-is-...
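The engagement-versus-quality distinction in the posts above can be illustrated with a toy ranker. This is a hypothetical sketch, not any platform's actual algorithm: the items, scores, and weights are all invented, but they show how the same candidate set can surface very different results depending on the objective being optimized.

```python
# Toy sketch (hypothetical data and weights): contrast engagement-based
# ranking with quality/relevance-based ranking of the same candidates.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float  # e.g., clicks/likes per impression (proxy signal)
    quality: float     # assessed content quality, 0..1
    relevance: float   # match to the user's query, 0..1

items = [
    Item("sensationalist post", engagement=0.9, quality=0.2, relevance=0.5),
    Item("expert explainer",    engagement=0.3, quality=0.9, relevance=0.9),
    Item("mild meme",           engagement=0.6, quality=0.4, relevance=0.2),
]

def rank_by_engagement(items):
    # Objective: maximize predicted interaction, regardless of content value.
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def rank_by_quality_relevance(items, w_quality=0.5, w_relevance=0.5):
    # Objective: weighted blend of assessed quality and query relevance.
    return sorted(
        items,
        key=lambda i: w_quality * i.quality + w_relevance * i.relevance,
        reverse=True,
    )

print([i.title for i in rank_by_engagement(items)])
# engagement ranking puts the sensationalist post first
print([i.title for i in rank_by_quality_relevance(items)])
# quality/relevance ranking puts the expert explainer first
```

The same candidate pool, two orderings: the engagement objective rewards whatever gets clicked, while the quality/relevance objective rewards what actually answers the query, which is the difference the post draws between the two search experiences.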
Big thanks to @katieharbath.bsky.social and Glenn Ellingson for being leaders in that work, and to the dozens of members who contributed!
Check out all our elections program work here:
integrityinstitute.org/elections-in...
The EC cited Integrity Institute work multiple times in their guidance to platforms on protecting elections!
Exciting moment! That is the result of a year of work by our community putting together our recommended best practices on election integrity.
digital-strategy.ec.europa.eu/en/news/comm...
The Integrity Institute has been very active on all things child safety, leading into the Senate hearing yesterday. Check out the recap!
integrityinstitute.org/blog/child-s...
And the preread ICYMI:
integrityinstitute.org/blog/child-s...
Read our full summary of the investigation here:
integrityinstitute.org/news/institu...
And the Wired article here:
www.wired.com/story/telegr...
It remains available in the app stores, and it wasn't in the list of platforms that will be regulated as VLOPs under the DSA. Telegram apparently reported that their monthly user count was below the threshold, but that just doesn't seem plausible and the EC is thankfully looking into it.
This poses a real challenge to societies: What do we do when a platform that is perfectly happy distributing violent content hits a billion users?
Telegram is my go-to example of why government regulation of social media platforms is necessary. But so far, it's been able to evade accountability.
And of course, the "restricting" of the channels is super leaky. There are plenty of examples of messages from restricted channels being shared in non-restricted channels. As with most solutions that attack social media problems at a superficial level, it just doesn't work!
Which is very strange! I will never understand thinking like "Oh, this channel has racist terms right in the channel name and lots of hateful messages about various ethnic groups. We definitely want to hide this from Google and Apple, but let's keep it on the platform."
Telegram hides lots of hate- and violence-filled channels in the versions of the app distributed through the Apple App Store and Google Play, but still keeps them on the platform; they remain visible in a special "unrestricted" version of Telegram it puts out, as well as in the web browser version.
What do we do about Telegram?
New research investigation from the Integrity Institute and Wired out today:
www.wired.com/story/telegr...
and our summary here
integrityinstitute.org/news/institu...