The first PhilMod talk for 2026 will focus on the epistemology of posting and reposting!
Our guest will be Neri Marsili, Associate Professor of Philosophy at the University of Turin.
Link to register: www.eventbrite.co.uk/e/philmod-ta...
Posts by Étienne Brown
I know you think I can see this post, but I can't.
Or shadow banning
Also possible the changes haven’t kicked in yet?
I thought the fact-checkers worked for a third-party org and were not internal moderators. Have you found anything on Meta ending partnerships with third-party orgs?
Yesterday UCM Philosophy hosted @etiennebrown.bsky.social (San José State) for his talk "Recommended Selves: Authenticity and Algorithmic Filtering"
#philsky #philsci
www.youtube.com/watch?v=fDYz...
I don’t know the culture at OpenAI, but I see your point. But doesn’t that mean we need to be more careful about who we allow to develop powerful models? Are we not in real trouble if OpenAI doesn’t care that much?
Thanks! Two questions: (1) Does open source not entail the risk that people who don’t seriously care about safety will develop models too fast? (e.g. DeepSeek). (2) Can you not fight the concentration of private power through gov. regulation (public AI enterprises, antitrust, etc.)?
This is how I’ve always understood Aristotelians’ point about practical wisdom.
Time for an emergency session of Intro to Early Modern Philosophy?
"We don’t want the teenage kids to encounter bullying content only then to have to report it; we want them to be spared the burdens of encountering it in the first place."
Contrary to most takes I've read, Jeff argues that raising the confidence threshold an AI classifier must reach before flagging speech as hate speech is a defensible option, but ceasing to use classifiers to detect hate speech (as Meta is doing) is not.
Nuanced, cool-headed take on recent content moderation changes at Meta by the inimitable Jeff Howard.
I think that’s a flex. Dadcred.
I don't mind being rejected by a journal. What I resent is being rejected for good reasons.
And just to try to keep Bluesky a positive space: I think that debates about decentralized moderation are fascinating, and I'm grateful to advocates of decentralization for having proposed a way to diminish the power of Musk, Zuckerberg, etc. I see this as a public service.
10/ Let us say I try to make it so that you can't view hate speech on your FreeSpeechSky app; it's not obvious at all that I'm trying to regulate the public sphere. You might argue that I'm trying to unduly regulate something akin to your living room.
9/ I’ll end here, but I recognize that things are more complicated. One further idea to consider is that decentralization – and social media generally – blurs the distinction between public and private speech.
8/ And the Court ruled that Canadians would not want other Canadians to have access to violent pornography, partly on the grounds that it harmed women as a group (that's another debate, of course).
7/ One example that comes to mind is R. v. Butler (1992), the Canadian Supreme Court case that led to a ban on certain kinds of violent pornography. The Court explicitly considered the question, “What do Canadians want other Canadians to be able to see in the public sphere?”
6/ The question – “What speech do you believe other people should see?” – has always been important in the legal regulation of speech.
5/ In other words, my worry about hate speech is not that I will see it; it’s that the general circulation of hate speech in the public sphere has bad consequences. It’s demeaning. It’s psychologically harmful. It creates a social environment in which physical violence is more likely. Etc.
4/ Now, people who believe hate speech should be regulated do not primarily believe that because they personally don’t want to see it. They also believe other people should not see it. E.g. I don’t want Muslims to be constantly bombarded with slurs.
2/ Decentralized moderation, labels, and feeds allow users to customize the speech they will be exposed to online. For instance, I can choose not to see hate speech, but you can choose to see it.
Thanks again to @beaudoin.social for this essential thread. For me, the worry with decentralization is not that it will lead to echo chambers but that it will make it harder to curb the circulation of dangerous speech.
There is so much opacity about moderation and algorithmic recommendation that it’s hard to overstate how important this is.
There is an interesting tension in the philosophy of moderation between decentralization (i.e. customize your own personal public sphere) and democratization (same speech rules for everyone, but made democratically). This thread about decentralization is informative!
"Bluesky moderation lists create echo chambers."
A short thread about decentralized moderation on Bluesky and why it changes everything.* 🧵
*Based on nearly a thousand hours spent exploring the platform's code.