
Posts by Étienne Brown

PhilMod Talk - Neri Marsili: Posting and Reposting - Investigating Reputation, Trust, and Deniability in Online Communication

The first PhilMod talk for 2026 will focus on the epistemology of posting and reposting!

Our guest will be Neri Marsili, Associate Professor in philosophy at the University of Turin.

Link to register: www.eventbrite.co.uk/e/philmod-ta...

1 month ago 0 0 0 0

I know you think I can see this post, but I can't.

1 year ago 1 0 0 0

Or shadow banning

1 year ago 1 0 1 0

Also possible the changes haven’t kicked in yet?

1 year ago 0 0 0 0

I thought the fact-checkers worked for a third-party org and were not internal moderators. Did you find anything on Meta ending its partnerships with third-party orgs?

1 year ago 1 0 1 0
Étienne Brown: "Recommended Selves: Authenticity and Algorithmic Filtering" YouTube video by UC Merced Philosophy

Yesterday UCM Philosophy hosted @etiennebrown.bsky.social (San José State) for his talk "Recommended Selves: Authenticity and Algorithmic Filtering"
#philsky #philsci

www.youtube.com/watch?v=fDYz...

1 year ago 12 5 0 0

I don’t know the culture at OpenAI, but I see your point. Doesn’t that mean, though, that we need to be more careful about who we allow to develop powerful models? Aren’t we in real trouble if OpenAI doesn’t care that much?

1 year ago 0 0 0 0

Thanks! Two questions: (1) Does open source not entail the risk that people who don’t seriously care about safety will develop models too fast? (e.g. DeepSeek). (2) Can you not fight the concentration of private power through gov. regulation (public AI enterprises, antitrust, etc.)?

1 year ago 1 0 3 0

This is how I’ve always understood Aristotelians’ point about practical wisdom.

1 year ago 1 0 0 0
Post image

Time for an emergency session of Intro to Early Modern Philosophy?

1 year ago 15 2 0 0

"We don’t want the teenage kids to encounter bullying content only then to have to report it; we want them to be spared the burdens of encountering it in the first place."

1 year ago 0 0 0 0

Contrary to most takes I've read, Jeff argues that increasing the confidence an AI classifier requires to flag speech as hate speech is a defensible option, but ceasing to use classifiers to detect hate speech (as Meta is doing) is not.

1 year ago 0 0 1 0
Étienne Brown on LinkedIn: Content Moderation Makeover: Meta’s Changes Are a Mixed Bag

Nuanced, cool-headed take on recent content moderation changes at Meta by the inimitable Jeff Howard.

1 year ago 0 0 1 0

I think that’s a flex. Dadcred.

1 year ago 0 0 0 0

I don't mind being rejected by a journal. What I resent is being rejected for good reasons.

1 year ago 110 11 2 0

And just to try to keep Bluesky a positive space: I think that debates about decentralized moderation are fascinating, and I'm grateful to advocates of decentralization for having proposed a way to diminish the power of Musk, Zuckerberg, etc. I see this as a public service.

1 year ago 1 0 0 0

10/10 Let us say I try to make it so that you can't view hate speech on your FreeSpeechSky app; it's not obvious at all that I'm trying to regulate the public sphere. You might argue that I'm trying to unduly regulate something akin to your living room.

1 year ago 0 0 1 0

9/ I’ll end here, but I recognize that things are more complicated. One further idea to consider is that decentralization – and social media generally – blurs the distinction between public and private speech.

1 year ago 0 0 1 0

8/ And the Court ruled that Canadians would not want other Canadians to have access to violent pornography, partly on the grounds that it harmed women as a group (that's another debate, of course).

1 year ago 0 0 1 0

7/ One example that comes to mind is R. v. Butler (1992), the Canadian Supreme Court case that led to a ban on certain kinds of violent pornography. The Court explicitly considered the question, “What do Canadians want other Canadians to be able to see in the public sphere?”

1 year ago 0 0 1 0

6/ The question – “What speech do you believe other people should see?” – has always been important in the legal regulation of speech.

1 year ago 0 0 1 0

5/ In other words, my worry about hate speech is not that I will see it; it’s that the general circulation of hate speech in the public sphere has bad consequences. It’s demeaning. It’s psychologically harmful. It creates a social environment in which physical violence is more likely. Etc.

1 year ago 0 0 1 0

4/ Now, people who believe hate speech should be regulated do not primarily believe that because they personally don’t want to see it. They also believe other people should not see it. E.g. I don’t want Muslims to be constantly bombarded with slurs.

1 year ago 0 0 1 0

2/ Decentralized moderation, labels, and feeds allow users to customize the speech they will be exposed to online. For instance, I can choose not to see hate speech, but you can choose to see it.

1 year ago 0 0 0 0

Thanks again to @beaudoin.social for this essential thread. For me, the worry with decentralization is not that it will lead to echo chambers but that it will make it harder to curb the circulation of dangerous speech.

1 year ago 2 0 1 0

There is so much opacity about moderation and algo. recommendation that it’s hard to overstate how important this is.

1 year ago 2 0 0 0

There is an interesting tension in the philosophy of moderation between decentralization (i.e. customize your own personal public sphere) and democratization (same speech rules for everyone, but made democratically). This thread about decentralization is informative!

1 year ago 3 0 1 0

"Bluesky moderation lists create echo chambers."

A short thread about decentralized moderation on Bluesky and why it changes everything.* 🧵

*Based on nearly a thousand hours spent exploring the platform's code.

1 year ago 15 8 1 3