
Posts by Alice Hunsberger


Hate to break it to you, tech bros, but punks claimed grindcore eons ago.

2 months ago
What you can expect from EiM in 2026: 2025 was not for the faint of heart, but we got through it, mostly unscathed. Here, Alice and Ben reflect on what they learnt and what they’re hoping to achieve — for themselves and the T&S community...

What’s next for EiM in 2026?

More reporting on T&S roles, AI and the shifting online safety landscape.

We’re also creating more chances for the community to connect. 'Cos we need it.

Read @benwhitelaw.bsky.social & @aagh.bsky.social's plan:
www.everythinginmoderation.co/everything-i...

3 months ago
A woman in fishing gear with cardboard fish that say: bank account number, password, credit card number.

Happy Halloween! I do an online safety costume every year. This year, I’ve gone phishing.

5 months ago

Frontline teams will tell you that hate from customers is nothing new, but I really do think it will ramp up over the next few years, and I want to make sure our frontline teams don't suffer for it.

/🧵

1 year ago

Create resources and support for your users/customers who may also be the target of harassment and hate. Signal to them in signs, messages, FAQs, knowledge bases, etc. that your team will support them.

1 year ago

Let your team use fake names when responding to the public, or (even better) don't use names at all.

Recognize that some open ways of signaling support (e.g. putting pronouns in your signature) will also open people up to harassment.

Allow folks to make decisions based on what's right for them.

1 year ago

Create psychological safety for your team.
Let them know that you have their back.
Listen to them.
Check in on them.

Make sure they have benefits that cover mental health support.
We're all going to need it.

1 year ago

It can be tempting to ask employees who are part of a marginalized community to help you create inclusive policies.

@anikacolliernavaroli.com calls this "compelled identity labor": hire people whose explicit job is to be an expert, instead of voluntelling your employees who have other jobs to do.

1 year ago

Create a "no questions asked" escalation policy, so that frontline staff can escalate to a manager if they feel unsafe or unable to answer a question.

Make sure that escalation chain goes all the way up to VP or C-Suite level so everyone is supported.

1 year ago

Write policies about expected user/customer behavior, make them public, and hold people to them.

"We will ban you if you disrespect or threaten our staff", for example.

Or "We will ban you if you report trans people simply for being trans."

1 year ago

Get really clear with senior leaders of the company you work for about corporate values and how to uphold them.

Create tricky hypothetical scenarios (e.g. your biggest client sends a racist email; someone threatens to sue you for having a DEI program) and get answers BEFORE you need them.

1 year ago

The 47th president proudly spoke against trans people and anti-racism in his inauguration speech: people will now feel more empowered to spew hate.

If you manage support, marketing, trust & safety, etc., create a playbook NOW to support & empower your team to respond to hateful customers.

🧵

1 year ago

Huge thanks to the Integrity Institute and TSPA/TrustCon for enabling these kinds of discussions among t&s folks.

If you know of other conversations/ resources in this area, or are an expert and want to be on the podcast or chat TrustCon proposals, let me know!

1 year ago

3️⃣ @anikacolliernavaroli.com writes about the harms to moderators from marginalized communities who are asked to work on content that attacks them.

www.cjr.org/tow_center/b...

1 year ago

2️⃣ @jenniolsonsf.bsky.social from GLAAD talks about advocating for the LGBTQ+ community with Meta; the challenges of balancing free speech w/ protecting marginalized communities; & suggestions for folks working at social media platforms to advocate for change.

integrityinstitute.org/podcast/its-...

1 year ago

1️⃣ Nadah Feteih discusses how tech workers (in integrity and t&s teams) can speak up about ethical issues at their workplace; activism from within the industry; compelled identity labor, balancing speaking up and staying silent, and more.

integrityinstitute.org/podcast/work...

1 year ago

Many of us working at tech companies are having to make moral and ethical decisions when it comes to where we work, what we work on, and what we speak up about. It's super difficult to know what to do, or even what your options are!

🧵 with resources

1 year ago
Alice dressed as the “this is fine” dog meme. She has on dog ears and is holding a coffee mug while the office behind her is burning.

Alice grimacing while holding a “this is fine” meme toy. It’s a dog sitting on a dumpster on fire.

Every t&s professional I know this week.

❤️ to those who are doing their best in wild times.

1 year ago

THIS IS WHAT STOOD OUT TO ME. As someone who had to deal with user-report-only systems for years… they do not work.

1 year ago

It's fascinating because right now content moderation and general vibes are a main differentiator between Threads and X. When Threads feels more like X, they'll be closer competitors than ever before.

Looking forward to more people here on Bluesky :)

1 year ago

Actually 1 more thing:

This allows Meta to dodge responsibility. “The users don’t like it. They reported it. It’s not us.”

It won’t make moderation more fair or better. It’ll be less consistent.

But it gives Meta an excuse that is more politically accepted right now.

1 year ago

This, combined with the rollback of hate policies, is REALLY going to change the vibes of Meta-run platforms.

/🧵

1 year ago

Honestly, I feel it’s often better to just not have the rule at all if you can’t proactively detect and remove violations.

Automated detection isn’t perfect by any means, but it’s a heck of a lot better than user reports alone.

1 year ago

Other users will have their content removed after being reported, but feel it’s unfair because so many other people got away with it.

1 year ago

Relying on user reports alone means that the platform will have very spotty enforcement of some rules.

Many users will get away with rule-violating behavior because it is never reported.

1 year ago

Policies are only as good as ENFORCEMENT — and consistent enforcement at that.

I have learned this the hard way, when I was head of t&s at a platform with little to no automated detection.

1 year ago

— if other people are being hateful and harassing others, then users will want to fight back, pile on, or get involved.

… or they will want to leave.

1 year ago

If they see lots of folks doing something bad and it’s not immediately removed, they assume it’s ok and they won’t report.

Even worse, they will often start exhibiting the same behavior themselves.

1 year ago

Most users don’t read policies.

They’re not experts on which kinds of hate speech are ok and not ok (especially confusing on Meta’s platforms now, after recent policy changes).

Mostly, users go along with the vibe of a place.

1 year ago

One thing I haven’t seen anyone talk about with Meta’s moderation changes:

they’re now relying on manual user reports for “less severe” issues that still violate their policies, rather than a combination of user reports and automated detection.

It’s bad. Here’s why:

🧵

1 year ago