
Posts by Vinay Koshy

And thanks to my coauthors Fred, Yi-Shyuan, Hari, @ceshwar.bsky.social, and @kkarahal.bsky.social for helping to make this paper happen

6 months ago

I think the broader framework for human-AI collaboration where we use predictive models to surface misalignments between decision-makers is a super promising one. Come talk to me about it!

6 months ago

One unexpected use case for the system that came up in qualitative interviews: onboarding new moderators.

6 months ago

Using simulations on static datasets, we show that under ideal conditions, Venire can approximate the decision consistency benefits of universal panel review while assigning only ~30% of cases to panel. Under less ideal conditions, this number increases to ~60%.

6 months ago

The paper itself is a kind of three-part journey: initial interviews with mods, where we surface attitudes toward decision consistency as a policy outcome; quantitative evaluations of Venire's ML model; and a think-aloud study where moderators used Venire in a sandbox modqueue.

6 months ago

Moderation teams are usually resource-constrained, so case-level rule enforcement is often handled by individuals. Venire's core goal is effective triage: identifying the subset of cases where panel review is most urgently needed.

6 months ago
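To make the triage idea concrete, here is a minimal sketch (not the authors' code): assume per-moderator models that predict each mod's decision on a case, and route a case to panel review when those predictions conflict. All names and thresholds below are hypothetical.

```python
# Hypothetical sketch of disagreement-based triage, not Venire's implementation.
# Assumption: one predictive model per moderator, trained on that mod's
# decision history, each mapping a case to "remove" or "approve".

def predicted_decisions(case, mod_models):
    """Return each moderator model's predicted decision for a case."""
    return [model(case) for model in mod_models]

def needs_panel(case, mod_models):
    """Flag a case for multi-mod review when predicted decisions conflict."""
    preds = predicted_decisions(case, mod_models)
    return len(set(preds)) > 1  # any disagreement -> controversial case

# Toy stand-ins for trained per-moderator models (illustrative thresholds).
mods = [
    lambda c: "remove" if c["reports"] >= 2 else "approve",  # stricter mod
    lambda c: "remove" if c["reports"] >= 5 else "approve",  # lenient mod
]

cases = [
    {"id": 1, "reports": 0},  # both models approve -> consistent, no panel
    {"id": 2, "reports": 3},  # models disagree -> route to panel
]

flagged = [c["id"] for c in cases if needs_panel(c, mods)]
print(flagged)  # [2]
```

Under this scheme, only the contested subset of the queue is escalated, which is how a triage step could keep panel workload at a fraction of universal review.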

Just arrived in Bergen for CSCW, where I'll present Venire! Venire is a Reddit moderation tool that uses an ML model trained on mod decision histories to identify controversial cases. It preempts inconsistent decision-making by flagging these cases for multi-mod review.

dl.acm.org/doi/10.1145/...

6 months ago