
Posts by Deb Raji

10th Workshop on Technology and Consumer Protection (ConPro '26): The Tenth Workshop will explore computer science topics with an impact on consumers.

📣 Registration is open for #ConPro26, the 10th Workshop on Technology and Consumer Protection, on May 21 in SF! We're excited about this year's program, which includes [thread, 1/15]:
conpro26.ieee-security.org

5 days ago 1 4 1 1

Tbf there were many critical perspectives included at that event and the actual discussion was handled very well! It was just an interesting moment of recognition for me about exactly how much product claims shape the discourse in the public, in research ...and in policy making in general!

2 weeks ago 6 0 0 0

Also common in policy circles. I once attended a legislative caucus event on the topic of "Artificial General Intelligence" (AGI). I was shocked lol - when I asked them where they'd learnt that word, the response was "we were talking to Anthropic, and that's what they told us they were building".

3 weeks ago 37 8 3 0

This is a challenging legal problem for NeurIPS (and other conference participants)! You might be wondering: how is this possible given the First Amendment?

I wrote a quick explainer on the current status quo of relevant First Amendment cases & law to get you up to speed.

🔗👇

3 weeks ago 8 5 1 3

I've come to seriously value any opportunity I get to create a safe space for those coming up behind me.

It's incredibly important - and I know this because I'm here, able to do this work, precisely because of those that chose to endure who knows what in order to create the right space for me ❣️

1 month ago 34 6 0 1
No one has a good plan for how AI companies should work with the government | TechCrunch As OpenAI transitions from a wildly successful consumer startup into a piece of national security infrastructure, the company seems unequipped to manage its new responsibilities.

This annoys me. We do have federal procurement guidelines for AI - even under Vought & the Biden EO repeal. Reporting about this should note that this is not a normative "AI in govt" situation. The Pentagon is (& always has been) asking for *special treatment* here!
techcrunch.com/2026/03/02/o...

1 month ago 18 3 2 0

Love this! And very much the antithesis of the current "AI maximalist" corporate ethos at Spotify, Amazon, etc, and some public organizations (schools, hospitals, govt) where AI use is forced upon workers and pushed into as many use cases as conceivable, without reason & with disastrous results.

1 month ago 13 4 1 0

Interesting - I agree that simple proxies should not be the only way to get to predictable model outcomes.

Tho I worry this criterion won't be well operationalized (ie "aligned"?). My pet theory is that model predictability is likely a byproduct of training *data* properties more than anything else.

2 months ago 10 0 1 0

🥲

2 months ago 2 0 0 0

Yeah I mean that's the cost of participating in these things. In my case, he was working with an excellent producer who approached me with some great initial questions - there's no way I could have known how this would eventually materialize in the final cut of the film, and I'm ok with that.

2 months ago 4 0 1 0

AI can cause harm even if it doesn't seem to affect *you*, even if those harms aren't observable or felt in any direct way by *your* future children. Being protected from certain harms doesn't make them less real or less important to address.

Educating the public requires taking this broader view.

2 months ago 38 10 1 1

And I guess, more embarrassingly, it pushes those same folks to fixate on the far-out sci-fi type narratives of a version of AI that is so powerful or so out of control that it finally does impact them, at least in ways that are visible or legible to them. I found the whole premise a bit unsettling.

2 months ago 14 3 1 0

I think with him I saw a version of a conversation I've encountered before in silicon valley circles - "how will this affect *me*? *My* children? *My* community?" This tends to blind folks to a broader range of issues that likely won't affect them but will affect the marginalized or society at large.

2 months ago 17 2 2 0

Me talking through AI definitions actually came from a longer exchange where I was trying to convince him that the real world issues we see today (& w past "AI" tech) are already worth taking action on, especially since they affect the poor, PoC, ie. vulnerable populations that don't look like him.

2 months ago 22 3 1 0

It made me wonder: why are the real world harms perpetrated by AI today not enough for some people to feel urgency? to take action? to care?

What he was hearing from corporate execs & "doomers" was scaring him, but the real world issues me & Karen brought up didn't seem to have the same effect...

2 months ago 28 8 1 2

Interviewed for this doc years ago & have yet to see the final cut. Most of what I recall is how much Daniel had truly riled himself up - the confusion & chaos of this exaggerated bogeyman version of "AI" had excited a genuine emotional response, while successfully disguising AI's real terrors.

2 months ago 39 12 1 0

The spiciest parts (ie. the interdisciplinary panel discussions & provocations) are unfortunately not online, but the full talks are all recorded and posted here immediately as we go along -- highly recommended for those interested in the topic: simons.berkeley.edu/workshops/br...

3 months ago 4 0 0 0

Co-chairing this workshop at Simons this week & it's been amazing so far!

Brought together folks from ML, stats, law, sociology & beyond to discuss the messy middle between individual predictions & the actions/policy change individuals/orgs take in response to those predictions

3 months ago 10 0 1 1

A fascinating recent development is that the ML research community -- as the earliest adopters of "AI for research" -- is at the frontlines of dealing with all the problems that come with that (ie. reduced trust in results & reviewers, increased submission load, etc).

Every other field is next! 😭

3 months ago 25 4 0 0

So we have tons of "experimental evidence" for the effectiveness of bed nets, not because they're particularly impactful as an intervention, but because they're by far the most *studied* - for reasons that are pretty much socially convenient and honestly not too far from arbitrary lol

3 months ago 28 0 2 1

And the wildest part of why we "have the most statistical evidence" about malaria nets specifically is that researchers who wanted to show off their new causal estimation method or experiment design, and compare it to the Duflo study, kept going back to that same group & doing experiments w bed nets.

3 months ago 12 0 1 0

As in, they explored other interventions to test experimentally but dismissed them for reasons of pure convenience (ie. "we already have a working relationship with group A, who gives out bed nets") or ethics (ie. not ethical to randomly withhold malaria medicine from an at-risk population), etc.

3 months ago 13 0 1 0

EAs love malaria nets because it's supposedly the intervention where we have the "most statistical evidence" of its effectiveness. One kind of silly fact about this tho is that if you read Duflo's actual paper, the choice to do the experiments on bed nets as an intervention is pretty much arbitrary.

3 months ago 36 6 1 1
Announcement of Actions to Combat the Global Censorship-Industrial Complex - United States Department of State The State Department is taking decisive action against five individuals who have led organized efforts to coerce American platforms to censor, demonetize, and suppress American viewpoints they oppose....

The Trump administration is now barring 5 Europeans working on countering disinformation from the US, including initiating deportation proceedings against 2 of them, one a former EU commissioner. All in the name of free speech. www.state.gov/releases/off...

3 months ago 57 46 2 6

"He took every Advanced Placement class he could, earned a scholarship to Brown and worked at Wawa over the summer to make enough money to buy a laptop, according to his two sisters."

4 months ago 2453 747 31 21
What Grover and Good Will Hunting can tell us about the limits of artificial intelligence First, a quick thank you to all the new subscribers to the Cognitive Resonance Substack!

One of my very early essays explored a paper by @rajiinio.bsky.social & others that centers the story of Grover and the Everything in the Whole Wide World Museum. It is striking to me that AI hyperscalers today really believe we can encapsulate all of human experience in data. I mean, good luck.

4 months ago 12 4 3 0

Just a stream of unfortunate news this week. My heart goes out to anyone impacted by this - truly tragic. 🤍

4 months ago 5 0 0 0

It's always so funny to me when people frame this kind of AI "distrust" as an unfortunate PR issue... in actuality people have very legitimate and materially supported reasons to be skeptical - trust is earned! We shouldn't strongarm people into trusting institutions & tech that don't serve them 😳

4 months ago 24 6 0 0

yay! Thanks so much for sharing these!

4 months ago 2 0 0 0
Careers โ€” Code for America Our team is made up of empathetic people working side by side with communities and government to solve society's toughest problems

Lots of new roles just went up at Code for America! codeforamerica.org/jobs/

4 months ago 12 6 0 0