📣 Registration is open for #ConPro26, the 10th Workshop on Technology and Consumer Protection, on May 21 in SF! We're excited about this year's program, which includes [thread, 1/15]:
conpro26.ieee-security.org
Posts by Deb Raji
Tbf there were many critical perspectives included at that event and the actual discussion was handled very well! It was just an interesting moment of recognition for me about exactly how much product claims shape the discourse in the public, in research ...and in policy making in general!
Also common in policy circles. I once attended a legislative caucus event on the topic of "Artificial General Intelligence" (AGI). I was shocked lol - when I asked them where they'd learnt that word, the response was "we were talking to Anthropic, and that's what they told us they were building".
This is a challenging legal problem for NeurIPS (and other conference participants)! You might be wondering: how is this possible given the First Amendment?
I wrote a quick explainer on the current status quo of relevant First Amendment cases & law to get you up to speed.
👇👇
I've come to seriously value any opportunity I get to create a safe space for those coming up behind me.
It's incredibly important - and I know this because I'm here, able to do this work, precisely because of those that chose to endure who knows what in order to create the right space for me ❣️
This annoys me. We do have federal procurement guidelines for AI - even under Vought & the Biden EO repeal. Reporting about this should note that this is not a normative "AI in govt" situation. The Pentagon is (& always has been) asking for *special treatment* here!
techcrunch.com/2026/03/02/o...
Love this! And very much the antithesis of the current "AI maximalist" corporate ethos at Spotify, Amazon, etc, and some public organizations (schools, hospitals, govt) where AI use is forced upon workers and pushed out in as many use cases as is conceivable, without reason & to disastrous results.
Interesting - I agree that simple proxies should not be the only way to get to predictable model outcomes.
Tho I worry this criterion won't be well operationalized (ie "aligned"?). My pet theory is that model predictability is likely a byproduct of training *data* properties more than anything else.
🥲
Yeah I mean that's the cost of participating in these things. In my case, he was working with an excellent producer who approached me with some great initial questions - there's no way I could have known how this would eventually materialize in the final cut of the film, and I'm ok with that.
AI can cause harm even if it doesn't seem to affect *you*, even if those harms aren't observable or felt in any direct way by *your* future children. Being protected from certain harms doesn't make them less real or less important to address.
Educating the public requires taking this broader view.
And I guess, more embarrassingly, it pushes those same folks to fixate on the far-out sci-fi type narratives of a version of AI that is so powerful or so out of control that it finally does impact them, at least in ways that are visible or legible to them. I found the whole premise a bit unsettling.
I think with him I saw a version of a conversation I've encountered before in silicon valley circles - "how will this affect *me*? *My* children? *My* community?" This tends to blind folks to a broader range of issues that likely won't affect them but will affect the marginalized or society at large
Me talking through AI definitions actually came from a longer exchange where I was trying to convince him that the real world issues we see today (& w past "AI" tech) are already worth taking action on, especially since they affect the poor, PoC, ie. vulnerable populations that don't look like him
It made me wonder: why are the real world harms perpetrated by AI today not enough for some people to feel urgency? to take action? to care?
What he was hearing from corporate execs & "doomers" was scaring him but the real world issues me & Karen brought up didn't seem to have the same effect..
Interviewed for this doc years ago & have yet to see the final cut. Most of what I recall is how much Daniel had truly riled himself up - the confusion & chaos of this exaggerated boogeyman version of "AI" had excited a genuine emotional response, all while disguising AI's real terrors.
The spiciest parts (ie. the interdisciplinary panel discussions & provocations) are unfortunately not online, but the full talks are all recorded and posted here as we go along -- highly recommended for those interested in the topic: simons.berkeley.edu/workshops/br...
Co-chairing this workshop at Simons this week & it's been amazing so far!
Brought together folks from ML, stats, law, sociology & beyond to discuss the messy middle between individual predictions & the actions/policy change individuals/orgs take in response to those predictions
A fascinating recent development is that the ML research community -- as the earliest adopters of "AI for research" -- are at the frontlines of dealing with all the problems that come with that (ie. reduced trust in results & reviewers, increased submission load etc).
Every other field is next! 😭
So we have tons of "experimental evidence" for the effectiveness of bed nets, not because they are particularly impactful as an intervention but just because they're by far the most *studied* - for reasons that are pretty much socially convenient and honestly not too far from arbitrary lol
And the wildest part of why we "have the most statistical evidence" about malaria nets specifically is that researchers who wanted to show off their new causal estimation method or experiment design and compare it to the Duflo study, kept going back to that same group & doing experiments w bed nets.
As in, they explored other interventions to test experimentally but dismissed them for reasons of pure convenience (ie "we already have a working relationship with group A, who gives out bed nets") or ethics (ie. not ethical to randomly withhold malaria medicine from an at-risk population), etc.
EAs love malaria nets because it's supposedly the intervention where we have the "most statistical evidence" of its effectiveness. One kind of silly fact about this tho is that if you read Duflo's actual paper, the choice to do the experiments on bed nets as an intervention is pretty much arbitrary
The Trump administration is now barring 5 Europeans working on countering disinformation from the US - including initiating deportation proceedings against 2 of them, including a former EU commissioner. All in the name of free speech. www.state.gov/releases/off...
"He took every Advanced Placement class he could, earned a scholarship to Brown and worked at Wawa over the summer to make enough money to buy a laptop, according to his two sisters."
One of my very early essays explored a paper by @rajiinio.bsky.social & others that centers the story of Grover and the Everything in the Whole Wide World Museum. It is striking to me that AI hyperscalers today really believe we can encapsulate all of human experience in data. I mean, good luck.
Just a stream of unfortunate news this week. My heart goes out to anyone impacted by this - truly tragic. 🤍
It's always so funny to me when people frame this kind of AI "distrust" as an unfortunate PR issue... in actuality people have very legitimate and materially supported reasons to be skeptical - trust is earned! We shouldn't strongarm people into trusting institutions & tech that don't serve them 😳
yay! Thanks so much for sharing these!