NIST used to do impressive work. I’m confused. youtu.be/ITXnzEq5vqg?...
Posts by Borhane Blili-Hamelin
Never mind that attributing capabilities specifically to general intelligence is difficult. Never mind that intelligence is difficult to define. Never mind that benchmarks don’t measure what we think they measure. Just trust the authors’ inference to the ‘best’ explanation.
The evidence is clear that these Nature authors enjoy strawman arguments about #agi. www.nature.com/articles/d41...
(the 'is darwinism just blind search' was a problem that evolutionary biologists took seriously, was debated throughout the 20th century, and the consensus is if you consider developmental factors constraining paths of development, it's not blind search. en.wikipedia.org/wiki/History... )
PSA, don’t go around hoping copy/pasting your pet psychometrics tricks will help. That’s bad science, bad politics, and bad engineering.
❌ www.agidefinition.ai
ML needs to treat the social, ethical, and political character of intelligence more seriously.
🔌 dl.acm.org/doi/10.1145/...
Which is why AGI isn’t a technical construct.
🔌 dl.acm.org/doi/10.5555/...
🔌 openreview.net/forum?id=1Rl...
Depends on whether they have anything to do with academic publishing?
I wanted to get upset about it but not in the mood for rage posting. I’ll plug papers though:
dl.acm.org/doi/10.1145/...
dl.acm.org/doi/10.5555/...
openreview.net/forum?id=1Rl...
Happy Henry Threadgill record release day!
I don’t mean the washed out ahistorical sense.
This hits home. I’ve long been mistrustful of moral and political conviction, including for values I cherish. Samantha argues that we live in a time when there’s no path for liberalism as a project without shedding complacency about the need to argue for liberal values.
Urgh
Call for Papers! 🧵
The Public's Science–A New Social Contract for American Research Policy a Special Issue of The ANNALS of the American Academy of Political and Social Science
Editors: Alondra Nelson (IAS) and Jenny Reardon (UC, Santa Cruz)
Abstract Deadline: Sept 19
www.ias.edu/stsv-lab/pub...
LLMs are making the internet less secure because they generate false bug reports and overwhelm cybersecurity professionals. Some are halting their bug bounty programs as a result.
Don’t miss my rock-star co-authors! Happening today!!!!
I still blame Starbucks
I'll just say this: I find it just as offensive when Americans are against people moving to a different city as I do when they think I'm not allowed in America.
Our work deconstructing AGI as a concept is also now a reference in www.aisnakeoil.com/p/agi-is-not....
If you're a podcast person, other coverage includes this podcast from @techpolicypress.bsky.social! www.techpolicy.press/should-agi-r...
Our new work deconstructing the concept of "AGI", led by the brilliant @borhane.bsky.social in collaboration with @graziul.bsky.social , Hananel Hazan, @shiridoshi.bsky.social +more co-authors below, has been accepted to ICML. 🤗 We discuss the many "traps" that... 🧵 1/
arxiv.org/abs/2502.03689
Good news: this will appear in the ICML position paper track!
Accepted at @icmlconf.bsky.social, position paper track!!!
🧑🔬 Happy that our article on Creating a Generative AI Evaluation Science, led by @weidingerlaura.bsky.social & @rajiinio.bsky.social, is now published by the National Academy of Engineering. =) www.nae.edu/338231/Towar...
It describes how to mature evaluation so that systems can be worthy of trust and safely deployed.
A new paper by D&S’s Jacob Metcalf & Ranjit Singh, & affiliate @borhane.bsky.social, examines how emerging experimental publics are reshaping who gets to evaluate genAI, and how public feedback is being mobilized for safety, accountability, & democratic oversight. knightcolumbia.org/content/expe...
Just published - Experimental Publics: Democracy and the Role of Publics in GenAI Evaluation, by Jacob Metcalf, Ranjit Singh, and @borhane.bsky.social, the second essay written for our AI & Democratic Freedoms symposium (happening tomorrow and Friday!) knightcolumbia.org/content/expe...
I’m very confused about why research on LLM consistency and research on so-called hallucinations seem siloed off from each other.
Shadow hands with noisy hands overlaid; a title reads “Noisy Joints: notes from a Critical AI Puppet Workshop” with a World Puppetry Day sticker.
Shadow hands overlaid with noisy hands: text reads, “Creating AI puppet shadows: Record yourself casting shadows against the noise within an AI model. Generate new hands from your human noise. Let the hands meet.”
A drawing of an AI-generated glitch by Camila Galaz. A note: “Infinite Possibility -> Constraint Constraint Constraint. Puppet as glitch / performing the glitch. Plausibility. Who is the director? Who is the audience? Our understanding of the technology = puppeteers (string).” Beneath it: a sketch of a stick figure labeled “data” with an arrow to a stick figure called “self,” and a stick figure named “predicted self” with an arrow leading back to data.
We have a little Zine preview on Instagram from “Noisy Joints,” our Critical AI Puppetry Workshop at the Mercury Store (with Emma Wiseman, @camilagalaz.bsky.social, @isilitke.bsky.social) for #worldpuppetryday2025 (themed “Robots, AI and the Dream of the Puppet”).
Most researchers don’t believe AGI is coming any time soon. But policymakers are steering policy toward AGI anyway. In this article for Tech Policy Press, a look at the distortions the AGI frame introduces into policymaking. #ai #criticalai #aipolicy In @techpolicypress.bsky.social
New piece in Tech Policy Press builds on the collective work from the “AGI Should Not Be The North Star Goal of AI Research” paper with my own argument — that AGI frames also distort the perspective of policymakers shaping AI regulation.
www.techpolicy.press/most-researc... #aipolicy