This work shows how easily bias can form from basic cognitive principles, giving us a powerful new model to study the roots of discrimination.
Collaboration with my colleagues at DeepMind, and with Suzanne Sadedin and Wil Cunningham.
pnas.org/doi/10.1073/...
Posts by Edgar Duenez-Guzman
🔬 Finding #2: We distrust what we don't know.
Even without pre-programmed prejudice, when agents interacted more with their "own kind," they began to discriminate.
They learned to spot good/bad actors in their own group but saw the "other" group as one big, unknown risk.
🔬 Finding #1: Bias is a mental shortcut.
AI agents learned to take the "easy path." Instead of judging individuals on their actions, they used group identity as a proxy for trustworthiness because it was faster.
The good news? An unbiased tool, like a reputation system, fixed it.
I once hoped discrimination was a dying relic. But what if it's a cognitive "bug"?
At #DeepMind, we asked a critical question: Could bias emerge on its own, even in AI without any human social baggage?
Our new research in @pnas.org has some startling answers.
#AI #Bias #Psychology #CognitiveScience
Honoured to give a Keynote at Doing AI Differently organised by The Alan Turing Institute, @edinburgh-uni.bsky.social, and @ukri.org.
"Defending human autonomy in the age of AI enshittification": what I consider to be the single most pressing problem facing humanity.
vimeo.com/event/498174...
I finished a video: youtu.be/1_JbJTeLZJs
New:
DOGE's spending has been secret.
No longer.
My colleagues have uncovered it.
www.propublica.org/article/doge...
The cruelty is the point, as @adamserwer.bsky.social wrote in 2018
www.theatlantic.com/ideas/archiv...
I think most people are intuitively aware of this, but something like the official White House account posting an "ASMR" video of shackled immigrants is not just cruel for its own sake. It's intentional envelope-pushing, meant to desensitize people to inhumane detention, camps, and deaths.
Another example of bad AI practices, with a great take from Pluralistic. TL;DR, no, you cannot train an AI Vision model to predict MBA performance from headshots (other than to measure preexisting human bias).
pluralistic.net/2025/02/17/c...
Here's how to save trillions in government spending... Hint, Elon wouldn't like it:
prospect.org/economy/2025...
@sharky6000.bsky.social needs a peer into the future from the oracle! 😁
Ah, but that is a completely different question. For your original question I was assuming you meant "for humans", in which case it's probably true... There's only one fundamental way to express algorithms and systems, and all instantiations (prog langs) are isomorphic. But for aliens, likely not!
Fantastic take on... well... rationality, mostly, but specifically about how revolutionaries need to think about consequences of their tearing down the status quo
pluralistic.net/2025/01/13/w...
A brilliant colleague and wonderful soul, Felix Hill, recently passed away. This was a shock, and in an effort to sort some things out, I wrote them down. Maybe this will help someone else, but at the very least it helped me. Rest in peace, Felix, you will be missed. www.janexwang.com/blog/2025/1/...
Felix Hill and some other DMers and I after cold water swimming at Parliament Hill Lido a few years ago
Felix Hill was such an incredible mentor — and occasional cold water swimming partner — to me. He's a huge part of why I joined DeepMind and how I've come to approach research. Even a month later, it's still hard to believe he's gone.
I really have no clue what your point is here. Sure, people in any field can be atrociously biased, wrong, or even malicious. We set up processes to achieve improvements of understanding _despite_ human biases. What's the alternative? What are you criticising, and what's better?
Huh? That's just not true. You might argue hubris is dangerous and can lead one to commit logical fallacies, sure. But that would be like saying to a medical doctor: "thinking you are systematically diagnosing is proof you are bad at it." Wtf?
Puzzling that you state "as an epidemiologist" but then hedge by saying this is your personal opinion... Why an appeal to expertise with a vague message that sounds antivax? COVID might not be the worst disease, but there's evidence of long-term lung damage with reinfections. You can be unconcerned
For Maths fans, 2025 is a square.
45² = 45 x 45 = 2025
Also,
9² x 5² = 2025
40² + 20² + 5² = 2025
My favourite?
1³+2³+3³+4³+5³+6³+7³+8³+9³ = 2025
#Mathematics #teaching #education
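The identities above are easy to verify yourself; here is a quick sanity check in plain Python (nothing beyond the standard library, no assumptions beyond the equations in the post):

```python
# 2025 is a perfect square.
square = 45 ** 2

# It factors as a product of squares.
product_of_squares = 9 ** 2 * 5 ** 2

# It is a sum of three squares.
sum_of_squares = 40 ** 2 + 20 ** 2 + 5 ** 2

# And it is the sum of the first nine cubes.
sum_of_cubes = sum(n ** 3 for n in range(1, 10))

assert square == product_of_squares == sum_of_squares == sum_of_cubes == 2025
```

The cube identity is no coincidence: 1³ + 2³ + ... + n³ = (1 + 2 + ... + n)², and 1 + 2 + ... + 9 = 45.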
A real photo and perfect metaphor heading into 2025.
OpenAI whistleblower Suchir is dead. His mother has reasons to suspect foul play, below.
I hope that SFPD will reinvestigate.
His parents deserve justice, and given how central Suchir was to upcoming legal cases with hundreds of billions of dollars at stake, the world deserves answers.
In collective intelligence, smarter units are not always better:
www.pnas.org/doi/10.1073/...
For AI the lesson is that just making AIs more intelligent isn't necessarily going to make them better tools (or companions, or assistants)
Breaking news: A 2020 paper that sparked widespread enthusiasm for hydroxychloroquine as a #COVID19 treatment has been retracted, following campaigning by scientists who alleged the research contained major scientific flaws and may have breached ethics regulations. scim.ag/4iR9bQ6
Yeah, but what's the fun in that? /s
More seriously, I get that being responsible is good and virtuous, but with the crazy incentives of competition in AI and tech in general, why would any rational company _not_ exploit the hype?
XD... but what about those who like spending 7 days in the office surrounded by colleagues for great intellectual discussion? Wouldn't they be left behind on those lonely days without the full team? /s
Compiled versus interpreted 😁