Dive deeper into these complex moral questions and explore how a simple thought experiment has reshaped our relationship with philanthropy, AI, and our ethical duties to one another. Full series here: www.vox.com/future-perfe...
Posts by Bryan Walsh
.@tobyord.bsky.social argues we're at a critical "precipice"—where our actions today shape humanity's entire future. But even @petersinger.info, the original inspiration, questions this focus on AI existential risk.
From billionaires funding AI labs to college students changing their career paths to "save humanity," the push to prevent an AI apocalypse has reshaped philanthropy and activism. But at what cost to present-day issues?
Effective Altruism evolved from helping today's "drowning children" (like tackling malaria) to focusing intensely on preventing future existential threats—especially from AI. But is that shift justified?
Singer's provocative thought experiment inspired the Effective Altruism movement, shaping how some of the world's most influential people think about doing good. But has the idea gone too far? Listen here: megaphone.link/VMP7974297060
🧵 Episode 3 of @vox.com's Future Perfect Good Robot podcast series with Unexplainable asks a powerful question: If you saw a child drowning right in front of you, would you save them—even if it ruined your suit and made you late? Easy, right? But philosopher @petersinger.info pushes this further...
Will fears of AI apocalypse distract us from the urgent, everyday harms already caused by biased algorithms? Dive into the Good Robot series, as we separate hype from reality, safety from ethics, and see what it takes to make AI truly good. Explore here: www.vox.com/future-perfe...
Future Perfect's own @sigalsamuel.bsky.social joins host Julia Longoria to unpack the deeper issue: how AI often echoes religion—complete with rival denominations, prophets, and apocalyptic fears that distract from immediate ethical problems.
Meanwhile, researcher Dr. Joy Buolamwini found facial recognition tech couldn't recognize her face—unless she wore a white mask. Biases hidden in training data had profound real-world consequences, including wrongful arrests.
AI pioneer Dr. Margaret Mitchell once trained a system to describe images—only to see it mistakenly label devastating explosions as "awesome." Why? Because it learned from the internet, where sunsets are "awesome," but human suffering goes unseen.
Episode 2 of Good Robot, our collaboration between @voxdotcom.bsky.social's Unexplainable and Future Perfect, is here! This week, we explore how good intentions in AI research can sometimes lead to deeply troubling results.
If you think news should reflect all of reality—not just the worst parts—share this. We don’t need delusion, but we do need balance. Let’s make Good News a counterweight to the negativity overload. You can read the first edition here www.vox.com/future-perfe...
Skeptical? I get it. But try one edition and see if it changes how you see the world. Worst case? You’ll have something new to argue about. Best case? You’ll realize the future isn’t as bleak as it seems. Sign up here: www.vox.com/pages/good-n...
This is what Good News is about—not sugarcoating reality, but seeing all of it. If you think the world is beyond saving, you’re not paying attention. Optimism isn’t naive—it’s necessary for solving real problems. We need to see what works.
We’re wired for negativity. The media amplifies disaster, outrage, and conflict because that’s what gets clicks. But what if I told you the world is improving in ways that barely make the headlines? And that ignoring this makes us less informed?
2/ You don’t have to be an optimist to read Good News. You just have to want a fuller, more accurate view of the world. Every week, I’ll highlight real, measurable progress that’s too often ignored. Sign up and judge for yourself: www.vox.com/pages/good-n...
1/ 🚨 NEW: I’m launching Good News, a newsletter for @vox.com that challenges one of the most overlooked biases shaping our worldview—the bias for bad news. We overestimate crisis and underestimate progress. If we care about accuracy, we need to fix that.