My presentation at the online "Ensuring Safe and Responsible AGI" event today will cover four possible futures. From 5:30pm UK time. Free to attend. Details and registration: luma.com/vjuw8t86?tk=...
Posts by David Wood
If we care about upholding human flourishing and social justice, we need to embrace active transhumanism - my presentation, and astute replies from Professor Tracy Trothen and a number of London Futurists audience members, recorded yesterday www.youtube.com/watch?v=SZXc...
Abundance won't arrive unless it's clearer to many more people how it will benefit everyone - hence the vital need to highlight concrete positive steps that can be taken, here and now. Otherwise a destructive techlash will gather momentum
"A roadmap to abundant futures" - there were some great exchanges of ideas in this "Bread and Robots" conversation between Matteo Rossi MacDermant and yours truly breadandrobots.substack.com/p/a-roadmap-...
This Saturday, join 3 authors from the recently published collection of essays "Technologies of the Future Self: An Ethics for Transhumanist Flourishing" in a Zoom webinar discussing the strengths & weaknesses of transhumanist narratives about future human flourishing www.meetup.com/london-futur...
The conference will feature a keynote by HRH Dr. Haya Al Saud (Senior Vice President of Research at Hevolution), alongside prominent scientists such as Cynthia Kenyon, Felipe Sierra, Michael West and Aubrey de Grey. Details here www.berkeleycal.org
Anyone in or near Berkeley on 2-3 May should consider attending BerkeleyCAL, to join a stellar gathering of scientists, technologists, and global thought leaders, in a collective exploration of the frontiers of aging, longevity science, and their broader societal impact
For one example among many, see this thread of mine from a few days ago x.com/dw2/status/2...
Caveat: I'm only 65% of the way through listening to it (the audiobook is just over 15 hours long), and I can't be sure what I'll think of the final 35%.
But so far, I'm finding it fascinating. Every chapter is challenging me to reconsider some issues that I often think about
The new biography of Demis Hassabis, "The Infinity Machine", by Sebastian Mallaby, is *much* more interesting than I had anticipated.
Having read lots about Demis and DeepMind over the years, I didn't think I would learn much from it. I was wrong.
Two possible futures with advanced AI - my topic at the online event on 21st April with Jerome C. Glenn and Paul Epping of The Millennium Project. Hosted by Chris Parker of Ebullient. At 5:30pm UK time. Free to attend. Register here: luma.com/vjuw8t86?tk=...
Recorded for Easter 2026: "Resurrecting Humanity in the Age of AGI" - when astute comments by fellow panellist Roman Yampolskiy made me think harder than in almost every previous AI safety conversation. Thanks to Mihaela Ulieru for the questions! www.youtube.com/watch?v=XYa3...
Most of us have seen someone we love age faster than they should.
The biology of aging is no longer a mystery. The science exists to slow or even reverse it.
What’s missing?
Urgency.
Join the April 8 global rally via livestream
luma.com/b7b2d2n2?tk=...
#FundLongevity
A well-written article, evaluating the case for action using the ITN framework (Importance, Tractability, Neglectedness): "Rallies to support anti-aging research on April 8" forum.effectivealtruism.org/posts/pPoLeL...
And here are the themes that speakers are being encouraged to address. (The list is subject to change as plans for the event develop.) Which themes are the ones that strike you as being the most important?
Here's the overall framing for the event
Just announced, Sat 19th and Sun 20th Sept: "The Technoprogressive Opportunity: The future of ethics and emerging technologies". Jointly hosted by IEET and London Futurists. Free to attend. Speakers will be announced shortly, but RSVPs are already open at www.meetup.com/london-futur...
I strongly recommend this conversation, "How to Talk About AI Risk Without Scaring People Away", between Philip Trippenbach of the Seismic Foundation and John Sherman of the AI Risk Network www.youtube.com/watch?v=AAw2...
The article ends with an appeal to "flatten the curve" - in the general sense of “flattening the development of potentially dangerous AI”. Read the entire article here: magazine.mindplex.ai/post/the-big...
The real enemy we’re all facing is not each other, but confusion, distraction, fear, and ignorance. If we fail to overcome these shared enemies, the next major crisis - whether biological or technological - will truly be “the big one”.
Countering the anti-precautionary stance will be far from straightforward. That stance has deep roots in dangerous aspects of human psychology, and those roots won't be easy to dislodge. We'll need more than wishful thinking, tub-thumping bravado, and pleas for culture change.
My article highlights overlaps and psychological similarities between two groups who advocate anti-precautionary stances: those who disregard warnings about exponential growth of pandemics, and those who disregard warnings about exponential growth of dangerous AI systems.
As I argue in the article, opposition to the precautionary principle has become a kind of fad. But if we really understand risks such as “The Big One”, we’ll see that such "let it rip" opposition is a major mistake.
"The greatest danger is not that we fail to foresee catastrophe, but that we foresee it and still fail to act." That's from my latest article for Mindplex, "The Big One", where I reflect on key insights from the book of that name by Dr. Michael T. Osterholm and Mark Olshaker
Which argument would *you* select, as potentially the most challenging objection to funding research into how to reverse aging?
For more details about the event, and to register to attend, see www.meetup.com/london-futur...
My plan for the London Futurists event on 8th April: encourage attendees to split into tables, pick one of these 14 arguments against funding longevity science (or another of their own), and steelman it before presenting it to the room for a wider debate
It's encouraging to see the topic of AI safety reaching mainstream audiences via this 70-minute episode from Oprah. The personal narratives shared by different audience guests lead to a powerful call to action www.youtube.com/watch?v=wKrm...