Posts by Future of Life Institute
"Superintelligent AI will finally cure cancer" is the promise many AI companies are making to governments, investors, and consumers alike.
As they claim: more intelligence = more chances of finding a cure.
But as FLI's Dr. Emilia Javorsky lays out in a new essay, it's not nearly that simple.
🧵1/2
Although superintelligence may not be the magical cure-all CEOs are making it out to be, Emilia explains how - with intentionally dedicated resources - AI can help us find a cure for this devastating illness.
🔗 Read Emilia's diagnosis and proposed roadmap now, at curecancer.ai
Today, a broad coalition issued the Pro-Human AI Declaration, defining the goals of the growing Pro-Human Movement in response to Silicon Valley's destructive race to replace humans.
🧵1/3
Leaders from both the Left and Right; parents; faith groups; labor unions; civil society organizations; and others came together to agree on 33 AI principles across 5 key themes ⬇️
🧵2/3
1. Keeping Humans in Charge
2. Avoiding Concentration of Power
3. Protecting the Human Experience
4. Human Agency and Liberty
5. Responsibility and Accountability for AI Companies
🔗 Read the full Declaration & add your name now: humanstatement.org
🧵3/3
Statement from Max Tegmark, Founder and Chair of the Future of Life Institute, following Anthropic's refusal of the Department of War's ultimatum:
🧵1/7
“Fully autonomous weapons systems and Orwellian AI-enabled domestic mass surveillance are affronts to our dignity and liberty.
🧵2/7
We highly commend Anthropic, OpenAI, and leading researchers from across AI companies for standing up for the principle that AI should never be used to kill people without meaningful human control, and that domestic mass surveillance of US citizens is a red line that should never be crossed.
🧵3/7
We call on all AI companies to follow suit. However, our safety and basic rights must not be at the mercy of a company's internal policy; lawmakers must work to codify these overwhelmingly popular red lines into law.
🧵4/7
All AI systems should be under meaningful human control. This is especially true for those that could be used in the taking of human lives. Moreover, current AI systems are inherently unpredictable and fundamentally brittle, unsuited for very high-stakes applications.
🧵5/7
Even if they could be made effective, fully autonomous weapons would pose a threat not just to human dignity and liberty but to American national security:
🧵6/7
They could inadvertently fuel escalation, and would easily proliferate, putting cheap, accessible weapons of assassination and mass destruction in the hands of non-state actors and adversaries. They should be prohibited by the US and globally.”
🧵7/7
💼 We're hiring! 💼
👉 FLI is looking for a Communications Associate to support our outreach team with media relations, social media, and more.
📍Remote from the U.S., preferably on the West Coast.
🔗 Learn more and apply by March 20th at the link in the replies, and please share widely!
AI companies are racing to build superintelligent AI, despite its many risks.
Let's take our future back.
📝 Sign the Superintelligence Statement and join the growing call to ban the development of superintelligence until it can be done safely: superintelligence-statement.org
#KeepTheFutureHuman
🎨 New Keep the Future Human creative contest!
💰 We're offering $100K+ for creative digital media that brings the key ideas in Executive Director Anthony Aguirre's Keep the Future Human essay to life, to reach wider audiences and inspire real-world action.
🔗 Learn more and enter by Nov. 30!
🚨 New AI systems.
❓ Growing uncertainty.
🤝 One shared future, for us all to shape.
"Tomorrow’s AI", our new scrollytelling site, visualizes 13 interactive, expert-forecast scenarios showing how advanced AI could transform our world - for better, or for worse: www.tomorrows-ai.org
‼️📝 Our new AI Safety Index is out!
➡️ Following our 2024 index, 6 independent AI experts rated leading AI companies - OpenAI, Anthropic, Meta, Google DeepMind, xAI, DeepSeek, and Zhipu AI - across critical safety and security domains.
So what were the results? 🧵👇
Key takeaways:
1️⃣ The AI industry is fundamentally unprepared for its own stated goals.
2️⃣ Capabilities are accelerating faster than risk-management practice, and the gap between firms is widening.
🧵
3️⃣ Only 3 of the 7 firms - Anthropic, OpenAI, and Google DeepMind - report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism.
4️⃣ Whistleblowing policy transparency remains a weak spot.
5️⃣ Anthropic received the best overall grade (C+).
🧵
6️⃣ OpenAI secured second place, ahead of Google DeepMind.
7️⃣ Chinese AI firms Zhipu AI and DeepSeek received failing overall grades.
🧵
👉 As reviewer Stuart Russell put it, “Some companies are making token efforts, but none are doing enough… This is not a problem for the distant future; it’s a problem for today.”
🔗 Read the full report now: futureoflife.org/ai-safety-in...
‼️ Congress is considering a 10-year ban on state AI laws, blocking action on risks like job loss, surveillance, disinformation, and loss of control.
It’s a huge win for Big Tech - and a big risk for families.
✍️ Add your name and say no to the federal block on AI safeguards: FutureOfLife.org/Action
🆕 📻 New on the FLI podcast, Zvi Mowshowitz (@thezvi.bsky.social) joins to discuss:
- The recent hot topic of sycophantic AI
- Time horizons of AI agents
- AI in finance and scientific research
- How AI differs from other technology
And more.
🔗 Tune in to the full episode now at the link below:
‼️ On April 26, 100+ AI scientists convened at the Singapore Conference on AI to produce the just-released Singapore Consensus on Global AI Safety Research Priorities. 🧵⬇️
➡️ The Singapore Consensus, building on the International AI Safety Report backed by 33 countries, aims to enable more impactful R&D to quickly create safety and evaluation mechanisms, fostering a trustworthy, reliable, secure ecosystem where AI is used for the public good.