
Posts by Future of Life Institute

Preview
AI vs. Cancer: How AI Can, and Can't, Cure Cancer

Although superintelligence may not be the magical cure-all that CEOs are making it out to be, Emilia explains how, with intentionally dedicated resources, AI can help us find a cure for this devastating illness.

🔗 Read Emilia's diagnosis and proposed roadmap now at curecancer.ai

1 month ago
Post image

"Superintelligent AI will finally cure cancer" is the promise many AI companies are making to governments, investors, and consumers alike.

As they claim: more intelligence = more chances of finding a cure.

But as FLI's Dr. Emilia Javorsky lays out in a new essay, it's not nearly that simple.

🧵1/2

1 month ago
Preview
The Pro-Human AI Declaration

1. Keeping Humans in Charge
2. Avoiding Concentration of Power
3. Protecting the Human Experience
4. Human Agency and Liberty
5. Responsibility and Accountability for AI Companies

🔗 Read the full Declaration & add your name now: humanstatement.org

🧵3/3

1 month ago

Leaders from both the Left and the Right, parents, faith groups, labor unions, civil society organizations, and others came together to agree on 33 AI principles across 5 key themes ⬇️

🧵2/3

1 month ago
Post image

Today, a broad coalition issued the Pro-Human AI Declaration, defining the goals of the growing Pro-Human Movement in response to Silicon Valley's destructive race to replace humans.

🧵1/3

1 month ago

They could inadvertently fuel escalation, and would easily proliferate, putting cheap, accessible weapons of assassination and mass destruction in the hands of non-state actors and adversaries. They should be prohibited by the US and globally.”

🧵7/7

1 month ago

Even if they could be made effective, fully autonomous weapons would pose a threat not just to human dignity and liberty but to American national security:

🧵6/7

1 month ago

All AI systems should be under meaningful human control. This is especially true for those that could be used in the taking of human lives. Moreover, current AI systems are inherently unpredictable and fundamentally brittle, unsuited for very high-stakes applications.

🧵5/7

1 month ago

We call on all AI companies to follow suit. However, our safety and basic rights must not be at the mercy of a company's internal policy; lawmakers must work to codify these overwhelmingly popular red lines into law.

🧵4/7

1 month ago

We highly commend Anthropic, OpenAI and leading researchers from across AI companies for standing up for the principle that AI should never be used to kill people without meaningful human control, and that domestic mass surveillance of US citizens is a red line that should never be crossed.

🧵3/7

1 month ago

“Fully autonomous weapons systems and Orwellian AI-enabled domestic mass surveillance are affronts to our dignity and liberty.

🧵2/7

1 month ago

Statement from Max Tegmark, Founder and Chair of the Future of Life Institute, following Anthropic's refusal of the Department of War's ultimatum:

🧵1/7

1 month ago
Future of Life Organizations - Communications Associate

The Future of Life Institute (FLI) is hiring a Communications Associate to join our fast-paced and dynamic team! Our outreach projects currently include (but are not limited to) top-tier press outreac...

jobs.lever.co/futureof-lif...

1 month ago
Post image

💼 We're hiring! 💼

👉 FLI is looking for a Communications Associate to support our outreach team on media relations work, social media, and more.

📍 Remote from the U.S., preferably on the West Coast.

🔗 Learn more and apply by March 20th at the link in the replies, and please share widely!

1 month ago
Preview
Statement on Superintelligence

“We call for a prohibition on the development of superintelligence, not lifted before there is (1) broad scientific consensus that it will be done safely and controllably, and (2) strong public bu...

Add your name: superintelligence-statement.org

5 months ago
Post image

AI companies are racing to build superintelligent AI, despite its many risks.

Let's take our future back.

📝 Sign the Superintelligence Statement and join the growing call to ban the development of superintelligence until it can be done safely: superintelligence-statement.org

#KeepTheFutureHuman

5 months ago
Preview
Creative Contest - Keep The Future Human

$100,000+ in prizes for creative digital media that engages with the essay's key ideas, helps them to reach a wider range of people, and motivates action in the real world.

keepthefuturehuman.ai/contest/

6 months ago
Post image

🎨 New Keep the Future Human creative contest!

💰 We're offering $100K+ for creative digital media that brings the key ideas in Executive Director Anthony Aguirre's Keep the Future Human essay to life, to reach wider audiences and inspire real-world action.

🔗 Learn more and enter by Nov. 30!

6 months ago
Video

🚨 New AI systems.

❓ Growing uncertainty.

🤝 One shared future, for us all to shape.

"Tomorrow’s AI", our new scrollytelling site, visualizes 13 interactive, expert-forecast scenarios showing how advanced AI could transform our world, for better or for worse: www.tomorrows-ai.org

8 months ago
Preview
2025 AI Safety Index - Future of Life Institute

The Summer 2025 edition of our AI Safety Index, in which AI experts rate leading AI companies on key safety and security domains.

👉 As reviewer Stuart Russell put it, “Some companies are making token efforts, but none are doing enough… This is not a problem for the distant future; it’s a problem for today.”

🔗 Read the full report now: futureoflife.org/ai-safety-in...

9 months ago

6️⃣ OpenAI secured second place, ahead of Google DeepMind.

7️⃣ Chinese AI firms Zhipu AI and DeepSeek received failing overall grades.

🧵

9 months ago

3️⃣ Only 3 out of 7 firms report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism (Anthropic, OpenAI, and Google DeepMind).

4️⃣ Whistleblowing policy transparency remains a weak spot.

5️⃣ Anthropic received the best overall grade (C+).

🧵

9 months ago

Key takeaways:
1️⃣ The AI industry is fundamentally unprepared for its own stated goals.

2️⃣ Capabilities are accelerating faster than risk-management practice, and the gap between firms is widening.

🧵

9 months ago
Post image

‼️📝 Our new AI Safety Index is out!

➡️ Following our 2024 index, 6 independent AI experts rated leading AI companies - OpenAI, Anthropic, Meta, Google DeepMind, xAI, DeepSeek, and Zhipu AI - across critical safety and security domains.

So what were the results? 🧵👇

9 months ago
Video

‼️ Congress is considering a 10-year ban on state AI laws, blocking action on risks like job loss, surveillance, disinformation, and loss of control.

It’s a huge win for Big Tech - and a big risk for families.

✍️ Add your name and say no to the federal block on AI safeguards: FutureOfLife.org/Action

10 months ago
Preview
Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)

On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity’s uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI’s growing influence in financial trading.

You can follow Zvi's excellent blog here: https://thezvi.substack.com

Timestamps:
00:00:00 Preview and introduction
00:02:01 Sycophantic AIs
00:07:28 Bottlenecks for AI agents
00:21:26 Are benchmarks useful?
00:32:39 AI agent time horizons
00:44:18 Impact of automating research
00:53:00 Limits to scaling inference compute
01:02:51 Will the future go well for humanity?
01:12:22 A good plan for safe AI
01:26:03 What makes AI different?
01:31:29 AI in trading

bit.ly/434PInO

11 months ago
Video

🆕 📻 New on the FLI podcast, Zvi Mowshowitz (@thezvi.bsky.social) joins to discuss:

- The recent hot topic of sycophantic AI
- Time horizons of AI agents
- AI in finance and scientific research
- How AI differs from other technology
- And more.

🔗 Tune in to the full episode now at the link below:

11 months ago
Preview
The Singapore Consensus on Global AI Safety Research Priorities

Building a Trustworthy, Reliable and Secure AI Ecosystem. Read the full report online, or download the PDF.

🔗 Read more about these AI safety research priorities: aisafetypriorities.org

11 months ago

➡️ The Singapore Consensus, building on the International AI Safety Report backed by 33 countries, aims to enable more impactful R&D to quickly create safety and evaluation mechanisms, fostering a trustworthy, reliable, and secure ecosystem where AI is used for the public good.

11 months ago
Post image

‼️ On April 26, 100+ AI scientists convened at the Singapore Conference on AI to produce the just-released Singapore Consensus on Global AI Safety Research Priorities. 🧵⬇️

11 months ago