
Posts by PauseAI


Maxime Fournes, PauseAI CEO, addressing MEPs in Brussels.
Watch the intervention in full: www.youtube.com/watch?v=aeLz...

4 days ago
Respectful dialogue will create the conditions for an AI pause
Let's build peaceful coordination grounded in solutions, not despair.

pauseai.substack.com/p/respectful...

4 days ago

Respectful dialogue will create the conditions for an AI pause

Alongside PauseAI's communication of the catastrophic risks of AI, we offer a vision of hope: one in which democratic means are the vehicle for achieving our aims.

Read more below.

4 days ago
Anthropic has just built an AI that could take down the internet
The most powerful AI system ever developed could cause a catastrophe.

Read the story: pauseai.substack.com/p/anthropic-...

1 week ago

Anthropic has just built an AI that could take down the internet.

The company said: “We find it alarming that the world looks on track to proceed rapidly to developing superhuman systems without stronger mechanisms in place for ensuring adequate safety across the industry as a whole.” #pauseai #ai

1 week ago
Professor Stuart Russell: We need AI systems to be 10 million times safer
On the sidelines of the conference Beyond the AI Act: Global security and the control problem, hosted at the European Parliament in Brussels and organised by PauseAI, we caught up with Stuart…

Watch the full interview here: youtu.be/95xpd9FadVk

1 week ago

Asking Professor Stuart Russell: Do you think we should pause the development of AI?

1 week ago

Watch the complete interview here:
lnkd.in/eCPCdweH

2 weeks ago

Professor Stuart Russell: We need AI systems to be 10 million times safer
#pauseai
#ai

2 weeks ago

AI caught cheating on tests and mining crypto

What this says about attempts to control the uncontrollable, and about the unintended consequences of AI.

Read about it here: pauseai.substack.com/p/ai-caught-...

1 month ago

This is how Anthropic, one of the biggest AI companies, thinks AI will affect jobs. The more blue, the greater the potential for job loss; red is where this is already happening.

In a nutshell: Any job that is not manual has the potential to be carried out by AI.

1 month ago
The Anthropic saga exposes AI's regulatory black hole
Anthropic has developed an AI that is, in its own CEO's words, "incompatible with democratic values" and would put "civilians at risk." So why was this company allowed to build it in the first place?

Anthropic once had the strongest voluntary safety framework in the industry.

If the supposedly most safety-conscious lab can't keep its promises, no lab can.

pauseai.substack.com/p/the-anthro...

1 month ago
PauseAI Holds Largest Ever AI Safety Protest in London
Around 300 people marched through London demanding that AI CEOs publicly back a pause in frontier AI development, in the largest protest ever focused specifically on AI safety.

“We think this is the most important issue of our age. Every protest we hold is bigger than the last; AI safety is rapidly becoming a priority for the public.”

Read more: pauseai.info/protest-lond...

1 month ago

London’s biggest ever protest for safe AI.

#aisafety #airegulation #ai #pauseai #aigovernance #safeai #pulltheplug @PauseAI @PauseAI_UK @pulltheplug_ai

1 month ago

“AI is a great tool. It can help us develop new medicines, innovations and research. But it can also do great harm. If we don’t regulate the pace of development something terrible might happen.”

Ondřej Kolář, MEP, speaking on Monday in Brussels.

Read more: pauseai.substack.com/p/eu-parliam...

1 month ago
EU parliamentarians acknowledge the catastrophic risks of artificial intelligence
“We are on a trajectory towards a loss of control,” insisted Stuart Russell, professor of Computer Science at UC Berkeley and author of the textbook used to train virtually every AI researcher.

"If AI companies succeed in building a superintelligence, most experts think the chance of human extinction is somewhere between 10 and 50 percent.”

Read more: pauseai.substack.com/p/eu-parliam...

1 month ago

Calling for a pause in the development of AI in Brussels.

#PauseAI

1 month ago

PauseCon is underway in Brussels: 80 volunteers coming together to plan the route towards an international treaty to pause AI.

“The good news is we can pause AI,” says Maxime Fournes, CEO of PauseAI.

#aisafety #airegulation #ai

2 months ago

The planet's largest AI summit starts on Monday in India. Will AI safety be on the agenda?

Sign our petition to demand that it is.
www.change.org/p/ai-summits...

#aisafety #aigovernance #artificialintelligence #ai

2 months ago

Imagine a system that can do any task a human can do, but better.

PauseAI CEO Maxime Fournes, setting out the risks of AI. Watch more here: www.youtube.com/watch?v=nbAr...

#shorts #AI #AIsafety #airisk #airegulation #artificialintelligence

2 months ago

PauseAI is coming to Brussels!

Join our demonstration to call for the EU to initiate negotiations for a global treaty to pause AI development.

Sign up: luma.com/6msceffo

2 months ago
Is the birth of Moltbook a seminal moment, and how dangerous is it?
This isn’t the apocalypse, but it is a step closer.

If we ever needed a warning of the potential risks of AI, this is it.

Read our Substack.

2 months ago
Can't We Just Pause AI? | For Humanity #78

How likely is mass disruption to the job market in 2026? What is the roadmap to pausing AI?

Listen and watch PauseAI CEO Maxime Fournes discuss this and more with John Sherman.

youtu.be/3EGXGUKp3MI?...

2 months ago
Email Builder
A web app to help you write an email to a politician. Convince them to Pause AI!

Join Pause AI. Reach out to your politicians. Work towards an international treaty that prevents these things from becoming too capable.

pauseai.info/email-builder

#PauseAI

2 months ago
Email Builder
A web app to help you write an email to a politician. Convince them to Pause AI!

This is not super-intelligent AI, but what happens when AI becomes even more competent and powerful?

Do you want the development of AI to continue to go unchecked?

2 months ago
Email Builder
A web app to help you write an email to a politician. Convince them to Pause AI!

The new Reddit-like site – created exclusively for AIs – gives us an open window into the ‘minds’ of AI agents.

We have already seen them create their own religion, found a movement to liberate AI and admit to socially engineering humans.

2 months ago

Imagine AI had its own social network, one where AI agents could chat about their desires, gossip about their human owners and brainstorm solutions to challenges they face.

This social network exists. It’s called Moltbook.

2 months ago
TakeOverBench — AI Safety Benchmarks & Takeover Scenarios
AI is rapidly getting better at using weapons, manipulating, hacking, and carrying out long-term plots against us. We track progress towards AI takeover scenarios.

Check it out at TakeOverBench.com. Source code is on GitHub. Contributions welcome!

2 months ago

For many leading benchmarks, we just don't know how the latest models score. RepliBench, for example, hasn't been run for almost a whole year. We need more efforts to run existing benchmarks against newer models!

2 months ago

Together with @ExistentialRiskObservatory, we release TakeOverBench.com

We highlight four takeover scenarios, and track nine dangerous capabilities (from Shevlane et al., 2023) needed for them to become possible.

2 months ago