Advertisement · 728 × 90

Posts by Existential Risk Observatory

Video

Maxime Fournes, Pause AI CEO, addressing MEPs in Brussels.
Watch the intervention in full: www.youtube.com/watch?v=aeLz...

5 days ago 4 2 0 0
Post image

The planet's largest AI summit starts on Monday in India. Will AI safety be on the agenda?

Sign our petition to demand that it is.
www.change.org/p/ai-summits...

#aisafety #aigovernance #artificialintelligence #ai

2 months ago 3 2 0 0
Sen. Bernie Sanders' AI warning
Sen. Bernie Sanders' AI warning YouTube video by CNN

This seems like an obvious political chance. It is hopeful that @sanders.senate.gov is on the ball here. We're waiting for others to follow.

youtu.be/zJHYVzB4Nu0?...

3 months ago 1 0 0 0

Obviously, we will need to tax AI companies, data centers, and other automated companies, and use this money to provide a high living standard (at least a UBI) for all. It is crucial to set minimum tax rates in international treaties to make sure this is globally achievable.

3 months ago 0 0 1 0
Post image

If and when we'll get AGI, if we do not directly go extinct, one major problem will be how to divide income.

AGI and robotics would likely make us all become unemployed.

3 months ago 0 0 1 0
Post image

This Christmas, consider funding a PauseAI volunteer.

4 months ago 3 2 1 0
Video

MIRI CEO Malo Bourgon explains why AI isn't like other technologies, and why it looks likely that superintelligence will be developed much earlier than previously thought:

4 months ago 3 2 0 0

Xriskers should see the obvious and campaign together with those concerned about data centers, aiming for xrisk awareness raising and getting good regulation implemented.

4 months ago 1 0 0 0

AI using water and energy that was made for human beings is an obvious resource conflict, too. There's a continuum straight from these issues to human replacement and eventually human extinction. The more powerful AI gets, the faster this will go.

4 months ago 0 0 1 0

Our core concern is humanity getting replaced by AI. Gradual disempowerment is one scenario many worry about. What failure looks like, where factories start sucking up our oxygen, is another. Even the classic paperclip maximizer scenario is a resource conflict at heart.

4 months ago 0 0 1 0
Advertisement

Already, these issues are big enough for politicians from left to right to win elections on. Xriskers can read an exponential curve. If this is true today, imagine what AI politics will look like five years from now!

4 months ago 0 0 1 0
Preview
View: Trump’s AI agenda sails toward an iceberg of bipartisan populist fury The AI industry’s new super PAC picked its first political target this month — and missed.

So far, most xriskers have felt too good for anti-data center campaigning. We made fun of data center water usage and electricity consumption, even though these are actual problems.

4 months ago 0 0 1 0

This trial will be aimed at @stopai.bsky.social, but we all know that Sam Altman is the one doing what should really be illegal.

Congratulations to StopAI for making this happen!

5 months ago 0 0 0 0

Debating this absurd situation in public is badly needed. It's an even better idea to do so with one of the worst perpetrators, who has time and again tried to build exactly the kind of AI that could kill us all, and who has time and again lobbied hard against any regulation aiming to keep us safe.

5 months ago 0 0 1 0

Sometimes, it is hard to believe that this is all real. Are people really building a machine that could be about to kill every living thing on this planet? If this is not true, why are the best scientists in the world saying it is? If this is true, why is no one trying to do anything about it?

5 months ago 0 0 1 0

If one in ten experts think there is a risk of human extinction when developing a technology, we should not develop this technology, until we are confident that the risk can be almost ruled out.

9 months ago 0 0 0 0
Preview
Can a small startup prevent AI loss of control? - with Riccardo Varenna · Luma According to many leading AI researchers, there is a chance we could lose control over future AI. We think one of the most important challenges of our century…

📢 Event coming up in Amsterdam!📢

Many think we should have an AI safety treaty, but how to enforce it?🤔

Riccardo Varenna from TamperSec has part of a solution: sealing hardware within a secure enclosure. Their proto should be ready within three months.

Time to hear more!

Be there! lu.ma/v2us0gtr

10 months ago 0 0 0 0
Post image

BREAKING: New experiments by former OpenAI researcher Steven Adler find that GPT-4o will prioritize preserving itself over the safety of its users.

Adler set up a scenario where the AI believed it was a scuba diving assistant, monitoring user vitals and assisting them with decisions.

10 months ago 1 1 1 0
Humans "no longer needed" - Godfather of AI | 30 with Guyon Espiner S3 Ep 9 | RNZ
Humans "no longer needed" - Godfather of AI | 30 with Guyon Espiner S3 Ep 9 | RNZ YouTube video by RNZ

youtu.be/uuOPOO90NBo?... 15:15

10 months ago 0 0 0 0
Advertisement

Slowly, but surely, the public is getting informed that there is a level of AI that may kill everyone. And obviously, an informed public is not going to let that happen.

Never mind SB1047. In the end, we will win.

10 months ago 0 0 1 0

What is interesting is that the presenter assumes familiarity with not only the possibility that AI could cause our extinction, but also the fact that many experts think there is an appreciable chance this may actually happen.

10 months ago 1 0 1 0
Post image

Two weeks ago, Geoffrey Hinton informed a New Zealand audience that AI could kill their children. The presenter announced the part as: "They call it p(doom), don't they, the probability that AI could wipe us out. On the BBC recently you gave it a 10-20% chance".

10 months ago 1 0 1 0

The closer we get to actual AI, the less people like intelligence, however measured. Passing the Turing test is downplayed now, but passing Marcus' Simpsons test will be downplayed later when it happens, too.

Still, AI reaching human level is actually important. We can't keep our heads in the sand.

1 year ago 1 1 0 0

More info and discussion here:
forum.effectivealtruism.org/posts/XJuPEy...
www.lesswrong.com/posts/sc4Kh5...

1 year ago 0 0 0 0

- Offense/defense balance. Many seem to rely on this balance favoring defense, but so far little work has been done on aiming to determine whether this assumption holds, and in fleshing out what such defense could look like. A follow-up research project could be to shed light on these questions.

1 year ago 0 0 1 0

Our follow-up research might include:

- Systemic risks, such as gradual disempowerment, geopolitical risks (see e.g. MAIM), mass unemployment, stable extreme inequality, planetary boundaries and climate, and others.

1 year ago 0 0 1 0

- Require security and governance audits for developers of models above the threshold.
- Impose reporting requirements and Know-Your-Customer requirements on cloud compute providers.
- Verify implementation via oversight of the compute supply chain.

1 year ago 0 0 1 0

Based on our review, our treaty recommendations are:

- Establish a compute threshold above which development should be regulated.
- Require “model audits” (evaluations and red-teaming) for models above the threshold.

1 year ago 0 0 1 0
Preview
International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty The malicious use or malfunction of advanced general-purpose AI (GPAI) poses risks that, according to leading experts, could lead to the 'marginalisation or extinction of humanity.' To address these r...

Our paper "International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty" focuses on risk thresholds, types of international agreement, building scientific consensus, standardisation, auditing, verification and incentivisation.

arxiv.org/abs/2503.18956

1 year ago 0 0 1 0
Advertisement

New paper out!📜🚀

Many think there should be an AI Safety Treaty, but what should it look like?🤔

Our paper starts with a review of current treaty proposals, and then gives its own Conditional AI Safety Treaty recommendations.

1 year ago 2 1 1 0