
Posts by Michael Huang

I'm curious: have you tried employing lobbyists and government affairs staff to talk to governments? @controlai.com is doing fantastic work in this area. @pauseai.bsky.social is mobilising the public to contact their representatives. Laws and treaties can compel AI companies to comply with a pause.

2 months ago 3 0 1 0

Hmm ok, well not to throw a spanner in, but I'd bet pause comes into the Overton window this year, even if it's not possible. Dario and Demis literally just said it at Davos.

I have a diagram I'll share with you that demonstrates why...

2 months ago 38 4 5 1
AI industry insiders launch site to poison the data that feeds them

Poison Fountain project seeks allies to fight the power. Alarmed by what companies are building with artificial intelligence models, a handful of industry insiders are calling for those opposed to the current state of affairs to undertake a mass data poisoning effort to undermine the technology…

3 months ago 40 26 3 6
Opinion | An Anti-A.I. Movement Is Coming. Which Party Will Lead It?

www.nytimes.com/2025/12/29/o...

3 months ago 85 19 10 4

There have been a couple cool pieces up recently debunking the "China is racing on AI, so the US must too" narrative.

time.com/7308857/chin...

papers.ssrn.com/sol3/papers....

7 months ago 4 1 0 0
Post image

This is great, but will SB 53 be Congress-proof?

9 months ago 0 0 0 0
Post image

PRESS RELEASE: Accountable Tech Commends New York State Senate on Passage of RAISE Act, Urges Gov. Hochul to Sign: accountabletech.org/statements/a...

10 months ago 3 1 0 0

🚨 NEW YORKERS: Tell Governor Hochul to sign the RAISE Act 🚨

NYโ€™s RAISE Act, which would require the largest AI developers to have a safety plan, just passed the legislature.

Call Governor Hochul at 1-518-474-8390 to tell her to sign the RAISE Act into law.

10 months ago 1 1 1 0


Do you trust AI companies with your future?

Less than a year ago, Sam Altman said he wanted to see powerful AI regulated by an international agency to ensure "reasonable safety testing"

But now he says "maybe the companies themselves put together the right framework"

1 year ago 2 1 0 0

Last year, half of OpenAI's safety researchers quit the company.

Sam Altman says "I would really point to our track record"

The track record: Superalignment team disbanded, FT reporting last week that OpenAI is cutting safety testing time down from months to just *days*.

1 year ago 4 2 0 0

China is taking advantage of this and NVIDIA is profiting. NVIDIA produced over 1M H20s in 2024 — most going to China. Orders from ByteDance and Tencent have spiked following recent DeepSeek model releases.

Chinese AI runs on American tech that we freely give them! That's not "Art of the Deal"!

1 year ago 1 1 1 0

AI godfather Geoffrey Hinton says in the next 5 to 20 years there's about a 50% chance that we'll have to confront the problem of AIs trying to take over.

1 year ago 3 1 0 0

Frontier AI models are more capable than they've ever been, and they're being rushed out faster than ever. Not a great combination!

OpenAI used to give staff months to safety test. Now it's just days, per great reporting from Cristina Criddle at the FT. 🧵

1 year ago 5 2 2 0

FT: OpenAI are slashing the time and resources they're spending on safety testing their most powerful AIs.

Safety testers have only been given days to conduct evaluations.

One of the people testing o3 said "We had more thorough safety testing when [the technology] was less important"

1 year ago 2 2 1 0
ControlAI: At ControlAI we are fighting to keep humanity in control.

NEW: We just launched a new US campaign to advocate for binding AI regulation!

We've made it super easy to contact your senator:
— It takes just 60 seconds to fill out our form
— Your message goes directly to both of your senators

controlai.com/take-a...

1 year ago 2 1 0 0

12 ex-OpenAI employees just filed an amicus brief in the Elon Musk lawsuit, which seeks to block OpenAI from shedding nonprofit control.

The brief was filed by Harvard Law Professor Lawrence Lessig, who also reps OpenAI whistleblowers.

Here are the highlights 🧵

1 year ago 20 6 1 4
Verifying Who Pulled the Trigger: Can regulators know when autonomous weapons systems are being used?

Can regulators really know when AI is in charge of a weapon instead of a human? Zachary Kallenborn explains the principles of drone forensics.

1 year ago 56 13 1 1

How likely is AI to annihilate humanity?
Elon Musk: "20% likely, maybe 10%"
Ted Cruz: "On what time frame?"
Elon Musk: "5 to 10 years"

1 year ago 3 2 0 0

With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control.

That's why FLI Executive Director Anthony Aguirre has published a new essay, "Keep The Future Human".

🧵 1/4

1 year ago 11 9 1 2

I introduced new AI safety & innovation legislation. Advances in AI are exciting & promising. They also bring risk. We need to embrace & democratize AI innovation while ensuring the people building AI models can speak out.

SB 53 does two things: 🧵

1 year ago 29 2 2 3

💼 Excellent career opportunity from Lex International, who are hiring an Advocacy and Outreach Officer to help advance work towards a treaty on autonomous weapons.

โœ๏ธ Apply by January 10 at the link in the replies:

1 year ago 6 3 2 1

Nobel Prize winner Geoffrey Hinton thinks there is a 10-20% chance AI will "wipe us all out" and calls for regulation.

Our proposal is to implement a Conditional AI Safety Treaty. Read the details below.

www.theguardian.com/technology/2...

1 year ago 1 1 0 0
'Godfather of AI' raises odds of the technology wiping out humanity over next 30 years

Geoffrey Hinton says there is a 10-20% chance AI will lead to human extinction in the next three decades, amid the fast pace of change. The British-Canadian computer scientist, often touted as a "godfather" of artificial intelligence, has raised the odds of AI wiping…

1 year ago 167 78 34 69
Letter from renowned AI experts | SB 1047 - Safe & Secure AI Innovation

The tech industry would prefer that Hinton and other experts go away, since they tend to support AI regulation that the tech industry mostly opposes.

safesecureai.org/experts

1 year ago 1 0 0 0

It's likely that Hinton lost money personally when he started warning about AI. He resigned from a Vice President position at Google. It would have been more lucrative for him to say nothing and continue in his VP role there.

1 year ago 1 0 0 0

Have you heard about OpenAI's recent o1 model trying to avoid being shut down in safety evaluations? ⬇️

New on the FLI blog:
- Why might AIs resist shutdown?
- Why is this a problem?
- What other instrumental goals could AIs have?
- Could this cause a catastrophe?

🔗 Read it below:

1 year ago 5 2 1 2
International Conference on Large-Scale AI Risks

I'm excited to share the announcement of the International Conference on Large-Scale AI Risks. The conference will take place 26-28th May 2025 at the Institute of Philosophy of KU Leuven in Belgium.

Our keynote speakers:
• Yoshua Bengio
• Dawn Song
• Iason Gabriel

Submit abstract by 15 February:

1 year ago 21 4 0 0

I am currently against humanity (or in fact, a couple of AI corporations) pursuing artificial general intelligence (AGI). While that view could change over time, I currently believe that a world with such powerful technologies is too fragile, and we should avoid pursuing that state altogether.

🧵

1 year ago 11 2 1 0

Your Bluesky Posts Are Probably In A Bunch of Datasets Now

After a machine learning librarian released and then deleted a dataset of one million Bluesky posts, several other bigger datasets have appeared in its place — including one of almost 300 million posts.

🔗 www.404media.co/bluesky-post...

1 year ago 197 78 15 31