
Posts by ℵ₁

Post image

In 2025, OpenAI announced Stargate, a $500 billion data center initiative. We surveyed all 7 US sites and found visible development at each.

There's a long road ahead, but the project appears on track to reach 9+ GW by 2029—comparable to New York City's peak power demand. 🧵

23 hours ago 18 2 2 4

Lolwut

3 days ago 1 0 0 0

It's such a shame Google Books has fallen into a state of bit rot. It's such a great research tool.

4 days ago 1 0 0 0

I thought reach was the one thing Elon said people didn’t have a right to. Now reach is also something they won’t control?

5 days ago 0 0 0 0

What is going on here? And why is one wearing a diving helmet?

5 days ago 2 1 0 0
In Bruges scene where Colin Farrell is pointing a gun at his head and Brendan Gleeson is also pointing a gun at Colin Farrell's head

Current status of the Strait of Hormuz dispute

6 days ago 32489 6542 467 305
"The Cooler" official trailer (2002). YouTube video by trashtrailers

I think Vance may be The Cooler. Where else do we send him?

6 days ago 1 0 0 0

Aren’t they all summarizing because they want to hide the thinking traces from Chinese labs?

6 days ago 0 0 1 0

Then they had to update the blog post because someone pointed out that the small models claimed the fixed function was still vulnerable.

If you are not assessing false positives, then the true positives are not that interesting, as the FPs will overwhelm the system.
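The base-rate arithmetic behind that point fits in a few lines of Python. The rates below are hypothetical, purely for illustration:

```python
# Hypothetical numbers, purely for illustration: even a scanner with a
# high detection rate gets swamped by false positives when real
# vulnerabilities are rare.
def precision(tpr: float, fpr: float, prevalence: float) -> float:
    """Fraction of flagged findings that are real vulnerabilities."""
    true_pos = tpr * prevalence
    false_pos = fpr * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Say 1 in 1,000 functions is actually vulnerable, the model catches
# 90% of them, and it falsely flags 5% of clean functions:
print(f"{precision(tpr=0.90, fpr=0.05, prevalence=0.001):.1%}")  # -> 1.8%
```

At those (made-up) rates, fewer than 2 in 100 reported findings would be real, which is why the FP rate dominates.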

6 days ago 11 0 0 0

As mentioned in the discussion over at X, this is nowhere close to an apples-to-apples comparison. They only gave the models the vulnerable function, and provided contextual hints along with it.

6 days ago 9 0 2 0

Also check out the example they provide where an agent got frustrated and attempted to prompt inject the user. Amusing.

6 days ago 0 0 0 0
Preview
How we monitor internal coding agents for misalignment: How OpenAI uses chain-of-thought monitoring to study misalignment in internal coding agents, analyzing real-world deployments to detect risks and strengthen AI safety safeguards.

This was an interesting post from OpenAI from last month. They monitor their internal coding agents for misalignment and find that they rarely do bad things, like leaking data externally or performing destructive actions. A good reminder to never run agents without guardrails.

6 days ago 0 0 1 0

It takes some major hubris to run for governor and think this was not going to come out.

1 week ago 1 0 0 0

Even more headfucking is that this has not been the case for all that long and, worse, won't carry on being the case for that much longer. Well, not that much longer in cosmology terms. 600 million years will probably see us out.

1 week ago 372 8 5 0

Something that wrinkles my brain every time I remember it is the fact that total eclipses are only possible on Earth because the Moon and Sun appear to be the same size in our sky, due to the insanely, astronomically unlikely fluke that the Moon's diameter is about 400x smaller than the Sun's while the Moon is about 400x closer to us.
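The coincidence checks out on the back of an envelope. The figures below are mean values; both distances vary in reality, which is why some eclipses end up annular instead of total:

```python
import math

# Mean sizes and distances in km; both distances vary in reality.
SUN_DIAMETER, SUN_DISTANCE = 1_391_000, 149_600_000
MOON_DIAMETER, MOON_DISTANCE = 3_474, 384_400

def angular_diameter_deg(diameter: float, distance: float) -> float:
    """Apparent size of a sphere in the sky, in degrees."""
    return math.degrees(2 * math.atan(diameter / (2 * distance)))

sun = angular_diameter_deg(SUN_DIAMETER, SUN_DISTANCE)
moon = angular_diameter_deg(MOON_DIAMETER, MOON_DISTANCE)
print(f"sun: {sun:.2f} deg, moon: {moon:.2f} deg")  # both about half a degree
```

The two apparent sizes agree to within a few percent, which is exactly the fluke the post describes.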

1 week ago 6773 1112 79 138
Post image

Hall of fame FT correction

1 week ago 6780 1259 93 156
Viral photo from some years back of a man nonchalantly mowing his yard with a tornado on the horizon.

How it feels doing literally any task right now.

1 week ago 14659 3360 73 144
A beef Wellington is just a corn dog from a different socioeconomic background

1 week ago 11733 1539 293 134
Post image

I was just talking to a friend today about Dolly Parton's Imagination Library - new books every month, totally for free, for kids ages 1-5.

My son was part of the program - this is a copy of the letter that came with his last book, right after his 5th birthday.

1 week ago 12786 2243 5 352

You can’t have shared purpose without shared facts.

Do you make America healthy by more or less vaccination? Depends on the facts surrounding vaccine safety.

1 week ago 1 0 1 0
The earth rises above the moon

The moon eclipses

Meanwhile, far away from all the worst people…

(New photos from the Artemis II mission released by NASA.)

1 week ago 575 111 20 5
Preview
Sam Altman May Control Our Future—Can He Be Trusted? New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.

The reporting on OpenAI and Sam Altman I've been working on for the past year and a half, for @newyorker.com, with Andrew Marantz: www.newyorker.com/magazine/202...

1 week ago 2616 866 150 236
Preview
A Cryptography Engineer’s Perspective on Quantum Computing Timelines The risk that cryptographically-relevant quantum computers materialize within the next few years is now high enough to be dispositive, unfortunately.

Two papers came out last week that suggest classical asymmetric cryptography might indeed be broken by quantum computers in just a few years.

That means we need to ship post-quantum crypto now, with the tools we have: ML-KEM and ML-DSA. I didn't think PQ auth was so urgent until recently.

1 week ago 297 123 10 19
Preview
GitHub Actions Security Flaws: Immutable Tags, Token Access, and More | Dan Lorenc posted on the topic | LinkedIn Yesterday I blamed the Trivy breach on GitHub. The design of Actions is plain irresponsible today and ignores a decade of supply chain security work from other ecosystems. Here's what they would have...

Dan Lorenc is exactly right:

"I blame[d] the Trivy breach on GitHub. The design of Actions is plain irresponsible today and ignores a decade of supply chain security work from other ecosystems."

www.linkedin.com/posts/danlor...

2 weeks ago 32 5 2 0
Preview
Global sales of combustion engine cars have peaked To decarbonize road transport, the world must move away from petrol and diesel cars and towards electric vehicles and other forms of low-carbon transport.

For anyone wondering, global sales of internal combustion vehicles peaked in 2018. 🧪🔌💡☀️💨🔋 ourworldindata.org/data-insight...

8 months ago 147 36 3 9
Post image

This is my favorite climate change chart. Japanese monks, aristocrats, and emperors kept meticulous records of cherry blossom festivals for 1,200 years and accidentally built the world's longest climate dataset.

2 weeks ago 18106 6857 168 253
Post image Post image

Security research is being revolutionised with AI. A Claude prompt, "Somebody told me there is an RCE 0-day when you open a file. Find it", actually identified a remote code execution in Vim and Emacs. Hacking like it's the 90s? blog.calif.io/p/mad-bugs-v...

2 weeks ago 31 8 0 1

The old (false) open source cliché that "many eyes make all bugs shallow" is becoming true for all software, thanks to LLMs.

2 weeks ago 5 0 0 0

Cures nervousness, insomnia, asthma and eczema.

3 weeks ago 3 0 0 0
I didn't train a new model. I didn't merge weights. I didn't run a single step of gradient descent. What I did was much weirder: I took an existing 72-billion parameter model, duplicated a particular block of seven of its middle layers, and stitched the result back together. No weight was modified in the process. The model simply got extra copies of the layers it used for thinking.

this is crazy — bro topped an LLM benchmark without changing weights at all

he sliced an LLM wide open, in the middle, and duplicated a block of ~7 layers

circuits ftw!

dnhkng.github.io/posts/rys/
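For readers curious what "duplicating a block of layers" means mechanically, here is a toy sketch in plain Python. This is not the author's code: the layer objects and indices are stand-ins, and a real run would splice transformer modules inside a 72B checkpoint rather than dicts in a list.

```python
import copy

# Toy sketch of "self-stacking": splice exact copies of a contiguous
# block of middle layers back into the stack. No weight is modified;
# the copies are byte-for-byte duplicates of the originals.
def self_stack(layers, start, end):
    """Return a new layer list with layers[start:end] repeated once."""
    block = [copy.deepcopy(layer) for layer in layers[start:end]]
    return layers[:end] + block + layers[end:]

# Stand-in model: 12 "layers", each just a dict of frozen weights.
layers = [{"id": i, "w": [0.1 * i]} for i in range(12)]
stacked = self_stack(layers, start=4, end=11)   # duplicate 7 middle layers

print(len(stacked))               # 19: 12 originals + 7 copies
print(stacked[11] == stacked[4])  # True: layer 4's copy sits at index 11
```

The forward pass would then simply run through 19 layers instead of 12, with the middle block executed twice on the way through.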

3 weeks ago 270 36 13 25