
Posts by Daniel Eth (yes, Eth is my actual last name)

Why do people think the movie Jurassic Park holds special wisdom, rather than just being an entertaining film? This is not a serious argument against resurrecting dinosaurs.

5 days ago 0 0 2 0
Post image

Interesting emerging narrative

1 month ago 0 1 0 0
Post image

If anyone’s wondering what Marc Andreessen is up to, he recently posted this on Twitter:

1 month ago 5 0 0 0

“A mind is a terrible thing to waste” -UNCF

“A mind is a terrible thing” -Marc Andreessen

1 month ago 0 0 0 0
Post image
1 month ago 26 0 0 0
Post image

In a large upset, LTF (the OpenAI-Andreessen super PAC) takes a major loss in IL-02, where they backed Jesse Jackson Jr. Notably, Jackson is famously corrupt, and I wonder if LTF’s toxic AI money fed into existing negative sentiment toward him.

1 month ago 4 0 1 0

I see Marc Andreessen has progressed from criticizing EAs and Catholics and is now railing against [checks notes] people who think

1 month ago 41 3 2 0

Marc Andreessen losing all his money by investing in all the wrong AI companies - call that AI safety by default

3 months ago 19 0 1 0

If you can substitute "hungry ghost trapped in a jar" for "AI" in a sentence it's probably a valid use case for LLMs. Take "I have a bunch of hungry ghosts in jars, they mainly write SQL queries for me". Sure. Reasonable use case.

"My girlfriend is a hungry ghost I trapped in a jar"? No. Deranged.

8 months ago 2844 684 44 70

This *might* be an indication that Anthropic has gotten better at getting models to do longer tasks, specifically. If so, this could be the first signs that they’ve solved/are solving a complex bottleneck to more complex tasks. Or not. Unclear. But if so, that’s a big deal!

3 months ago 8 0 0 0
Post image

Third, the curve for Claude Opus 4.5 is “flatter” than previous models (it does relatively better at longer tasks compared to shorter). And the longest tasks it does are ones where it’s getting ~50%, b/c METR doesn’t have enough tasks that are long enough in their dataset…

3 months ago 5 0 1 0

You could argue we’re on a 4-month doubling time now instead of a 7-month doubling time (I remain uncertain of what to expect over the next year), but regardless this is a continuation of previous progress, not a discontinuity
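A quick sketch of why the assumed doubling time matters so much over even a single year. This is a minimal illustration, not METR’s data or methodology; the 2-hour starting task horizon is a made-up number for the example.

```python
# Illustrative only: project a task-length horizon forward under
# exponential growth with different assumed doubling times.

def horizon_after(months: float, start_hours: float, doubling_months: float) -> float:
    """Task-length horizon after `months` of exponential growth."""
    return start_hours * 2 ** (months / doubling_months)

start = 2.0  # hours; a hypothetical starting horizon, not a real measurement
for dt in (7.0, 4.0):
    print(f"{dt}-month doubling: {horizon_after(12, start, dt):.1f} hours after one year")
```

With a 7-month doubling time the horizon roughly triples in a year (~6.6 hours), while a 4-month doubling time yields three full doublings (16 hours), so which trend line you believe materially changes near-term expectations.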

3 months ago 4 0 1 0
Post image

Second, on a log plot, note this is hardly above trend. Sure, it *could* represent a new trend, but it seems like every time there’s a model release that overperforms people think timelines get super short, & every time a model underperforms they think timelines get super long…

3 months ago 5 0 1 0
Post image

A few thoughts on Claude Opus 4.5:

First off, in absolute terms, this is a pretty big step up. Anthropic is showing they have juice, and things are going faster than previously expected. At the very least, this should dispel all recent talk about how AI was entering a slowdown

3 months ago 15 1 2 0
Post image

A reminder that we're hiring for several really important roles at Coefficient Giving! Learn more here: coefficientgiving.org/about-us/ca...

3 months ago 8 1 0 0
Post image

lol

3 months ago 21 5 1 0

AI accelerationists are in a bit of a bind, in that their views are deeply unpopular; by aggressively fighting for them they also raise the salience of AI politically, which hurts their cause

3 months ago 4 0 1 0
Post image

Notably, the public has also shifted away from Republicans on the issue (up on the graph), coinciding with many Republicans pushing an anti-regulatory attitude towards AI. Voters now trust Dems & Rs about equally on the issue, indicating voters are up for grabs by either party

3 months ago 4 0 1 0
Post image

There’s still a long ways to go before AI is a top voter concern like health care or cost of living, but I’d expect this trend to continue as AI becomes more powerful. Politicians who side with wealthy tech donors over voter preferences may wind up regretting that decision

3 months ago 4 0 1 0
Post image

Graph showing AI becoming higher salience to voters (more to the right on the graph). According to this data, AI is now higher salience than climate change, and approaching the salience of gas prices

3 months ago 7 3 1 0

I wonder if this is related to Trump’s recent shift from not caring about AI preemption to heavily pushing it. The OpenAI-Andreessen super PAC can spook rank-and-file members of Congress, but donating to Trump’s super PAC would build a stronger relationship w/ Trump, specifically

3 months ago 3 0 0 0

Oh wow - OpenAI’s Greg Brockman was the single largest donor to Trump’s super PAC over the past 6 months. I knew OpenAI/Brockman were trying to flex their muscles politically to block all meaningful AI regulations, didn’t realize they had literally become Trump’s largest donor

3 months ago 8 1 2 0
Post image Post image

For those who aren’t following the details, here are the relevant connections:

4 months ago 3 0 0 0
Post image

I think more OpenAI employees should be aware of the very bad-faith political activities that OpenAI is supporting through Greg Brockman’s funding of the Andreessen-OpenAI super PAC cluster

(Twitter’s location verification has a known bug, but Leamer doesn’t care about the truth.)

4 months ago 7 0 1 0

I think further advancements may overcome these challenges, the way that reasoning models overcame previous challenges associated with reasoning. I don’t think the clearest shot toward AGI is literally just scaling up LLMs, but instead a combination of scale and modifications on current methods

4 months ago 1 0 1 0

Now, I do think the automated AI R&D feedback loop will *eventually* speed things up a ton, but I don’t think this has really kicked off yet

4 months ago 5 0 1 0

Meanwhile, various people predicted the trend was about to (or already did) become faster, e.g., due to paradigm shifts with reasoning models. I think those people's predictions were also off.

4 months ago 5 0 1 0
Post image

Viewing the graph on a linear scale demonstrates that claims of AI "hitting a wall" are clearly off. People *keep making* these claims, but while not every model release lives up to hype, no, AI has not hit a wall yet, and there's no indication it's about to, either

4 months ago 5 1 1 0
Post image

Things are looking smoothly exponential for AI over the past several years, and I continue to think this is the best default assumption (until the AI R&D automation feedback loop eventually speeds everything up)

4 months ago 16 2 3 1

Republicans already tried to ban states’ ability to regulate AI in their Big, Ugly Bill.

That ban was voted down 99-1.

Their new political maneuver would be a free pass to Big Tech, and it must be stopped again.

5 months ago 22 4 11 2