Why do people think the movie Jurassic Park holds special wisdom, rather than just being an entertaining film? This is not a serious argument against resurrecting dinosaurs.
Posts by Daniel Eth (yes, Eth is my actual last name)
Interesting emerging narrative
If anyone’s wondering what Marc Andreessen is up to, he recently posted this on Twitter:
“A mind is a terrible thing to waste” -UNCF
“A mind is a terrible thing” -Marc Andreessen
In a large upset, LTF (the OpenAI-Andreessen super PAC) takes a major loss in IL-02, where they backed Jesse Jackson Jr. Notably, Jackson is famously corrupt, and I wonder if LTF’s toxic AI money fed into existing negative sentiment towards him.
I see Marc Andreessen has progressed from criticizing EAs and Catholics and is now railing against [checks notes] people who think
Marc Andreessen losing all his money by investing in all the wrong AI companies - call that AI safety by default
If you can substitute "hungry ghost trapped in a jar" for "AI" in a sentence it's probably a valid use case for LLMs. Take "I have a bunch of hungry ghosts in jars, they mainly write SQL queries for me". Sure. Reasonable use case.
"My girlfriend is a hungry ghost I trapped in a jar"? No. Deranged.
This *might* be an indication that Anthropic has gotten better at getting models to do longer tasks, specifically. If so, this could be the first sign that they’ve solved, or are solving, a key bottleneck to more complex tasks. Or not. Unclear. But if so, that’s a big deal!
Third, the curve for Claude Opus 4.5 is “flatter” than previous models (it does relatively better at longer tasks compared to shorter). And the longest tasks it does are ones where it’s getting ~50%, b/c METR doesn’t have enough tasks that are long enough in their dataset…
You could argue we’re on a 4-month doubling time now instead of a 7-month doubling time (I remain uncertain about what to expect over the next year), but regardless this is a continuation of previous progress, not a discontinuity
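The gap between a 7-month and a 4-month doubling time compounds fast. A minimal sketch of the arithmetic (the ~2-hour starting task horizon is a made-up placeholder, not a METR figure):

```python
def horizon_after(months: float, current_horizon_hours: float, doubling_months: float) -> float:
    """Project a task-horizon length forward, assuming clean exponential growth."""
    return current_horizon_hours * 2 ** (months / doubling_months)

# Hypothetical starting point: a ~2-hour task horizon today.
start = 2.0
for dt in (7.0, 4.0):
    print(f"{dt}-month doubling -> {horizon_after(12, start, dt):.1f} h after one year")
```

After a year, the 4-month trend yields an 8x longer horizon vs roughly 3.3x for the 7-month trend, so which trend we’re on matters a lot for timelines.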
Second, on a log plot, note this is hardly above trend. Sure, it *could* represent a new trend, but it seems like every time there’s a model release that overperforms people think timelines get super short, & every time a model underperforms they think timelines get super long…
A few thoughts on Claude Opus 4.5:
First off, in absolute terms, this is a pretty big step up. Anthropic is showing they have juice, and things are going faster than previously expected. At the very least, this should dispel recent talk that AI is entering a slowdown
A reminder that we're hiring for several really important roles at Coefficient Giving! Learn more here: coefficientgiving.org/about-us/ca...
lol
AI accelerationists are in a bit of a bind, in that their views are deeply unpopular; by aggressively fighting for them they also raise the salience of AI politically, which hurts their cause
Notably, the public has also shifted away from Republicans on the issue (up on the graph), coinciding with many Republicans pushing an anti-regulatory attitude towards AI. Voters now trust Dems & Rs about equally on the issue, indicating voters are up for grabs by either party
There’s still a long way to go before AI is a top voter concern like health care or cost of living, but I’d expect this trend to continue as AI becomes more powerful. Politicians who side with wealthy tech donors over voter preferences may wind up regretting that decision
Graph showing AI becoming higher salience to voters (more to the right on the graph). According to this data, AI is now higher salience than climate change, and approaching the salience of gas prices
I wonder if this is related to Trump’s recent shift from not caring about AI preemption to heavily pushing it. The OpenAI-Andreessen super PAC can spook rank-and-file members of Congress, but donating to Trump’s super PAC would build a stronger relationship w/ Trump, specifically
Oh wow - OpenAI’s Greg Brockman was the single largest donor to Trump’s super PAC over the past 6 months. I knew OpenAI/Brockman were trying to flex their muscles politically to block all meaningful AI regulations, didn’t realize they had literally become Trump’s largest donor
For those who aren’t following the details, here are the relevant connections:
I think more OpenAI employees should be aware of the very bad-faith political activities that OpenAI is supporting through Greg Brockman’s funding of the Andreessen-OpenAI super PAC cluster
(Twitter’s location verification has a known bug, but Leamer doesn’t care about the truth.)
I think further advancements may overcome these challenges, the way that reasoning models overcame previous challenges associated with reasoning. I don’t think the clearest shot toward AGI is literally just scaling up LLMs, but instead a combination of scale and modifications on current methods
Now, I do think the automated AI R&D feedback loop will *eventually* speed things up a ton, but I don’t think this has really kicked off yet
Meanwhile, various people predicted the trend was about to (or already did) become faster, e.g., due to paradigm shifts with reasoning models. I think those people's predictions were also off.
Viewing the graph on a linear scale demonstrates that claims of AI "hitting a wall" are clearly off. People *keep making* these claims, but while not every model release lives up to hype, no, AI has not hit a wall yet, and there's no indication it's about to, either
Things are looking smoothly exponential for AI over the past several years, and I continue to think this is the best default assumption (until the AI R&D automation feedback loop eventually speeds everything up)
Republicans already tried to ban states’ ability to regulate AI in their Big, Ugly Bill.
That ban was voted down 99-1.
Their new political maneuver would be a free pass to Big Tech, and it must be stopped again.