
Posts by jim


I made a market about this: manifold.markets/jim/will-ant...

1 day ago 0 0 0 0

Jimfund's hypothesis that company revenue increases superlinearly with time horizon (with a lag, to state it roughly) is currently the subject of a natural experiment. Opus 4.6 improved on the previous SOTA time horizon by a factor of 2.04x. We should see Anth revenue growth accelerate relative to its YTD rate.

1 day ago 0 0 1 0

Not surprising that Anthropic is on a trajectory to pass OpenAI in revenue run-rate (and may already have done so), given that (according to some reasoning I have done) frontier-lab revenue should increase superlinearly with time horizon, and Anth has scaled time horizon 35x since Jan 2025 vs OA's 9x.
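The hypothesis can be cashed out as a toy power law. The exponent alpha below is my assumption, not a measured value (alpha = 1 would be plain linearity):

```python
# Toy model of the superlinearity hypothesis (illustrative, not anyone's
# actual figures): revenue scales as horizon**alpha with alpha > 1.
def implied_revenue_multiple(horizon_multiple: float, alpha: float) -> float:
    """Revenue multiple implied by a time-horizon multiple under a power law."""
    return horizon_multiple ** alpha

alpha = 1.5  # hypothetical exponent
anth = implied_revenue_multiple(35, alpha)  # Anth: 35x horizon since Jan 2025
oa = implied_revenue_multiple(9, alpha)     # OA: 9x horizon over the same span
print(f"implied revenue multiples: Anth {anth:.0f}x vs OA {oa:.0f}x")
```

With alpha = 1.5, the 35x vs 9x horizon scaling implies roughly a 200x vs 27x revenue multiple. The specific numbers are meaningless; the point is that any alpha > 1 widens the gap superlinearly.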

6 days ago 1 0 0 0

GDM has all the compute in the world; presumably they're working on a larger model and are continuing to get better at post-training etc., but things seem pretty opaque ATM. They're kind of my favourite to win, but it depends on them making some right decisions rn, which I'm 50% sure they won't.

1 week ago 0 0 0 0

To recall the question we're considering:
> is Anthropic ahead of other labs?
OA is concentrating its compute on automating research. It has been competing with Anthropic using smaller models than Opus so far, but it has Spud coming out soon. I think OpenAI is very much "in the game" but not favored.

1 week ago 0 0 1 0

He's very focused on automating research (which is obviously the right thing to be focused on).

1 week ago 0 0 1 0

54:21 "I don't think anybody wants to watch an AI come interview people like me". Seems wrong. For the most part the important personality in an interview is the interviewee not the interviewer. AI is in theory better at being knowledgeable about the subject and object of the interview.

1 week ago 0 0 1 0

At 28:35 he talks about iterative deployment, "putting the technology out early and often". He mentions some doubts about the effectiveness of this strategy but sounds like he still believes in it now. Points toward OA not holding back Spud models (modulo compute constraints).

1 week ago 1 0 1 0

At 17:00 he talks about how he did not expect 3 or 6 months ago to be at this point where something big is about to happen again. This is interesting because that's when he did the OpenAI livestream where he made all these statements about how OpenAI was cooking up so many great things etc.

1 week ago 0 0 1 0

I should listen to the two major priorities part closely and attempt to discern whether this is something they're working on now or just working toward or have in mind for the future.

1 week ago 0 0 1 0

At 11:35 he talks about how model usage explodes after the release of new models. He talks about how the latest generation of models has totally changed OpenAI's workflow. OpenAI has two major priorities: (1) automated researcher (2) automated companies.

1 week ago 0 0 1 0

OK, and is Anthropic ahead of other labs? Let's watch this recent Sam Altman interview to see if we can gather any insight from it: youtu.be/mJSnn0GZmls

1 week ago 0 0 1 0

Anth revenue should increase even more rapidly going forward. Assuming a 1.9-month doubling time... but faster because of superlinearity... call it 1.5 months...
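As a sanity check on what those doubling times compound to (a toy extrapolation, not a forecast):

```python
# What a given doubling time implies over a year (illustrative only).
def revenue_multiple(months: float, doubling_months: float) -> float:
    """Growth multiple over `months` at the given doubling time."""
    return 2 ** (months / doubling_months)

for d in (1.9, 1.5):
    print(f"{d}-month doubling -> {revenue_multiple(12, d):.0f}x in 12 months")
```

A 1.5-month doubling compounds to 256x over a year, vs roughly 80x at 1.9 months, so the difference between the two assumptions is large.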

1 week ago 0 0 1 0

In addition, we have the increased productivity from the more powerful models. How much does a ~60% increase in total factor productivity decrease the doubling time? Probably the doubling time is now about 1.9 months (the median of the immediate-future doubling time; the mean is obviously way lower).
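A back-of-envelope way to answer that question, assuming the pace of progress scales with (1 + uplift); both that proportionality and the 2.4-month base below are my own illustrative assumptions:

```python
# Toy calculation: if AI uplift multiplies the effective pace of research,
# the capability doubling time shrinks by the same factor.
def new_doubling_time(base_months: float, old_uplift: float,
                      new_uplift: float) -> float:
    """Doubling time after uplift rises, assuming pace ∝ (1 + uplift)."""
    return base_months * (1 + old_uplift) / (1 + new_uplift)

# E.g. a 2.4-month doubling time at 20% uplift, with uplift rising to 60%:
print(f"{new_doubling_time(2.4, 0.20, 0.60):.2f} months")
```

Under these made-up numbers, going from 20% to 60% uplift takes 2.4 months down to 1.8, in the ballpark of the ~1.9 above.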

1 week ago 1 0 1 0

In theory, Google's Ironwood TPUs (coming online from late 2025, GW scale in 2026) could efficiently serve models much larger than Mythos (maybe 80T–100T total params?). So, continued scaling along this axis would imply a sustained increase in pace.

1 week ago 0 0 1 0

Suppose that Opus 4.6 had a time horizon of 12 hours and an AI-researcher uplift of 20%. Suppose that Mythos has a time horizon of 24 hours; uplift should be 50–70%. How is doubling time affected? Is Mythos a one-time increase in time horizon, or has it set the pace for the near future?
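The two readings can be sketched as projections. The 24h starting point comes from the supposition above; the 1.9- vs 1.5-month doubling times are illustrative, not measured:

```python
# Two hypothetical readings of the Mythos jump: (a) a one-time level shift
# with the old pace intact, vs (b) the start of a faster trend.
def horizon_after(months: float, start_h: float,
                  doubling_months: float) -> float:
    """Time horizon (hours) after `months`, doubling every `doubling_months`."""
    return start_h * 2 ** (months / doubling_months)

one_time = horizon_after(6, 24, 1.9)   # jump to 24h, then the old 1.9-month pace
new_trend = horizon_after(6, 24, 1.5)  # jump plus a faster 1.5-month pace
print(f"6 months out: one-time jump -> {one_time:.0f}h, "
      f"new trend -> {new_trend:.0f}h")
```

Even over six months the two scenarios diverge a lot, which is why the one-time-vs-new-pace question matters more than the jump itself.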

1 week ago 0 0 1 0

Recall that model utility is increasing superlinearly in time horizon. So, if time horizon has just jumped significantly, we must expect that the utility of AI use has just increased significantly.

1 week ago 0 0 1 0

> Mid-August [...] A lot of new hardware is online. Doubling time shrinks to 1.5 months.
It seems that we might be a bit ahead of schedule, although things are still uncertain. But let's assume that we are in this timeline. What does the near-future look like?

1 week ago 0 0 1 0

Firstly, this was predictable. We knew better hardware was coming online in 2026 which would enable labs to train and serve bigger models and that this would lead to a decrease in doubling-time. I wrote about this in my first 2026 dispatch: jimfund.com#2026

1 week ago 0 0 1 0

Seems like it's just a larger model. To speculate: pre-trained on Trainium 2, post-trained on Ironwood, served on Ironwood. So, if it's a step change, what are the implications?

1 week ago 0 0 1 0

So, does Mythos represent a step-change in AI capabilities? The capabilities jump certainly seems to be above trend, with significant jumps in pretty much all of the important benchmarks. Why is it above trend? Is it because it is a larger model? Or because of some algorithmic breakthrough?

1 week ago 0 0 1 0

'Off-trend' is an ugly phrase. Let's think of something better.
- Does Mythos represent a step-change?
- Are Mythos' capabilities above-trend?
- Did Mythos break the trendline?

1 week ago 0 0 1 0

Is Mythos 'off trend'? And if so, was it off-trend in a surprising way? And if so, does it imply that Anthropic is ahead of other labs? OpenAI has Spud; is it going to be as good as Mythos, or better? Will it be released publicly? Does Google have a model of the same class?
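One concrete way to operationalize 'off trend': fit a log-linear trend to past time-horizon points and look at the new model's residual. The data points below are made up for illustration, not METR's actual measurements:

```python
import math

# (months since Jan 2025, time horizon in hours) -- hypothetical history
history = [(0, 1.0), (4, 3.0), (8, 6.0), (12, 12.0)]

def fit_loglinear(points):
    """Least-squares fit of log2(h) = a + b * t; returns (a, b)."""
    n = len(points)
    xs = [t for t, _ in points]
    ys = [math.log2(h) for _, h in points]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

a, b = fit_loglinear(history)
t_new, h_new = 13, 24.0  # hypothetical new-model point
residual = math.log2(h_new) - (a + b * t_new)
print(f"trend doubling time: {1 / b:.1f} months, "
      f"residual: {residual:+.2f} doublings above trend")
```

A positive residual of half a doubling or more would be a reasonable bar for 'broke the trendline'; a residual near zero would mean the jump was just the trend arriving on schedule.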

1 week ago 0 0 1 0

Let's write something about AI. I don't have anything in particular on my mind, but a lot of interesting things are happening in AI at the moment (as is always the case these days). So, what's there to think about? Anthropic's Mythos model is news, so I will write down some thoughts on it.

1 week ago 0 0 1 0

please read jimfund.com

1 month ago 0 0 0 0

I wrote a story about the future jimfund.com#fiction

1 month ago 0 0 0 0

My predictions for METR's developer uplift survey came in right on target, but I noticed that the question resolves not to this, but to the last such study whose results are released in 2026... oops.

1 month ago 0 0 0 0

Today's jimfund blog post is a rebuttal to the Citrini piece.
jimfund.com#2026-iv

1 month ago 0 0 0 0
Preview: Mathematics in the Library of Babel (Daniel Litt)
> Mathematics isn't only about saying true things. It's about asking the right questions, being confused, stumbling about, getting distracted, being wrong, recognizing when you're wrong, being stuck. ...

Some thoughts on AI and math, inspired by “First Proof”: www.daniellitt.com/blog/2026/2/...

1 month ago 88 27 1 12