I made a market about this: manifold.markets/jim/will-ant...
Posts by jim
Jimfund's hypothesis that company revenue increases superlinearly in time horizon (with a lag), to state it roughly, is currently the subject of a natural experiment. Opus 4.6 improved on the previous SOTA time horizon by a factor of 2.04x. We should see Anth revenue growth accelerate compared to its rate YTD.
Not surprising that Anthropic is on a trajectory to pass OpenAI in revenue run-rate (and may already have done so) given that (according to some reasoning I have done) frontier lab revenue should increase superlinearly in time horizon and Anth has scaled time-horizon 35x since Jan 2025 vs OA's 9x.
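A toy version of the superlinearity claim, as a sanity check (the exponent k = 1.5 is an illustrative assumption, not a fitted value):

```python
# Toy model: revenue grows as time_horizon ** k with k > 1.
# k = 1.5 is illustrative, not fitted to any data.
def revenue_multiplier(horizon_scale: float, k: float = 1.5) -> float:
    """Revenue growth factor implied by scaling time horizon by horizon_scale."""
    return horizon_scale ** k

# From the post: Anth scaled time horizon 35x since Jan 2025, OA 9x.
anth = revenue_multiplier(35.0)  # ~207x
oa = revenue_multiplier(9.0)     # 27x
print(f"Anth {anth:.0f}x vs OA {oa:.0f}x implied revenue growth")
```

Any k > 1 gives the same qualitative story: the lab that scales time horizon faster pulls ahead in revenue more than proportionally.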
GDM has all the compute in the world; presumably they're working on a larger model and are continuing to get better at post-training etc., but things seem pretty opaque ATM. They're kind of my favourite to win, but that depends on them making some right decisions rn, which I'm 50% sure they won't.
To recall the question we're considering:
> is Anthropic ahead of other labs?
OA is concentrating its compute efforts on automating research. It has been competing with Anthropic using smaller models than Opus so far, but it has Spud coming out soon. I think OpenAI is very much "in the game" but not favored.
He's very focused on automating research (which is obviously the right thing to be focused on).
At 54:21: "I don't think anybody wants to watch an AI come interview people like me". Seems wrong. For the most part, the important personality in an interview is the interviewee, not the interviewer. An AI is in theory better at being knowledgeable about the subject and object of the interview.
At 28:35 he talks about iterative deployment, "putting the technology out early and often". He mentions some doubts about the effectiveness of this strategy but sounds like he still believes in it now. Points toward OA not holding back Spud models (modulo compute constraints).
At 17:00 he talks about how he did not expect 3 or 6 months ago to be at this point where something big is about to happen again. This is interesting because that's when he did the OpenAI livestream where he made all these statements about how OpenAI was cooking up so many great things etc.
I should listen to the two major priorities part closely and attempt to discern whether this is something they're working on now or just working toward or have in mind for the future.
At 11:35 he talks about how model usage explodes after the release of new models. He talks about how the latest generation of models has totally changed OpenAI's workflow. OpenAI has two major priorities: (1) automated researcher (2) automated companies.
OK, and is Anthropic ahead of other labs? Let's watch this recent Sam Altman interview to see if we can gather any insight from it: youtu.be/mJSnn0GZmls
Anth revenue should increase even more rapidly going forward. Assuming a 1.9 month doubling time... but faster because superlinearity... call it 1.5 months...
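For intuition on what those doubling times compound to (straight compounding, ignoring any saturation effects):

```python
# Implied annual revenue multiple for a given doubling time in months.
def annual_multiple(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

print(annual_multiple(1.9))  # ~80x per year
print(annual_multiple(1.5))  # 256x per year
```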
In addition, we have the increased productivity from the more powerful models. How much does a ~60% increase in total factor productivity shrink the doubling time? Probably the doubling time is now about 1.9 months (the median of the immediate-future doubling time; the mean is obviously way lower)
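One crude way to run that number: assume capability progress rate scales linearly with researcher productivity (a strong assumption), so a productivity multiplier m divides the doubling time by m. The 3-month baseline below is hypothetical, just to show the arithmetic:

```python
# Assumes progress rate scales linearly with productivity, so a
# productivity multiplier m divides the doubling time by m.
def adjusted_doubling_time(baseline_months: float, productivity_multiplier: float) -> float:
    return baseline_months / productivity_multiplier

# Illustrative: a 60% productivity boost on a hypothetical 3-month baseline.
print(adjusted_doubling_time(3.0, 1.6))  # 1.875 months
```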
In theory, Google's Ironwood TPUs (coming online from late 2025, GW scale in 2026) could efficiently serve models much greater in size than Mythos (maybe 80T–100T total params?). So we could continue scaling along this axis, which would imply a sustained increase in pace.
Suppose that Opus 4.6 had a time-horizon of 12 hours, with AI researcher uplift of 20%. Suppose that Mythos has a time horizon of 24 hours. Uplift should be 50–70%. How is doubling-time affected? Is Mythos a one-time increase in time-horizon, or has it set the pace for the near future?
Recall that model utility is increasing superlinearly in time horizon. So, if time horizon has just jumped significantly, we must expect that the utility of AI use has just increased significantly.
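To make "superlinearly" concrete with a toy exponent (k = 1.5 is an assumption, not a measured value):

```python
# If utility grows as T ** k with k > 1, a time-horizon jump is amplified.
def utility_multiplier(horizon_jump: float, k: float = 1.5) -> float:
    return horizon_jump ** k

print(utility_multiplier(2.0))  # a 2x time-horizon jump -> ~2.83x utility
```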
> Mid-August [...] A lot of new hardware is online. Doubling time shrinks to 1.5 months.
It seems that we might be a bit ahead of schedule, although things are still uncertain. But let's assume that we are in this timeline. What does the near-future look like?
Firstly, this was predictable. We knew better hardware was coming online in 2026 which would enable labs to train and serve bigger models and that this would lead to a decrease in doubling-time. I wrote about this in my first 2026 dispatch: jimfund.com#2026
Seems like it's just a larger model. To speculate: pre-trained on Trainium 2, post-trained using Ironwood, served using Ironwood. So, if it's a step change, what are the implications?
So, does Mythos represent a step-change in AI capabilities? The capabilities jump certainly seems to be above trend, with significant jumps in pretty much all of the important benchmarks. Why is it above trend? Is it because it is a larger model? Or because of some algorithmic breakthrough?
'Off-trend' is an ugly phrase. Let's think of something better.
- Does Mythos represent a step-change?
- Are Mythos' capabilities above-trend?
- Did Mythos break the trendline?
Is Mythos 'off trend'? and if so was it off-trend in a surprising way? and if so does it imply that Anthropic is ahead of other labs? OpenAI has Spud, is it going to be as good as Mythos? Or Better? Will it be released publicly? Does Google have a model of the same class?
Let's write something about AI. I don't have anything in particular on my mind, but a lot of interesting things are happening in AI at the moment (as is always the case these days). So, what's there to think about? Anthropic's Mythos model is news, so I will write down some thoughts on it.
please read jimfund.com
I wrote a story about the future jimfund.com#fiction
My predictions for METR's developer uplift survey came in right on target, but I noticed that the question resolves not to this, but to the last such study whose results are released in 2026... oops.
Today's jimfund blog post is a rebuttal to the Citrini piece.
jimfund.com#2026-iv