
Posts by James Rosen-Birch

We need to talk about the people who still think “AI will just go away”.

6 days ago

Even the pump-and-dumps never went away — they just got formalized into Gambling On Everything With Insider Trading Characteristics

6 days ago

Also, the latter assumption is often based on the fact the crypto bubble popped. But crypto’s still around and pouring billions into political lobbying. So even that assumption is based on a kind of lack of object permanence (‘if it’s not in front of me, it doesn’t exist’)

6 days ago

But this idea still kicking around, that a bubble will pop and everything will just go back to the way it was? More than a little ridiculous.

6 days ago

It may just turn out long-term to be a cost centre under engineering rather than a core revenue engine, and, in the bearmost case, something only homebrew hackers, HNWs (high-net-worth individuals) who pay corporate rates for personal access, and people in enterprise get regular access to.

6 days ago

The underlying tech is useful for a range of tasks; there are no easy substitutes for ‘context-aware natural language search’ or ‘synthetic text generation’; and these models will continue to be used (like earlier generations of neural nets) for as long as the need exists and no better solutions are present.

6 days ago

And even should such a failure happen, the idea that the IP, talent, and assets *won’t* be acquired by another business with an established revenue engine, one that will continue R&D and commercial application (just with greater engineering and financial discipline), is downright silly.

6 days ago

Google, Microsoft, and Meta, like Alibaba, Tencent, Baidu, Bytedance, and High-Flyer, are still going to do LLMs even in the very unlikely event the upstart labs completely fail.

6 days ago

we made being a charlatan a viable career path: TV talking heads, opinion columnists; even journalism proper has become little more than industrialized charlatanism

2 weeks ago
Preview: The FT’s AI optimism rests on shaky science
Last weekend, the Financial Times published an article about the broader social impacts of AI. Studying, managing, and mitigating the negative impacts of the technological transition is supremely impo...

So the FT has this in-house data guy who makes a lot of shit up and vibe-stats his way to justifying his weird ideological whims

most recently he wrote a piece on how AI would make everyone centrist

I wrote about it, and for you methods nerds, you will never believe it

unherd.com/newsroom/the...

2 weeks ago

Great piece in FT on the crisis of humanitarianism.

archive.is/OBSGt

2 months ago
[post image]

re: polls, this is the most recent data from this summer

3 months ago
[post images]

things have changed a lot recently, kevin. these were in our elite paper in the past couple days.

3 months ago

more importantly, there’s a deep conceit to dismissing an entire country’s growing concerns about threats to their sovereignty from an increasingly aggressive neighbour

3 months ago

like, under what threat model is it not possible? DoD has been writing plans for decades on how they’d take every country on the planet

3 months ago

I think you have your head in the sand, assume there’s even remote force parity, and assume the risk of a handful of Americans being captured in Canada is equivalent to that of senior Canadian staff being taken in the US.

also, the moment troops start moving around it is way too late to do anything.

3 months ago

I also think where you see 4D chess and deliberate distraction, we see an extended effort to shift the overton window, normalize the possibility of seizing new territory, and warm the population to the prospect over time

3 months ago

I don’t think you grasp the seriousness here, or how little telegraphing matters, or how joint command makes Canadian officers easy to seize and capture

3 months ago

for us it was never about the imminence of an invasion so much as it was about the need to respond to a very significant change in US posture and global security architecture that could easily escalate to acting on stated claims, for which we need to massively alter our economy and society

3 months ago

I don’t think any targeted military action would involve conscripting academics, no

3 months ago

I am sure as an American it is very nice to be able to just say “nah he doesn’t mean it” but that’s just not a stance you can reasonably take when you’re on the receiving side

3 months ago

by the time there aren’t it will be too late to prepare, which is what we’re doing

3 months ago

we have substantive reason to be concerned up here, kevin

3 months ago

truly we have an object permanence problem

4 months ago

the crypto bubble taught a bunch of people that if they just plug their ears and yell ‘fake’ for long enough, things they don’t like will just go away

(even if those things are still very much around and the people behind them are funding a superpac of unprecedented size)

4 months ago
Preview: Context Widows or, of GPUs, LPUs, and Goal Displacement

Nature would do well to publish more content like this thoughtful piece from @kevinbaker.bsky.social and fewer Buzzfeed listicles gussied up as career advice (“Five productivity hacks for using AI in your scientific workflow”)

4 months ago

It is an amazing piece!

4 months ago

the most striking part is that America is pursuing something whole-hog that it hasn’t even clearly defined

5 months ago

at least Star Wars was deliberate in the 80s, but now? oof.

5 months ago

the fact China seems to have a much clearer idea of the state of AI and where present tech is on the innovation curve than their American counterparts (who are still wholly consumed by the hype machine) is profoundly concerning, even in the event they *are* a little behind

5 months ago