
Posts by Vincent Carchidi

Seems to this lowly layman that if you want to distance yourself from "human exceptionalism," you need to test the models through means other than these disgusting Homo sapiens benchmarks.

5 minutes ago

I understand the tendency to think people critical of GenAI are given to some kind of "human exceptionalism," but the only basis for claiming GenAI has sophisticated intellectual/cognitive/what-have-you abilities is its performance on characteristically human tasks.

5 minutes ago
Trump says Anthropic is 'shaping up,' open to deal with Pentagon
U.S. President Donald Trump said on Tuesday Anthropic was "shaping up" in the eyes of his administration, opening the door for the AI company to reverse its blacklisting at the Pentagon.

www.reuters.com/legal/govern...

31 minutes ago

(this day is never happy)

1 hour ago

Happy limited DoD budget release day to all who celebrate

1 hour ago

I cannot believe that you think my claim about the end of history and the impending obsolescence of the human race is controversial. Is your head that deep in the sand?

1 hour ago

Really sorry about this one

2 hours ago

Uptime? Nothing dawg what's up with you?

2 hours ago

Look, when YOU get replaced by AI, it's a skill issue.

When *I* get replaced by AI, it's a humbling reminder of the pace of technology development.

2 hours ago

I'm not savvy enough to figure out how to phrase this properly, but I do think the engineering vs. explanation mindset divide is a big part of this. Sort of a difficulty with not viewing every LLM performance in functionalist terms...

2 hours ago

Oh yeah, for sure. Which I don't think is necessarily a bad thing, but leads to the kinds of "total factor productivity cope" lines lol.

Do you think this time could be different? Or that the gains (whatever they might be) just don't show up in TFP?

3 hours ago

I think I see what you're getting at, but what do you mean by "first" tech revolution?

3 hours ago

This is how I feel about the famous 4chan-esque story GPT-3.5 wrote about a guy digging a bottomless pit.

4 hours ago

I'm referring to his recent comments on job displacement, which he's been making for a while now. And also his doomerism. I think he and Bengio just really enjoy telling people the end is nigh.

14 hours ago

You thought this was gonna be a joke. Shame.

16 hours ago
Humboldt: 'On Language'
Wilhelm von Humboldt's classic study of human language was first published in 1836, as a general introduction to his three-volume treatise on the Kawi language of Java. It is the final statement of hi...

Yeah, I'm a Humboldt-head.

Humboldt: www.google.com/books/editio...

16 hours ago

I view him kind of like Dawkins. Great scientist, great in his area, but outside of his area he's self-indulgent and likes the sound of his own voice.

16 hours ago

LLMs keep needing more and more and more, which turns out to be extremely effective for certain things. But the data-reliance means they aren't taking these extra steps.

16 hours ago

I think it's very different, in a few ways, but the most relevant here I'd say is humans can find principled ways of generating ideas, building on them, etc. Not in the sense that humans never generate ideas in a scattershot way, but in the sense that they don't remain within the original 'knowledge enclosure.'

16 hours ago

On the most basic level possible, (and I'm not talking about coding and verification here), I view LLMs' responses to open-ended problems in this way. Sometimes they, too, cook - and training has refined their ability to cook - but we do the selection (and that seems to matter).

17 hours ago

H/t to @desiderratum.bsky.social for helping me put this together.

Sometimes @horsedisc.bsky.social is just cookin. We've all seen it. But we know horse does not know that it's cooking, and has no principled means of cooking, because half the time it's nonsense. We basically select from responses.

17 hours ago

So true, as always.

17 hours ago

...less well*

17 hours ago

This is great. It's like a well less adjusted @horsedisc.bsky.social.

17 hours ago

Get two free bets on Kalshi if you meet your nicotine goals this week!

17 hours ago

Most, if not all, of this, by the way, is consistent with pretty much the normal understanding of deep neural networks before LLMs made everyone go insane. I've said this before (as have many others), but the scalability of transformers did not solve the problems in creating a "general" intelligence.

17 hours ago

I think the ability to interpolate is remarkable at scale. But we were all waiting on this to tip into robust abstraction and generalization beyond training datasets, and instead what happened was they expanded the training datasets, refined them, and scaffolded the models effectively.

17 hours ago

I guess the implicit thought here - on the flip side - is that I've never (or so rarely as to be flukes) personally seen a qualitatively/conceptually interesting work produced by an LLM, nor new and interesting directions for work, etc. The progress just hasn't yielded this.

17 hours ago

(In a closed domain, the challenge of inferring intent is artificially restricted. I think there's a sophisticated ability, but also unfathomable amounts of human generated and synthetic data plus a scaffolding that constrains outputs.)

18 hours ago

The trick with Claude Code-esque systems is to take the expressiveness of a frontier base model, let it shotgun outputs in response to a prompt, but run the bullets through verifiers before returning an answer to the user, which only works so smoothly in closed domains (crude, but you get my point).

18 hours ago
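The shotgun-then-verify loop described in the last post can be sketched in a few lines. This is a toy illustration, not Claude Code's actual implementation: `generate_candidates` and `verifier` are hypothetical stand-ins for sampling completions from a base model and for a closed-domain checker (e.g., running unit tests against generated code).

```python
import random

def generate_candidates(prompt, n=8, rng=None):
    # Hypothetical stand-in for sampling n completions from a base model.
    # Simulates the "shotgun": most samples are nonsense, a few are usable.
    rng = rng or random.Random(0)
    return [f"candidate-{i}" if rng.random() < 0.3 else "nonsense"
            for i in range(n)]

def verifier(candidate):
    # Closed-domain check (e.g., unit tests for generated code).
    # Easy to write here; hard to define for open-ended prose, which is
    # exactly why the pattern works "so smoothly" only in closed domains.
    return candidate != "nonsense"

def shotgun_and_filter(prompt, n=8):
    # Sample many outputs, run each "bullet" through the verifier,
    # and return the first one that survives (or None if none do).
    candidates = generate_candidates(prompt, n=n)
    verified = [c for c in candidates if verifier(c)]
    return verified[0] if verified else None
```

The design point: the model itself is unchanged; the apparent reliability comes from the verification-and-selection scaffold wrapped around it.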