
Posts by lil homie gay ass

‘We were terrified they were going to kill us’: fishers who survived US boat strike speak out

An Ecuadorian fishing crew describe their ordeal as victims of Trump’s purported war on ‘narcoterrorists’

17 hours ago 149 92 3 13
EsoLang-Bench: Evaluating Genuine Reasoning in Large Language Models via Esoteric Programming Languages Large language models achieve near-ceiling performance on code generation benchmarks, yet these results increasingly reflect memorization rather than genuine reasoning. We introduce EsoLang-Bench, a b...

Quick, supplement the training configuration!
ui.adsabs.harvard.edu/abs/2026arXi...

10 hours ago 0 0 0 0

it's always in the future tense!

11 hours ago 1 0 0 0

And people do indeed also frequently "approximate" reasoning or produce "reason-shaped" texts. You're doing a lot of it here, but I don't doubt your humanity.

11 hours ago 0 0 0 0

The reasons a person fails at a task are different from the reasons a language model would, unless you take the two to be abstractly the same, which seems more metaphysical to me.

11 hours ago 0 0 1 0

I don't know what metaphysics is, but everything I know about how these computer programs work and how frequently they fail leads me to believe that they are "reason approximators" and produce reason-shaped text sequences, not genuinely reasoning, any more than a GOFAI program would be "reasoning".

11 hours ago 0 0 1 0

I feel like you've lost the plot here because what you're saying is completely in line with the contention that LLMs do not "reason" but extrude reason-shaped text that sometimes does and sometimes doesn't actually approximate reasoning (which is also an empirical reality you ignore).

11 hours ago 0 0 1 0

Oh okay didn't realize I was talking to a philosophical idealist here.

12 hours ago 0 0 0 0

Putting aside the bad jargon, what you're describing here is literally linear algebra. Corey isn't saying LLMs are GOFAI, he's saying that it's still, at the end of the day, symbolic manipulation.
bsky.app/profile/nafn...

12 hours ago 0 0 1 0

Yeah I don't take thought experiments seriously when they require me to disregard basic aspects of physical reality or depend on non sequiturs like "imagine a Being made out of math".

12 hours ago 0 0 1 0

1+1=2 is a symbolic manipulation, linear algebra is also a symbolic manipulation.

12 hours ago 1 0 1 0

"but their underlying mechanism was math, not cells"

lmfao ok

12 hours ago 0 0 1 0

Except that the LLM is literally a pile of statistics: that is what it is doing to predict a token sequence, even if you obscure the statistics with neural network jargon.

12 hours ago 0 1 0 0

I mean I know that paper is sheer BS but I'm curious about the peripheral neuron thing.

12 hours ago 0 0 0 0

Paging @kh0rish.bsky.social how much of this is BS

12 hours ago 0 0 2 0

I did lol. Like I said, I'm not good at math, but armed with the high-level understanding that the models produce one token after another, I'm not led to any conclusion different from the one I was already at.

12 hours ago 0 0 1 0

LLMs don't work by fuzzy logic and if you're lying about that I have to assume a skeptical position on the rest of your replies.

12 hours ago 0 0 2 0

You can find the answer in a textbook I'm sure. High-level, it's a big probability equation that estimates the likelihood of one token following another given a statistical distribution of tokens, and picks the next token accordingly. And this is true whether or not the output is "correct" or "useful" according to a human purpose.
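That "big probability equation" can be sketched in a few lines. Everything below is invented for illustration (the toy vocabulary, the scores): a real LLM computes its scores with a neural network over billions of parameters, but the final step is the same shape — softmax the scores into a probability distribution, then sample the next token from it.

```python
import math
import random

# Hypothetical toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def softmax(scores):
    """Turn unnormalized scores into probabilities that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(scores, rng):
    """Sample one token from the distribution defined by the scores."""
    probs = softmax(scores)
    return rng.choices(VOCAB, weights=probs, k=1)[0]

rng = random.Random(0)
scores = [0.1, 2.0, 0.3, 0.2, 1.5, 0.0]  # made-up model outputs
token = sample_next_token(scores, rng)
```

Generating text is just this step in a loop: append the sampled token to the context, recompute scores, sample again, stop at the end-of-sequence token.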

13 hours ago 0 0 2 0

These systems aren't that complicated, you shouldn't need a reminder from me. The termination token is probably something like "QED", regardless of correctness. And I'm not saying they're the same as GOFAI except in that they're computer programs (I am also not a proof writer, I am bad at math).

13 hours ago 2 0 1 0

It's just a different kind of proof-writer, one that uses next-token prediction and a training configuration filled with mathematical proofs instead of (or in addition to, as has been the case) GOFAI techniques. And one which is essentially also brute forcing around reasoning-shaped text objects.

13 hours ago 1 0 1 0

The last sentence is false because in reality that is exactly what is happening with proof-writing programs be they GOFAI or connectionist.

14 hours ago 2 0 1 0

I'm serious though, this whole "it'll take longer to learn 😔" thing is straight up helplessness. The software industry is selling you half-assedness as a commodity.

15 hours ago 4 0 0 0

Instead of having a deterministic program that does what you want 100% of the time, here's a probabilistic program that does what you want 80% of the time. This is the New Paradigm.

15 hours ago 5 0 2 0

In any case I don't know how anyone can look at Libya and say "mission accomplished" without any sense of shame. NATO's intervention destroyed Libya. The bare utilitarian calculus against the intervention is obvious.

1 day ago 0 0 0 0

Although the US/NATO did act as a proxy air force for the Kurdish militias until Turkey put the kibosh on that once the Kurds defeated ISIS.

1 day ago 0 0 1 0
Timber Sycamore - Wikipedia

NATO didn't act as a proxy air force for the kaleidoscope of jihadist militias that overran the country, but the US did provide arms to them, as did Turkey.
en.wikipedia.org/wiki/Timber_...

1 day ago 0 0 1 0

Speaking of "attacking protesters", you guys over there can't even say "free Palestine" without getting beaten and arrested. You think you're free?

1 day ago 0 0 0 1

Are you suggesting NATO countries didn't intervene in Syria? Because that's just a flat out lie. Syria is another great example of what happens when western nations meddle in the Middle East: they instigate humanitarian crises and influxes of refugees across their borders.

1 day ago 0 0 2 0

To be honest until the LLM vendors are transparent about what's in their training configs (which probably opens them to criminal and civil liability) and what their post-training stages look like and involve, all of their reports are essentially pseudo-science.

1 day ago 0 0 1 0

The entire story of LLM development is in the training configuration and RLHF stages. That should be the story, not this fake ass Faustian bullshit where the model escapes its creators. They designed it to do this, they probably hired experts to massage the model to do what they claimed.

1 day ago 0 0 1 0