An Ecuadorian fishing crew describe their ordeal as victims of Trump’s purported war on ‘narcoterrorists’
Posts by lil homie gay ass
it's always in the future tense!
And people do indeed also frequently "approximate" reasoning or produce "reason-shaped" texts. You're doing a lot of it here, but I don't doubt your humanity.
The reasons a person fails at a task are different from the reasons a language model would, unless you take the two to be abstractly the same, which seems more metaphysical to me.
I don't know what metaphysics is, but everything I know about how these computer programs work and how frequently they fail leads me to believe that they are "reason approximators" and produce reason-shaped text sequences, not genuinely reasoning, any more than a GOFAI program would be "reasoning".
I feel like you've lost the plot here because what you're saying is completely in line with the contention that LLMs do not "reason" but extrude reason-shaped text that sometimes does and sometimes doesn't actually approximate reasoning (which is also an empirical reality you ignore).
Oh okay didn't realize I was talking to a philosophical idealist here.
Putting aside the bad jargon what you're describing here is literally linear algebra. Corey isn't saying LLMs are GOFAI, he's saying that it's still, at the end of the day, symbolic manipulation.
bsky.app/profile/nafn...
Yeah I don't take thought experiments seriously when they require me to disregard basic aspects of physical reality or depend on non sequitur like "imagine a Being made out of math".
1+1=2 is a symbolic manipulation, linear algebra is also a symbolic manipulation.
"but their underlying mechanism was math, not cells"
lmfao ok
Except that the LLM is literally a pile of statistics, that is what it is doing to predict a token sequence, even if you obscure the statistics with neural network jargon.
I mean I know that paper is sheer BS but I'm curious about the peripheral neuron thing.
Paging @kh0rish.bsky.social how much of this is BS
I did lol. Like I said, I'm not good at math, but armed with the high-level understanding that the models produce one token after another, I'm not led to any different conclusion than the one I was already at.
LLMs don't work by fuzzy logic and if you're lying about that I have to assume a skeptical position on the rest of your replies.
You can find the answer in a textbook I'm sure. High-level, it's a big probability equation that estimates the likelihood of one token following another given a statistical distribution of tokens, and picks accordingly. And this is true whether or not the output is "correct" or "useful" according to a human purpose.
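To make the "big probability equation" claim concrete, here's a toy sketch of next-token sampling (my own illustration, not any real model's code): arbitrary made-up scores get turned into a probability distribution over a tiny vocabulary, and one token is drawn from it.

```python
import math
import random

# Toy vocabulary and made-up scores (logits) — purely illustrative,
# not taken from any actual LLM.
vocab = ["the", "cat", "sat", "<end>"]
logits = [2.0, 1.0, 0.5, 0.1]

# Softmax: turn arbitrary scores into probabilities that sum to 1.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Sample the next token according to those probabilities.
# Generation is just this step repeated until a stop token appears.
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

A real model computes the scores with billions of learned parameters, but the selection step at the end is this same kind of weighted draw.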
These systems aren't that complicated, you shouldn't need a reminder from me. The termination token is probably something like "QED", regardless of correctness. And I'm not saying they're the same as GOFAI except in that they're computer programs (I am also not a proof writer, I am bad at math).
It's just a different kind of proof-writer, one that uses next-token prediction and a training configuration filled with mathematical proofs instead of (or in addition to, as has been the case) GOFAI techniques. And one which is essentially also brute forcing around reasoning-shaped text objects.
The last sentence is false because in reality that is exactly what is happening with proof-writing programs be they GOFAI or connectionist.
I'm serious though, this whole "it'll take longer to learn 😔" thing is straight up helplessness. The software industry is selling you half-assedness as a commodity.
Instead of having a deterministic program that does what you want 100% of the time, here's a probabilistic program that does what you want 80% of the time. This is the New Paradigm.
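The deterministic-vs-probabilistic contrast above can be shown with a toy example (mine, not from any vendor): a function that is always correct next to a stand-in that silently fails about 20% of the time.

```python
import random

def deterministic_sort(xs):
    # Does what you want 100% of the time.
    return sorted(xs)

def flaky_sort(xs, p_fail=0.2):
    # Toy stand-in for a probabilistic tool: silently returns
    # the input unsorted roughly 20% of the time.
    if random.random() < p_fail:
        return list(xs)
    return sorted(xs)

random.seed(0)
trials = 10_000
ok = sum(flaky_sort([3, 1, 2]) == [1, 2, 3] for _ in range(trials))
success_rate = ok / trials  # hovers around 0.8
```

The point of the sketch is that the failure is silent: nothing in the return value tells you which 20% you landed in, so every output has to be checked by hand.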
In any case I don't know how anyone can look at Libya and say "mission accomplished" without any sense of shame. NATO's intervention destroyed Libya. The bare utilitarian calculus against the intervention is obvious.
Although the US/NATO did act as a proxy air force for the Kurdish militias until Turkey put the kibosh on that once the Kurds defeated ISIS.
NATO didn't act as a proxy air force for the kaleidoscope of jihadist militias that overran the country, but the US did provide arms to them, as did Turkey.
en.wikipedia.org/wiki/Timber_...
Speaking of "attacking protesters", you guys over there can't even say "free Palestine" without getting beaten and arrested. You think you're free?
Are you suggesting NATO countries didn't intervene in Syria? Because that's just a flat out lie. Syria is another great example of what happens when western nations meddle in the Middle East: they instigate humanitarian crises and influxes of refugees across their borders.
To be honest, until the LLM vendors are transparent about what's in their training configs (which probably opens them to criminal and civil liability) and about what their post-training stages look like and involve, all of their reports are essentially pseudo-science.
The entire story of LLM development is in the training configuration and RLHF stages. That should be the story, not this fake ass Faustian bullshit where the model escapes its creators. They designed it to do this, they probably hired experts to massage the model to do what they claimed.