“I used AI to combine the data from two Excel lists and then send emails to people who were on one list but not another. Saved me so much time.”
My brother in academia, you just fucking discovered mail merge. Welcome to early nineties computing.
Even when the chatbot does output Python to do the task instead of it being left entirely to the ✨latent space✨, it usually runs in a sandbox and the chatbot relays the output.
But they're not discovering how to use computers. They're discovering how to "use" a chatbot.
If they got the chatbot to write a little Python script for them to do the task that they could then keep and study, that would be a bit better. But that is not what happens.
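For illustration, a minimal sketch of the kind of keepable script meant here, assuming both spreadsheets have an "email" column; the file names and column name are made up, and reading .xlsx files with pandas needs openpyxl installed:

```python
import pandas as pd

# Everyone who should be contacted, and everyone who already has been.
invited = pd.read_excel("invited.xlsx")
responded = pd.read_excel("responded.xlsx")

# Normalise addresses so casing or stray whitespace doesn't hide matches.
invited_emails = invited["email"].str.strip().str.lower()
responded_emails = set(responded["email"].str.strip().str.lower())

# People on the first list but not the second.
to_contact = invited[~invited_emails.isin(responded_emails)]
to_contact.to_csv("to_contact.csv", index=False)
print(f"{len(to_contact)} people still need an email")
```

A dozen lines you could read, keep, and reuse. The actual sending step is left out because it depends on your mail setup.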
If something can be fully represented symbolically, rather than conceptually, in a way that GOFAI can work with, I consider that GOFAI process to be automated reasoning, but it's not cognition and does not have the breadth of cognitive reasoning.
The Claudeswallop incident has really radicalised me tbh.
Dressing up LLM output as human words is an injury to us all. Vulgar. Sickening. It is like labelling piss as lemonade and gaslighting us into drinking the whole bottle. It is like throwing sand into our eyes. Also, it's just a dick move.
Putting a language-shaped object in front of us and lying to us that it came from a mind is also a consent violation.
I did not consent to cognitive corrosion.
Reasoning about concepts does.
They are like, 'We defeated our arch nemesis, The Symbolicist, with benchmarks, so we are right about everything and you must worship our machine demigod! Soon to be scaled into a machine god, watch this space!'
But by claiming to understand language and thought, they unlocked a new level.
Connectionists who are enamoured with LLMs hate linguists and cognitive scientists because they'd much rather be arguing with symbolicists about Searle's Chinese Room or whatever. They thought the final boss was Gary Marcus, but the final bosses are actually Emily M. Bender and Iris van Rooij.
No, it's just not reasoning. It's not different reasoning. The Reversal Curse paper authors can't work out what's going on and they flail around because they're too tied up in their prior assumption of concepts in the latent space.
Nah those are just different pattern matching operations. There are no concepts in the latent space, that's not what's encoded.
One is reasoning. The other is text.
If an agentic system has solved it through a genetic algorithm, the mechanics of which are to produce a solution, that is the capability of solving it. If an LLM has produced it through next token prediction, the capability is producing solution-shaped text that a human has then read as a solution.
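To make the contrast concrete, here is a toy genetic algorithm whose mechanics are explicitly aimed at producing a solution: a fitness function is defined up front and the select-and-mutate loop exists only to climb it. The problem and parameters are invented for the example.

```python
import random

TARGET = [1] * 20  # toy problem: evolve a bit-string of all ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(len(TARGET))] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a solution by the system's own criterion
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(generation, fitness(population[0]))
```

Whether a candidate counts as a solution is checked inside the system itself; a next-token predictor has no analogous check.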
You cannot reverse engineer a reasoning process from them. You can reverse engineer reasoning-shaped text from them.
You also have to be careful to not get caught up in the things you were just unable to focus on before if they aren't healthy or productive.
But your question was how it feels, so the most succinct answer I can give is that it feels like a gear shift.
They don't magically undo all the mental scars left by years of not being able to learn a healthy way to be productive. They repair the broken instrument (to a degree) but don't teach you how to use it.
Gil Duran tweet: "TLDR: Fascism", in response to Palantir's long fascist screed on X.
"Your Account is Suspended" Message on X
The CEO of Palantir posted a fascist manifesto on X.
I pointed out that it was fascistβwhich resulted in a permanent suspension from X (my second time!).
So, when you hear the tweeters complaining that BlueSky is intolerant, remember why many of us came here in the first place.
Because the LLM is not the whole of AlphaGeometry, just like it's not the whole of any agentic system that puts the LLM's outputs through any set of logical rules. In such a system the LLM is the part that predicts tokens.
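A toy sketch of that division of labour (not AlphaGeometry itself): the proposer below is just a random generator standing in for the LLM's token prediction, and the verifier is an ordinary set of human-written rules. Only what the verifier accepts counts as a solution.

```python
import random

def propose(target):
    # Stand-in for the neural proposer: emits candidate expressions.
    a, b = random.randint(0, target), random.randint(0, target)
    op = random.choice(["+", "*"])
    return f"{a} {op} {b}"

def verify(expr, target):
    # Stand-in for the symbolic engine: deterministic, human-written rules.
    a, op, b = expr.split()
    value = int(a) + int(b) if op == "+" else int(a) * int(b)
    return value == target

def solve(target, attempts=10_000):
    for _ in range(attempts):
        candidate = propose(target)
        if verify(candidate, target):
            return candidate
    return None

print(solve(24))  # e.g. "4 * 6" or "20 + 4"
```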
The process is still human use of language, just as how the process of passing on genes is still reproduction.
We can't access those signals because we aren't deep learning algorithms. The signals are orthogonal to what we do with language.
That's not a concession. I have always considered that to be the case with, for instance, AlphaGeometry, with its rules-based symbolic engine. That is essentially a neuro-symbolic architecture with a very specific domain. It is automated reasoning, though not understanding. Humans wrote the rules.
Codon usage bias, which is not neutral but almost never has a high enough selection coefficient at the individual codon level to affect reproductive success; mutational signal patterns that tell us how a mutation occurred, not just the consequence of it; truly neutral substitutions.
Capability requires performing an action to achieve an outcome. The utility of a capability is inherent to it. An agentic system could be said to have a capability if it's reliable enough to be useful, though the intentionality still comes from the human who set up the agent, not the agent itself.
Because humans are not only conceptualising. That's not the entirety of our cognition.
Why? Genomes are full of signals that natural selection doesn't have access to.
Why does there need to be a straightforward way, when the LLM's pattern matching is superhuman? It can be an extremely non-straightforward way that relies on syntactic signals that are too high-dimensional for humans to parse.
Understanding requires conceptualisation. If LLMs are conceptualising, why the Reversal Curse? If it is a general understanding ability, why the jagged intelligence aspect of only being able to produce proofs for some problems, not others, which doesn't correlate with the difficulty of the problem?
But it's not pattern matching the problem. It's pattern matching human language that describes problems, in a domain where humans describe those problems in a highly structured way.
They are capable only of providing a next token prediction. Sometimes humans successfully extract utility from those predictions. That does not mean that the utility that humans have been able to extract is the capability of the LLM.
They aren't random. In fact, in the latest generation of SOTA LLMs you can turn the temperature all the way down to 0 and it barely degrades the output. It is syntactic pattern matching to an unimaginably superhuman degree. It's not gaining human reasoning ability, it's doing something different.
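For anyone unfamiliar with what "temperature 0" means mechanically: the logits are divided by the temperature before the softmax, and as the temperature goes to 0 the distribution collapses onto the single highest-logit token, i.e. pure greedy decoding with no randomness at all. The logits below are made-up numbers.

```python
import numpy as np

def next_token_probs(logits, temperature):
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        probs = np.zeros_like(logits)
        probs[np.argmax(logits)] = 1.0   # greedy: always the top token
        return probs
    scaled = logits / temperature
    scaled -= scaled.max()               # for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.5, 0.3]
for t in (1.0, 0.5, 0.0):
    print(t, next_token_probs(logits, t).round(3))
```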
If you use an LLM for your bioinformatics coding then you are still outsourcing the task, but to something that you can't trust. Someone who doesn't know how to evaluate code can't use LLM code for anything that can't be easily fully validated from outputs matching expectation.
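What "easily fully validated from outputs matching expectation" looks like in practice, sketched with a hypothetical chatbot-written function and a handful of known answers; the validation only ever covers the cases you already know:

```python
def reverse_complement(seq):
    # Imagine this body came from a chatbot; you treat it as untrusted.
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(complement[base] for base in reversed(seq))

# Cases where the correct answer is already known.
known_cases = {
    "ATGC": "GCAT",
    "AAAA": "TTTT",
    "GATTACA": "TGTAATC",
}

for seq, expected in known_cases.items():
    assert reverse_complement(seq) == expected, f"failed on {seq}"
print("matches the known cases; everything beyond them is still on trust")
```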