Look, for my money the absolute game-changer technologies right now are batteries and biosciences, not statistically modeling a mid conversation, but you do you.
Posts by Luke VanderHart
I regret to inform you that this is true. My local brewpub has a new "fish and chips" dish... and the chips are plantains.
Fix your hearts or die.
I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field.
The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated.
This is a must-read series of articles, and I think Kyle is very much correct.
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
Yeah what’s funny is there are actually a thousand years of “just war theory” and Trump is in the wrong according to all of them.
Seeing a fascist get democratically defeated after 16 years (despite doing their best to consolidate power) is immensely hopeful.
All is not lost. All is never lost.
Well, one bit of evidence is that I can argue an LLM into taking virtually any position on any topic, the only exceptions being topics on which it has received extensive RLHF "alignment" training (and even those can be jailbroken).
That is very much not the case with humans.
It's really important to keep in mind that for any output, an LLM would just as happily say exactly the opposite given different inputs.
To me, this is pretty strong evidence that while they are capable language machines, they are not "intelligent."
So that's good (because sycophancy is cognitively hazardous) but it doesn't ultimately solve the problem.
My ideal "thought partner" agrees or disagrees because it has opinions and ideas of its own, not varying levels of RLHF for different styles of response.
I actually get the most leverage from telling it to rebuff my position.
A view of "how the average person on the internet would disagree with this" is quite helpful actually.
It's not blatant -- in fact, it's clearly been trained to "point out issues" especially on the first response. As an exchange goes on, it ultimately always conforms itself to the user.
This particular example was Claude 4.5.
But once you see the soft sycophancy (and Claude is still WAY better than GPT) it's hard to un-see, and it's present in 4.6 as well.
The only reason I was able to catch this was because this was a technical problem that ultimately conflicted with reality -- I really worry about people using them for more subjective, non-falsifiable thought processes.
I do still worry about using them for dialogue, given the level of sycophancy.
I have personal experience in following very incorrect paths while exploring problems with them & being extremely confused until I backtracked and found where they'd cheerfully affirmed a mistake early on.
BREAKING: Following the American threat of an “Avignon Papacy,” Robert Kennedy has begun a Diet of Worms
🚨New preprint and our results are rather concerning…
We find the "boiling frog" equivalent of AI use. Using large-scale RCTs, we provide *causal* evidence that AI assistance reduces persistence and hurts independent performance.
And these effects emerge after just 10–15 minutes of AI use!
1/
“Most large companies are spending more time strategizing against their employees than against their competitors.” 🖐️🎤
The ATmosphere, however, is growing.
Oh sure, but some people are sensitive enough that anything less than universal affirmation feels like oppression.
Even if there were only a single person saying that AI tools aren't appropriate for human-centered writing, they'd still interpret that as personal criticism and argue back.
Frankly, people such as yourself, who go to lengths to argue for their legitimacy, seem to be doing so more out of a desire for affirmation or validation than any other reason. As if this "tool" was somehow implicated in the skill or quality of one's writing.
Which it is.
The fact that you're arguing this online though means that it's not, in fact, just a tool. Spellcheckers and thesauri never needed apologists to convince people they were the future. People just use them -- or not -- with no fuss.
I hope you are wrong while fearing you are right.
This, 100%. And I say this as someone who does use LLMs for some aspects of my technical work -- I'm not categorically opposed to the technology.
But if I -- a human -- want to communicate with other humans... Using an LLM just gets in the way, to say nothing of how profoundly disrespectful it is.
Separately, this does have me thinking about the best way to bill in this new world. "Hours in front of keyboard" was never perfect, but it makes even less sense now -- the real value I provide is knowing *what* to build, not the time spent building it (which can now vary quite wildly, depending).
Ideally, I try to spend the downtime working on something else related to the same project (e.g., researching or planning a new feature) so there isn't ambiguity.