I swear Gemini is trolling me
So why haven't we seen it yet? I think the aggregate brain-hours devoted to human enhancement are pretty tiny right now (compared to AI), and the people doing this work are often not the ones using the work
(This is arguably good; it's because most researchers focus on things like dementia and stroke)
Improving healthspan might be the first place we see this (a number of longevity researchers are personally using their research to stay healthy longer, which makes them more productive), but I'd guess we eventually see it in neurotech too--we just haven't reached the knee of the curve yet
An interesting thing to think about is that recursive self-improvement probably applies to humans too: it's pretty reasonable to think we could make a researcher 15% more productive, enabling them to discover something that makes people 50% more productive, and so on...
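A toy sketch of that compounding (the specific numbers are just illustrative, not from any study):

```python
# Hypothetical compounding of enhancement rounds: a 15% productivity
# boost enables a discovery worth a 50% boost, and so on.
productivity = 1.0
for gain in (0.15, 0.50):
    productivity *= 1 + gain
print(productivity)  # 1.725 -> ~1.7x baseline after just two rounds
```

Each round's gain multiplies into the next, which is why even modest early enhancements could end up mattering a lot.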
True, though I suspect they think of that as a solvable problem, the same way they think of bone structure/height as solvable.
People seem to get hung up on the idea that they're incorrect about what women are looking for in a partner. Which is true but pretty explicitly *not the point*
www.newyorker.com/culture/crit...
I actually think the reasons looksmaxxers do what they do are pretty straightforward. It's about developing a form of social capital that is intrinsic and can't be taken away
The children yearn for the mines.
I think the current weird state of the world makes more sense when you remember that we are quite literally living in the post apocalypse.
It's been like 4 years since people were suddenly dropping dead on the street, of course things are weird.
I think it's worth thinking about the benefits and costs of a creative world where everyone is a director, but as far as I've seen this is rarely talked about.
A really common argument is that people using Suno etc are not being creative or furthering creative culture. I find this a little silly; creativity is about bringing some new vision into the world and whether you use words or strings to do that is immaterial.
I think one thing that's missing from the AI art discourse is the acknowledgement that some creative pursuits have always been about prompting and tuning. Like film/stage directors, screenwriters, composers, fashion designers...
Projections like this make me think we have actually reached a technological singularity (as Kurzweil defines it: a point at which the accelerating pace of change makes predicting the future essentially impossible)
I feel like most people don't realize that if you have a Bluetooth or WiFi enabled device on you (earbuds, smartwatch, fitness thing) it is more or less constantly broadcasting a locator beacon.
www.kold.com/2026/02/17/f...
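If you want to see this for yourself, here's a minimal sketch (assuming the third-party `bleak` library; any BLE scanner works) that lists the advertisements nearby devices are broadcasting:

```python
# Minimal BLE scan: every advertising device in range shows up,
# whether or not its owner realizes it's broadcasting.
import asyncio
from bleak import BleakScanner

async def main():
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(d.address, d.name)

asyncio.run(main())
```

MAC randomization helps, but plenty of devices still advertise stable identifiers anyway.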
It's more that things like brain and behavioral complexity are not necessarily very good indicators of consciousness. There isn't a threshold where we can say "below this level you are definitely not conscious" in a way that applies to LLMs
No one wants AI to be sentient *less* than these CEOs
Ever since the Blake Lemoine incident every AI company has been emphatic that their models are *not* conscious, probably because if the models were conscious then their business model would just be slavery.
We're also much more complex than a toddler but I wouldn't say that means the toddler lacks consciousness.
Complexity/intelligence and sentience seem to be two separate things, and we don't have good ways to measure the latter
This is basically the "made out of meat" argument though. There are differences, sure, but no one has really given a good reason why embodiment is *necessary* for consciousness. (And it's conspicuously absent from most of the leading models)
The assumption that consciousness could occur in systems with simpler/different brain organization than ours
Technically behaviorism is a little different--it's a branch of psych/philosophy that basically says "we are bad at mechanistic interpretability, so the best way to understand a mind is through how it behaves". But behaviorism doesn't make any claims about consciousness.
It's the same reason we assume some animals and young children have conscious experience, even though their brains are organized much differently from ours
There's also a good case to be made that the human brain contains a lot of reinforcement learning and CNN-like motifs doing various things (dopamine reward-prediction-error signals look a lot like temporal-difference learning, and the visual cortex hierarchy resembles a CNN)
Yep. That explains a lot about why priming and implicit learning work.
Of course this argument is weaker for LLMs than for other humans, because they are *less* similar. But it implies that the closer to human behavior something gets, the more convinced we should be that it's conscious...and LLMs are the most humanlike entities we've ever seen
One thing is that LLMs can simulate both individual people and populations of people pretty well (given enough data). They sometimes even have the same cognitive biases.
The argument from analogical inference says that the best explanation for this is that they have minds similar to ours.
I think the only thing that we can say confidently is that they might be conscious, similar to how we don't really know whether or not an octopus is conscious.
I think Turing's point still stands though. Anyone claiming that LLMs can't be conscious because of something about their inner workings is wildly overstating how much we know about the mechanisms of consciousness.
I think it's also a good argument against the argument from incredulity. A lot of humans will say "OBVIOUSLY a transformer model can't be conscious, it has no xxx" and Bisson points out aliens could say the same thing about us.