
Posts by Nathan Whitmore


I swear Gemini is trolling me

2 weeks ago

So why haven't we seen it yet? I think the aggregate brain hours devoted to human enhancement is pretty tiny currently (compared to AI) and the people doing this work are often not the ones using the work

(This is arguably good; it's because most researchers focus on things like dementia and stroke)

2 weeks ago

Improving healthspan might be the first place we see this (there are a number of longevity researchers personally using their research to stay healthy longer, which makes them more productive) but I'd guess we eventually see it in neurotech too--but we haven't reached the knee of the curve yet

2 weeks ago

An interesting thing to think about is that recursive self improvement probably applies to humans too: it's pretty reasonable to think that we could make a researcher 15% more productive, enabling them to discover something that makes people 50% more productive, etc...

2 weeks ago

True though I suspect they think of that as a solvable problem, the same way they think of bone structure/height as solvable.

2 weeks ago
The Captivating Derangement of the Looksmaxxing Movement
In their warped and wrongheaded way, the omnipresent influencer Clavicular and his looksmaxxing compatriots are intent on demystifying the ideal of natural beauty.

People seem to get hung up on the idea that they're incorrect about what women are looking for in a partner. Which is true but pretty explicitly *not the point*

www.newyorker.com/culture/crit...

2 weeks ago

I actually think the reasons looksmaxxers do what they do are pretty straightforward. It's about developing a form of social capital that is intrinsic and can't be taken away

2 weeks ago

The children yearn for the mines.

1 month ago

I think the current weird state of the world makes more sense when you remember that we are quite literally living in the post apocalypse.

It's been like 4 years since people were suddenly dropping dead on the street, of course things are weird.

1 month ago

I think it's worth thinking about the benefits and costs of a creative world where everyone is a director, but as far as I've seen this is rarely talked about.

1 month ago

A really common argument is that people using Suno etc are not being creative or furthering creative culture. I find this a little silly; creativity is about bringing some new vision into the world and whether you use words or strings to do that is immaterial.

1 month ago

I think one thing that's missing from the AI art discourse is the acknowledgement that some creative pursuits have always been about prompting and tuning. Like film/stage directors, screenwriters, composers, fashion designers...

1 month ago

Projections like this make me think we have actually reached a technological singularity (as Kurzweil defines it, a point at which the accelerating pace of change makes predicting the future essentially impossible)

1 month ago
FBI using ‘signal sniffer’ technology to search for Nancy Guthrie’s pacemaker
The device is attached to low-flying aircraft to identify Bluetooth signals, like those emitted from pacemakers.

I feel like most people don't realize that if you have a Bluetooth or WiFi enabled device on you (earbuds, smartwatch, fitness thing) it is more or less constantly broadcasting a locator beacon.

www.kold.com/2026/02/17/f...

2 months ago
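The "locator beacon" here is the BLE advertising packet: a device periodically broadcasts a small frame containing its advertiser address plus length-prefixed AD structures (name, manufacturer data, etc.), and a passive listener can log those addresses to track the device. Below is a minimal sketch of parsing a simplified advertising PDU; the packet bytes are synthetic and the layout is abbreviated (real scanning requires OS Bluetooth APIs, and modern phones mitigate this by rotating randomized addresses, though many accessories don't).

```python
def parse_adv(pdu: bytes):
    """Parse a simplified BLE advertising PDU:
    2-byte header, 6-byte advertiser address (transmitted
    little-endian), then a run of AD structures, each laid out as
    [length L][AD type][L-1 bytes of data]."""
    # Address bytes are on the air least-significant first; reverse
    # them for the conventional AA:BB:CC:DD:EE:FF display order.
    addr = ":".join(f"{b:02X}" for b in pdu[2:8][::-1])
    fields = {}
    i = 8
    while i < len(pdu):
        length = pdu[i]
        if length == 0:          # early-termination marker / padding
            break
        ad_type = pdu[i + 1]
        fields[ad_type] = pdu[i + 2 : i + 1 + length]
        i += 1 + length
    return addr, fields

# Synthetic packet: header, address AA:BB:CC:DD:EE:FF, and one
# AD structure of type 0x09 (Complete Local Name) carrying "buds".
pkt = (bytes([0x40, 0x0C])
       + bytes([0xFF, 0xEE, 0xDD, 0xCC, 0xBB, 0xAA])
       + bytes([5, 0x09]) + b"buds")

addr, fields = parse_adv(pkt)
print(addr)          # AA:BB:CC:DD:EE:FF
print(fields[0x09])  # b'buds'
```

The privacy point is that `addr` is stable across broadcasts for many devices, so anyone logging these frames over time (from a car, or a low-flying aircraft) can correlate sightings of the same device.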

It's more that things like brain and behavioral complexity are not necessarily very good indicators of consciousness. There isn't a threshold where we can say "below this level you are definitely not conscious" in a way that applies to LLMs

2 months ago

No one wants AI to be sentient *less* than these CEOs

2 months ago

Ever since the Blake Lemoine incident every AI company has been emphatic that their models are *not* conscious, probably because if the models were conscious then their business model would just be slavery.

2 months ago

We're also much more complex than a toddler but I wouldn't say that means the toddler lacks consciousness.

Complexity/intelligence and sentience seem to be two separate things, and we don't have good ways to measure the latter

2 months ago

This is basically the made out of meat argument though. There are differences sure, but no one has really given a good reason for why embodiment is *necessary* for consciousness. (And it's conspicuously absent from most of the leading models)

2 months ago

The assumption that consciousness could occur in systems with simpler/different brain organization than ours

2 months ago

Technically behaviorism is a little different--it's a branch of psych/philosophy that basically says "we are bad at mechanistic interpretability so the best way to understand a mind is through how it behaves". But behaviorism doesn't make any claims about consciousness.

2 months ago

It's the same reason we assume some animals and young children have conscious experience, even though their brains are organized much differently from ours

2 months ago

There's also a good case to be made that the human brain contains a lot of reinforcement learning and CNN-like motifs doing various things

2 months ago

Yep. That explains a lot about why priming and implicit learning work.

2 months ago

Of course this argument is weaker for LLMs than for other humans, because they are *less* similar. But it implies that the closer to human behavior something gets, the more convinced we should be that it's conscious...and LLMs are the most humanlike entities we've ever seen

2 months ago
Other Minds (Stanford Encyclopedia of Philosophy/Fall 2009 Edition)

One thing is that LLMs can simulate both individual people and populations of people pretty well (given enough data). They sometimes even have the same cognitive biases.

The argument from analogical inference says that the best explanation for this is that they have minds similar to ours.

2 months ago

I think the only thing that we can say confidently is that they might be conscious, similar to how we don't really know whether or not an octopus is conscious.

2 months ago

I think Turing's point still stands though. Anyone claiming that LLMs can't be conscious because of something about their inner workings is wildly overstating how much we know about the mechanisms of consciousness.

2 months ago

I think it's also a good argument against the argument from incredulity. A lot of humans will say "OBVIOUSLY a transformer model can't be conscious, it has no xxx" and Bisson points out aliens could say the same thing about us.

2 months ago
They're Made out of Meat

Reminds me of this www.mit.edu/people/dpoli...

2 months ago