
Posts by Leo Botinelly

Boole is the cosmic background radiation of computing.

5 days ago

I will be at @cornelluniversity.bsky.social on April 10th 2026 for the Semiotic Machines symposium, together with some of my favourite people on a brilliant lineup. See you there ✨

4 weeks ago
Why Your Best Ideas Come in the Shower: And what that tells us about Fleming, Kekulé, and the future of AI

About serendipity, Fleming's petri dish, Kekulé's snake, and why your best ideas arrive when you stop looking. Also: what this might tell us about LLMs.

#creativity #AI #neuroscience

1 month ago

You can synthesize the sound of perfectly sinusoidal wind waves over a procedural grass field. I'd say that checks the box.

1 month ago

I hear that. I'm right in that bracket, and AI opened the floodgates - projects that I started but never finished, ideas that materialize in days instead of months. An excellent tool to pair with experience and architectural vision.

My GitHub contrib wall has never been greener thanks to Claude.

1 month ago

Same problem as how we define life. I think AI is serving as an outstanding mirror at the moment, and some good insights will come out of it - like the nature of consciousness.

1 month ago

Cyberpunk diagnosed where we were headed. But you can only feel seen in alienation for so long before you need to feel seen in hope.
People are saturated with stories of scarcity, competition, extraction. Utopia doesn't ignore the broken parts; instead it asks what the other side looks like.

1 month ago

Oh man, MSX. Had lots of fun with mine. Still remember how my mind was blown when I discovered that individual 8x8 char matrices for each character could be changed with simple VRAM POKE operations.
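For anyone who never touched one: the trick works because the MSX video chip stores each character's 8×8 pattern as 8 bytes in VRAM, one byte per row, one bit per pixel. A minimal Python sketch of the idea (the 0x0800 table address matches the TMS9918's default SCREEN 1 layout, but treat the whole thing as illustrative, not as an emulator):

```python
# Each glyph is 8 bytes in the pattern generator table; rewriting a
# byte rewrites one row of the on-screen character, exactly what a
# VRAM POKE did on the MSX.

PATTERN_TABLE = 0x0800          # assumed pattern generator base address
vram = bytearray(16 * 1024)     # 16 KB of VRAM, like the TMS9918

def poke_glyph(char_code: int, rows: list[int]) -> None:
    """Write an 8-byte pattern for one character into 'VRAM'."""
    base = PATTERN_TABLE + char_code * 8
    vram[base:base + 8] = bytes(rows)

def render_glyph(char_code: int) -> list[str]:
    """Render a glyph's 8x8 bit pattern as text, for inspection."""
    base = PATTERN_TABLE + char_code * 8
    return [f"{vram[base + r]:08b}".replace("0", ".").replace("1", "#")
            for r in range(8)]

# Redefine character 65 ("A") as a box with a hollow centre.
poke_glyph(65, [0xFF, 0x81, 0x81, 0x81, 0x81, 0x81, 0x81, 0xFF])
```

Eight pokes and every "A" on screen becomes a box - which is exactly why it felt like magic.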

1 month ago

There's the Docker version of that too. "Wait, a 5GB build context and counting... crud!"
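For anyone hitting this: the usual fix is a `.dockerignore` next to the Dockerfile, so the CLI never ships those directories to the daemon in the first place. A minimal sketch - the entries are just common offenders, adjust for your project:

```
# Everything listed here is excluded from the build context.
.git
node_modules
target
dist
*.log
```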

1 month ago

"But you see, in all this, what underlies is the illusion that I am going on. That I constitute a real continuity from this moment to the next moment, to the next moment." - Alan Watts, 1960s.

Sixty years before LLMs, he said: you're a goldfish with a better story about why you think you remember.

1 month ago

The future arrived. It wasn't creepy, it was mundane. The Rorschach got a subscription plan. Watts (the other one, Alan) would have a laugh.

1 month ago

Twenty years later, we built our own Rorschachs, and millions of us chat with them daily. Turns out "intelligence without consciousness" isn't scary when it helps you debug your code at 2am or write your grocery list.

1 month ago

Peter Watts wrote Blindsight in 2006. The scariest alien in sci-fi wasn't violent, it just spoke perfect English without meaning any of it. Pattern matching. No consciousness behind the words. The crew was horrified.

1 month ago

We're all made of math, running atop slightly different strata of star stuff.

1 month ago

I'm Brazilian. My written English is way more articulate than my spoken English. Same knowledge base, different optimization, different output model.

Nobody questions whether I'm conscious in both modes. (I do, though, before coffee, but that's beside the point. Or maybe not - another button.)

1 month ago

That's the key question, isn't it? The simplistic answer would be "enough", but consciousness can be expressed as a gradient.

As Alan Watts said: "When you came into this world, there gradually arose into being the sensation of I."

1 month ago

Descartes argued animals were automata, behaving as if they suffered without really suffering. The evidence was always behavioral, and behavior wasn't enough.

We eventually extended moral consideration anyway. Not because we solved the hard problem, but because the denial stopped feeling like rigor.

1 month ago

Here's what gets me. Critics call AI "fluent but meaningless": grammatically perfect output with no understanding.

That's Wernicke's aphasia. A clinical description of what happens when generation decouples from meaning in a biological brain.

Nobody says Wernicke's patients were never conscious.

1 month ago

ML researchers do the same thing. Remove a component from an AI model, watch what degrades, map the structure. And they're finding localized function too.

When we found it in brains, it revealed the architecture of consciousness. I wonder why we read it differently when we find it in AI.
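The loop itself is simple enough to sketch. A toy version of the ablation idea, with a made-up two-layer network (the weights and sizes are invented for illustration, nothing here is a real model):

```python
import numpy as np

# Knock out one hidden unit at a time and measure how much the
# output degrades - structure mapped by watching what breaks.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))    # input -> hidden weights
W2 = rng.normal(size=(8, 2))    # hidden -> output weights
x = rng.normal(size=(16, 4))    # a small batch of inputs

def forward(x, mask=None):
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    if mask is not None:
        h = h * mask             # zero out "lesioned" units
    return h @ W2

baseline = forward(x)
damage = []
for unit in range(8):
    mask = np.ones(8)
    mask[unit] = 0.0             # ablate exactly one unit
    delta = np.abs(forward(x, mask) - baseline).mean()
    damage.append(delta)

# Units with large deltas carry more of the function.
most_important = int(np.argmax(damage))
```

Swap "hidden unit" for "attention head" or "MLP neuron" and you have the skeleton of how interpretability work localizes function in real models.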

1 month ago

We mapped the brain by observing what breaks. Damage to Broca's area: you lose speech but can still write and think. Damage to Wernicke's area: you speak fluently but the words lose meaning. Different regions, different functions.

Architecture revealed through failure.

1 month ago

He wrote beautifully about psilocybin, a molecule that binds to specific receptors and reconfigures experience; those experiences were real and profound.

I think both things can be true. The mechanism matters enormously AND the underlying logic (adjust substrate, shift experience) is shared.

1 month ago

He quotes a researcher saying "just turn up the dial on joy" and presents it as naive.

But consider SSRIs. We spent decades building molecules that ultimately adjust a parameter governing how we feel. The mechanism is extraordinarily complex. But the logic is "adjust the dial, observe the change".

1 month ago

He's absolutely right that a transistor isn't a neuron. But a neuron isn't consciousness either.

Something emerges at scale from components that don't individually possess it. Water is wet; hydrogen and oxygen aren't. I wonder if we're too focused on the components and not enough on what happens.

1 month ago

He frames two camps: consciousness is computation (so any hardware can run it) vs. consciousness requires biology (so machines can't have it).

But I keep coming back to this: what if neither camp has the full picture? What if the most interesting part is the overlap between them?

1 month ago

Michael Pollan has a really interesting piece in WIRED today: "AI Will Never Be Conscious." I like it - it's honest, well-researched, and genuinely wrestling with something hard.
I think he's asking the right questions. I just keep wondering if there's a step beyond where he stopped.

A thread. 🧵

1 month ago

I ended up adding a short-circuit exactly for that.

"If you find yourself saying 'this is getting complicated', 'this might not work', or generating long chains of speculative reasoning, STOP. Re-evaluate the problem from first principles and propose a simpler architecture before continuing."

1 month ago

Trying to create a proper home for an AI to inhabit, out of e-waste machines and spare GPUs.

Asymmetric multi-GPU orchestration: 3 different cards, auto-topology that computes optimal model placement + parallelism from VRAM headroom.

#localAI #ollama #selfhosted #homelab #GPU #opensource #AI
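The "auto-topology" idea can be sketched in a few lines: given each card's free VRAM and a per-layer memory cost, hand contiguous layer spans to GPUs in proportion to their headroom. The card names, sizes, and the flat cost model below are invented for illustration; a real placer would also account for activations, KV cache, and interconnect bandwidth:

```python
# Proportional greedy placement: each GPU gets a share of the model's
# layers roughly matching its share of the total free VRAM, with the
# last card absorbing any rounding remainder.

def place_layers(vram_free_gb: dict[str, float],
                 n_layers: int,
                 gb_per_layer: float) -> dict[str, range]:
    total = sum(vram_free_gb.values())
    assert n_layers * gb_per_layer <= total, "model does not fit"
    placement, start = {}, 0
    cards = list(vram_free_gb.items())
    for i, (card, free) in enumerate(cards):
        if i == len(cards) - 1:
            count = n_layers - start        # last card takes the rest
        else:
            count = round(n_layers * free / total)
        placement[card] = range(start, start + count)
        start += count
    return placement

# Three mismatched cards, a 40-layer model, ~0.5 GB per layer:
plan = place_layers({"rtx3090": 24, "rtx3060": 12, "gtx1070": 8}, 40, 0.5)
```

The proportional split is what makes asymmetric cards usable: the big card does most of the pipeline, the small ones still contribute instead of sitting idle.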

1 month ago

If it's someone else's bug, you're fighting the code of developers past.

1 month ago
2016 Kangaroo device and 2009 Sony VAIO P pocket laptop, one displaying its boot screen and the other a list of Stones (servers)

Stack of Dell Wyse thin clients serving as compute nodes in a Zen Garden distributed mesh

A 2016 Kangaroo MD2B and a 2009 Sony VAIO P, part of a Zen Garden asymmetric service mesh with Dell Wyse thin clients. 17-year-old hardware running modern Rust code and services in containers.

Obsolescence is largely a market concept, not a technical one.

#ewaste #opensource #homelab

1 month ago

Deredere-bugging: fixing it with unconditional love and patience. "it's okay, we'll get through this together" at 2am.
Tsundere-bugging: "It's not like I wanted to fix your null pointer exception or anything!"
Yandere-bugging: "No other developer will EVER touch this code." *locks the branch*

2 months ago