
Posts by Matthew Larkum

But now there are two kinds of “nothing”. With green light, the “feedback replay” doesn't need to do anything. If we simply turn the replay device off, it “can’t” do anything. According to theories that depend on causality (e.g. IIT), the two kinds of nothing are fundamentally different.

10 months ago

A computational functionalist must decide:
Does consciousness require dynamic flexibility and counterfactuals?
Or is a perfect replay, mechanical and unresponsive, still enough?


So we ask: is consciousness just the path the system did take, or does it require the paths it could have taken?


In Turing terms: for the same input, the same state transitions occur. But if you change the input (e.g. shine red light), things break. Some states become unreachable. The program is intact but functionally inert. It can’t see colours anymore. Except arguably green - or can it?
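In toy code (a hypothetical transition table, not the paper's machine), the point looks like this: the full program contains both "green states" and "red states", but a green run only ever visits the green ones, so a replay of that run can never reach the red states, however intact the program text is.

```python
# Toy sketch (hypothetical rules): "green states" g0/g1 handle green
# input, "red states" r0 handle red input. The red rules are intact,
# but a green run never visits them.

rules = {
    ("g0", "G"): ("g1", "G", "R"),   # green states: used on green input
    ("g1", "G"): ("g0", "G", "R"),
    ("g0", "R"): ("r0", "R", "R"),   # red states: intact in the program
    ("r0", "R"): ("r0", "R", "R"),   # but never reached on green input
}

def states_visited(rules, tape, state, max_steps):
    """Run the machine and return the set of states it passed through."""
    visited, head = {state}, 0
    for _ in range(max_steps):
        symbol = tape.get(head, "_")          # blank cells read as "_"
        if (state, symbol) not in rules:
            break                              # no rule applies: halt
        state, written, move = rules[(state, symbol)]
        tape[head] = written
        visited.add(state)
        head += 1 if move == "R" else -1
    return visited

green_run = states_visited(rules, {0: "G", 1: "G"}, "g0", 10)
red_run = states_visited(rules, {0: "R", 1: "R"}, "g0", 10)
# the green run never touches r0; a replay of it couldn't either
```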


For congruent input (here, the original green light), no corrections are needed. The replay “does nothing”. Everything flows causally just as before. Same input drives the same neurons to have the same activity for the same reasons. If the original system was conscious, should the re-run be, too?


Back to the new extension of the thought experiment, where we add a twist: “feedback replay”. As with patch clamping a cell, the system now monitors the activity of the neurons and intervenes only if needed.
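A minimal sketch of the feedback-replay idea (illustrative code, not the authors' implementation): let the system run freely, compare each step against the recording, and clamp the state only on divergence.

```python
# Sketch of "feedback replay": monitor each step and intervene, like a
# patch clamp, only when activity diverges from the recording.

def feedback_replay(step, recording, state):
    """Run `step` freely; clamp to the recording only on divergence."""
    interventions = 0
    for expected in recording:
        state = step(state)        # let the system evolve on its own
        if state != expected:      # divergence from the recording?
            state = expected       # clamp to the recorded value
            interventions += 1
    return state, interventions

# Congruent input: the system reproduces the recording by itself,
# so the replay device "does nothing".
recording = [1, 2, 3, 4]
final, n = feedback_replay(lambda s: s + 1, recording, 0)
# n == 0: no corrections were needed
```

With an incongruent dynamic (say a system stuck at 0), every step diverges and the device must clamp every value, which is the other extreme the thread contrasts.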


Could the head be feeling something? Is it still computation?


In the original thought experiment, we imagined “forward replay”. Here, the transition function (the program) is ignored, which amounts to a “dancing head”. This feels like a degenerate computation (cf. the Unfolding argument: doi.org/10.1016/j.co...).
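A toy sketch of what “forward replay” amounts to (hypothetical code, assuming a trace of recorded (s, t, w, m) tuples): the transition function is never consulted; the head just re-issues the recorded writes and moves.

```python
# Illustrative sketch of "forward replay": the program (transition
# function) is ignored entirely; we only re-issue the recorded writes
# and head moves, a "dancing head" with no computation driving it.

def forward_replay(trace, tape):
    head = 0
    for s, t, w, m in trace:    # states s, t are recorded but unused
        tape[head] = w          # re-issue the recorded write
        head += 1 if m == "R" else -1
    return tape

# Hypothetical recorded trace of (state, transition, write, move):
trace = [("g0", "g1", "G", "R"), ("g1", "g0", "G", "R")]
tape = forward_replay(trace, {})
# the tape ends up exactly as in the original run, with no computation
```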

[Image: A standard Turing Machine cartoon showing the "green states" that the algorithm uses to compute green, and "red states" that are only necessary for seeing red. Additionally, a recording device recording 4 values, the current state, the state transition, what the head writes, and how the head moves (s, t, w, m), for each step.]


To analyze this, we model it with a Universal Turing Machine. Input: “green light.” The machine follows its transition rules and outputs “experience of green.” At each step we record four values: the current state, the state transition, what the head writes, and how the head moves (s, t, w, m).
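As a toy illustration (hypothetical rules and symbols, not the paper's machine), here is what recording (s, t, w, m) at each step might look like for a tiny deterministic machine:

```python
# Minimal sketch: run a tiny deterministic Turing machine and log
# (s, t, w, m) at each step: current state, state transition, symbol
# written, head move.

def run_and_record(rules, tape, state, max_steps):
    trace, head = [], 0
    for _ in range(max_steps):
        symbol = tape.get(head, "_")         # blank cells read as "_"
        if (state, symbol) not in rules:
            break                             # no rule applies: halt
        t, w, m = rules[(state, symbol)]      # transition, write, move
        trace.append((state, t, w, m))        # record (s, t, w, m)
        tape[head] = w
        head += 1 if m == "R" else -1
        state = t
    return trace

# Toy "green states" that just scan green input:
rules = {("g0", "G"): ("g1", "G", "R"),
         ("g1", "G"): ("g0", "G", "R")}
trace = run_and_record(rules, {0: "G", 1: "G", 2: "G"}, "g0", 10)
# trace now holds one (s, t, w, m) tuple per step of the run
```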




Then we replay it back into the same neurons. The system behaves identically. No intervention needed. So: is the replayed system still conscious? If everything unfolds the same way, does the conscious experience remain?


We record the entire sequence of what happens when “seeing green”. Then we replay it back into the same simulated neurons. If the computational functionalist is right, this drives the “right” brain activity for a first-person experience.


Now, imagine a person looking at a green light. If the computational functionalist is right, the correct brain simulation algorithm doesn't just process green, it experiences green. Here, we start by assuming some deterministic algorithm can simulate all crucial brain activity.

Does brain activity cause consciousness? A thought experiment The authors of this Essay examine whether action potentials cause consciousness in a three-step thought experiment that assumes technology is advanced enough to fully manipulate our brains.

This extends a thought experiment from our earlier paper: doi.org/10.1371/jour...
We (Albert Gidon and @jaanaru.bsky.social) asked: does brain activity cause consciousness, or is something essential lost when the brain's dynamics are bypassed?

Frontiers | Does neural computation feel like something? Artificial neural networks are becoming more advanced and human-like in detail and behavior. The notion that machines mimicking human brain computations migh...

Does neural computation feel like something? In our new paper, we explore a paradox: if you replay all the neural activity of a brain—every spike, every synapse—does it recreate conscious experience?
🧠 doi.org/10.3389/fnin...
