This is an experimental thread on neural control systems, specifically LFPs/spiking/ephapses and saccades. I am writing this content as a WIP because I am worried it will get lost amid career distractions and lack of support.
Let's begin with the two major papers I think are most pertinent right now. 1/n
Posts by Ebben
Text Shot: The problem is that “AI agent” means different things depending on who you ask. ChatGPT’s agent is a browser-based tool. Claude Code operates on the command line. Microsoft Copilot “agents” are closer to OpenAI’s custom GPTs or Google Gemini Gems, essentially self-contained instructions rather than fully fledged semi-autonomous bots. So, alongside my definition, I’ve created a taxonomy of agentic AI that I think helps to clarify things.
A Taxonomy of Agentic AI leonfurze.com/2026/03/30/a-t… #AI #agents
Can you share more of the output? What are your impressions?
By more abstract do you mean more general or less grounded? Philosophy is less grounded. But most importantly it is also more restricted to traditional philosophical language constructs as you described.
Yes, I've come to a very similar conclusion, but as an outsider, a retired cognitive science person. I've been trying to articulate what you said, so congrats, well done. But it still isn't clear - what distinguishes naturalistic philosophy from theoretical science?
Mike Levin's lab has a lot of material about this, but I don't see him mentioned. Do check out his lab's work. I'm just taking a quick browse through my feed, so maybe I missed it.
You said, "a property is intrinsic if an object can have it independently of all other existents"
Another argument: properties are information-bearing, and thus dependent on existents. Without information, objects and properties would be void, meaningless, non-interacting.
The thesis is self-contradictory on the face of it. "Intrinsic properties" is a language construct (a definition), thus relative. Sorry, metaphysics is a religion, a person's imagination. Sooner or later you have to get empirical. Otherwise it's just mystical word play.
A couple papers you may want to look at:
The Architecture of Meaning: A Developmental-Naturalistic Account of Consciousness Without a Hard Problem
docs.google.com/document/d/1XV…
The Emancipation of the drone: A Dennettian thought experiment
www.tandfonline.com/doi/full/10.10…
I've already mentioned that "knowledge" is another bad philosophical concept because it confounds the abstract and embedded senses. Now I see that "intentionality" is also a terribly confused philosophical concept.
Yes, I've read about Nagel's criteria for ages. To me it is more evidence of some philosophers failing to understand that traditional natural language concepts are not a good basis for grounding out psychological concepts.
In a way, neurons are interpreters. Neuro-chemical processes too. Brain operations interpret and "coarse-grain" things at various levels of abstraction. Philosophical and ordinary language sometimes reify things at the wrong level of abstraction.
Then there is Mike Levin's research, which puts another twist on bio information. There are several YouTube vids; this is just one of them. Some vids show demos of lab experiments that are curiously amazing. Can't point to a particular thing:
www.youtube.com/watch?v=UUg1uY…
A couple more things to check out. Miller on new ideas in neural dynamics (information riding on electric waves) and Howard Pattee on a fundamental notion of information.
www.youtube.com/watch?v=ie58Uj…
www.researchgate.net/publication/25…
There were a couple or three papers recently that would help. Here's the only one I can find at the moment.
buildcognitiveresonance.substack.com/p/a-challenge-…
Differences: bio vs electronic --> analog, continuous, phase-relation dependencies, information carried in the time domain, redundant, multiplexed, reentrant, and a few other information-processing tricks that biology has.
Conceptualizing interpreters gets recursive and complicated real fast. It's interpreters all the way down. So it's easier to think of it in information-processing terms. But people confuse that with familiar computers. Bio information processing is different from fixed binary flip-flop silicon.
Also, if you're not already, keep an eye on Mike Levin's work, which is showing cognitive continuity all the way down to the level of cellular components: strange but plausible general principles.
You mentioned studying the structure of phenomenal experience. That's fair game. But we're close to understanding the biological connection to basic consciousness.
Btw, keep an eye on Earl K Miller's neuro lab. Some interesting new sort of paradigm going on there about brain waves.
I didn't say that well. The issue I have is that no matter what you say, they cling to the idea that "what it is like to be" is the criterion. I find that nonsense. I know what it's like to be a bat to some degree, since it is a fellow mammal and I have learned my empathic skills are fair.
Umm, sort of an observation, but more like an inference or feeling of an observation. Anyway, the problem is you can't disprove or corroborate it, or however that Quinean or Popperian criterion goes.
Just one note: "sign" means several things. A regular sign, or an observation that shows a "sign of" something, an "indication of" it. So it's not necessarily an object perceived but also a situation perceived, and it blends into the notion of "representation," or re-presentation.
Right, interpretation as a function is probably distributed throughout a complex process. Raw and/or encoded signals and local memory context are inputs to interpretive functions.
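One toy way to picture that distribution: interpretation as a chain of sub-processes, each taking the current signal plus a local memory context. This is a purely hypothetical sketch; the stage names, codebook, and association data are all made up for illustration.

```python
from functools import reduce

# Hypothetical sketch: interpretation distributed over a chain of stages,
# each consuming the (partially interpreted) signal plus local memory
# context, then passing its output downstream.

def decode(signal, memory):
    """Stage 1: map a raw code to a symbol using a local codebook."""
    return memory["codebook"].get(signal, signal)

def contextualize(symbol, memory):
    """Stage 2: enrich the symbol with what memory associates with it."""
    return f"{symbol} ({memory['associations'].get(symbol, 'no association')})"

def interpret(signal, memory, stages=(decode, contextualize)):
    """Run the signal through every interpretive sub-process in order."""
    return reduce(lambda s, stage: stage(s, memory), stages, signal)

memory = {
    "codebook": {"0x2A": "smoke"},
    "associations": {"smoke": "fire"},
}
print(interpret("0x2A", memory))  # smoke (fire)
```

The point of the sketch is only that no single stage "does" the interpreting; the function is smeared across the pipeline, with memory as a shared input.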
The SEP article on intentionality is really complex. Most everything is deemed controversial. So, using the term to support a claim that an argument is circular doesn't really do it justice. And phenomenologists seem downright non-empirical, mystical. You can't convince them, lol.
Ok, an encoding of content is the vehicle, or the sign in Peirce's model. "Intentionality" seems to be a pretty awkward formulation of the general concept being explained. I thought the connection to Peirce is an insight, I wonder if that is also well known. Thanks for your help.
What is the standard representational view?
Couldn't find "content/vehicle" in the SEP article, though I thought I saw it on first reading. What's the connection to intentionality?
Peirce's semiotic triad can be mapped to intentionality.
1. state of the world (real/imagined)
2. sign or indication of the state of the world
3. interpreting process that infers the state of the world from the sign.
Intentionality:
1. something in the world
2. mental state about something
3. mental processing
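The two triads line up term by term. Here is a minimal illustrative rendering of the correspondence, with a toy "interpretant" inferring a world state from a sign; all names and data are hypothetical, not a standard formalization.

```python
# Toy rendering of the Peirce-triad <-> intentionality mapping.

TRIAD_MAP = {
    "object":       "something in the world",        # 1. state of the world (real/imagined)
    "sign":         "mental state about something",  # 2. sign/indication of that state
    "interpretant": "mental processing",             # 3. process inferring state from sign
}

def interpretant(sign: str, memory: dict) -> str:
    """Toy interpreting process: infer a state of the world from a sign."""
    return memory.get(sign, "unknown state")

# Usage: dark clouds are a sign; the interpretant infers rain from them.
memory = {"dark clouds": "rain", "smoke": "fire"}
print(interpretant("dark clouds", memory))  # rain
```

Note that the inference depends on local memory context: with no matching entry, the interpretant returns "unknown state" rather than a world state.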
Thanks, I ordered from the author; I'll wait to see if that works.
I'll look into Marletto's book, sounds interesting.
This comment is what triggered my interest. I'm not sure I'm parsing your comment right, but I think many sub-processes are intentional, so there is a hierarchy of intentionality. So it's not circular but hierarchically composed: intentional processes composed of intentional sub-processes.
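That hierarchical (rather than circular) composition can be sketched as a tree of processes, each with its own "aboutness." The class and example targets below are hypothetical, just to show composition without circularity.

```python
# Hypothetical sketch: intentionality as hierarchical composition.
# A process is "about" something, and may be built from sub-processes
# that are each about their own sub-targets.

class Process:
    def __init__(self, about, subprocesses=()):
        self.about = about                      # what this process is "about"
        self.subprocesses = list(subprocesses)  # its intentional sub-processes

    def aboutness(self, depth=0):
        """Flatten the hierarchy of aboutness, top-down, with indentation."""
        lines = ["  " * depth + self.about]
        for sub in self.subprocesses:
            lines.extend(sub.aboutness(depth + 1))
        return lines

vision = Process("the visual scene", [
    Process("edges and contrast"),
    Process("object identity"),
])
print("\n".join(vision.aboutness()))
```

The recursion bottoms out at sub-processes with no children, so the composition is a finite hierarchy, not a circle.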
Consider software like Apple's creative suite. Does the package of apps qualify as an intentional process, since it is designed around, and represents, aboutness: about spreadsheets, about video, about documents, etc.?