My comment on Filippo Torresan & @manuelbaltieri.bsky.social's "Disentangled representations for causal cognition" in Physics of Life Reviews:
www.sciencedirect.com/science/arti...
I argue that there is little meaningful analogy between learning from "pixels" vs "experience," but I praise
Posts by Manuel Baltieri
Shocking
Two hypercube-shaped category-theoretic diagrams, each covered with an unreadable mess of labels.
Igor, you legend. Don't stop being you.
There are ten more of these unreadable hypercube diagrams on the following pages....
Source: https://arxiv.org/abs/2505.00682
My experience applying for retractions at Elsevier.
I've looked at paper mills since 2019 and drafted a preprint on a paper mill from an international publisher in 2021. I started contacting journals or research integrity teams to raise concerns about papers. Publishers react differently. 1/n
Great talk by @manuelbaltieri.bsky.social!
"We discuss the problem of running today’s software decades, centuries, or even millennia into the future" tinlizzie.org/VPRIPapers/t...
Preprint time:
“AI in a vat: Fundamental limits of efficient world modelling for agent sandboxing and interpretability”
arxiv.org/abs/2504.04608
Exploring the fundamental limits that shape the design space of world modelling for agent sandboxing and interpretability
GOL in GOL in HOL: Verified circuits in Conway's game of life. ~ Magnus O. Myreen, Mario Carneiro. arxiv.org/abs/2504.00263 #ITP #HOL4
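For readers who want the base rule the verified circuits are built on: a minimal sketch of one Game of Life update step (standard B3/S23 rule; the function name and set-of-cells encoding are mine, not from the paper).

```python
# One update of Conway's Game of Life on a set of live cells.
# Toy reference implementation of the standard B3/S23 rule.
from collections import Counter

def gol_step(live):
    """`live` is a set of (x, y) coordinates of live cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# A blinker oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
assert gol_step(gol_step(blinker)) == blinker
```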
Great to see a colleague speaking up, sad to think about the state of affairs.
I don't want to delete anything. I simply agree with Barbieri's distinction and claim that for a successful syntactic relationship, there is no need for anticipation or computation.
On that level, the cell is a simple reliable #state machine (transducer) with no place for interpretation of meaning.
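In case "state machine (transducer)" is unfamiliar: a Mealy machine maps each (state, input) pair to an output and a next state by pure lookup, with no interpretation step. A toy sketch (illustrative only, not a biological model):

```python
# A minimal Mealy machine (finite-state transducer): each input symbol
# deterministically yields an output and a next state -- a reliable
# lookup, with no "interpretation of meaning" anywhere.

def run_mealy(trans, state, inputs):
    """trans maps (state, input) -> (output, next_state)."""
    outputs = []
    for sym in inputs:
        out, state = trans[(state, sym)]
        outputs.append(out)
    return outputs, state

# Example: a parity transducer over bits, emitting the running parity.
parity = {
    ("even", 0): (0, "even"), ("even", 1): (1, "odd"),
    ("odd", 0): (1, "odd"),  ("odd", 1): (0, "even"),
}
outs, final = run_mealy(parity, "even", [1, 1, 0, 1])
# outs == [1, 0, 0, 1], final == "odd"
```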
📌
One of the most controversial corollaries of relational biology is the impossibility of simulating life. But what if I tell you that this claim is simply the result of misinterpreting Robert Rosen's ideas?
#complexitycat 🐈⬛👇🧵1/3
amahury.github.io/posts/trilog...
Re the Tononi paper: Both Tononi’s IIT (phi) and Friston’s FEP start from fundamental, axiomatic, and debatable assumptions. These assumptions are generally made without any humility, and this logic allows them to make exceptionally broad claims, which contributes to my unease about them.
Directed wiring diagrams for Mealy machines!
📌
Only just learning about this now -- I guess for a while people have predicted that the AI doomer rationalist crowd would turn violent, so it's not surprising in some sense. Still, odd times!
www.theguardian.com/global/ng-in...
Secondly, we discuss how this form of Bayesian filtering is quite simplistic, 1) not making full use of Bayesian updates by ignoring observations from the environment/plant, and 2) assuming that beliefs of equicredible states of the environment are disjoint (they form a partition).
16/16
Importantly, this makes use of the fact that we have a Markov category, Rel^+, of possibilistic Markov kernels that can be used to specify beliefs as (sub)sets without assigning them probabilities, but that works very much like other “nice” Markov categories.
15/
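A rough intuition for the possibilistic kernels mentioned above: a kernel sends each state to a *set* of possible next states (i.e. it is a relation), and kernels compose by taking unions of images. A toy encoding (my own sketch; the names and representation are not from the paper):

```python
# Possibilistic "Markov kernels" as maps state -> set of states (relations):
# beliefs are subsets of the state space, with no probabilities attached.
# Illustrative sketch of the Rel^+ idea only, not the paper's formalism.

def compose(k2, k1):
    """Relational composition: first apply k1, then k2."""
    return lambda x: {z for y in k1(x) for z in k2(y)}

def push(kernel, belief):
    """Push a belief (a subset of states) forward through a kernel."""
    return {y for x in belief for y in kernel(x)}

# Example: a nondeterministic walk on the integers.
step = lambda x: {x - 1, x + 1}
two_steps = compose(step, step)
# two_steps(0) == {-2, 0, 2}; push(step, {0, 3}) == {-1, 1, 2, 4}
```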
We then show how this corresponds to a Bayesian filtering interpretation for a reasoner: how a controller modelling its environment can be understood as performing Bayesian filtering on its environment.
14/
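For contrast with the possibilistic version, here is the textbook probabilistic filter (predict + update) on a discrete state space. The two-state numbers below are made up for illustration:

```python
# A textbook discrete Bayesian filter: predict with the transition kernel,
# then update by reweighting with the observation likelihood (Bayes' rule).
# Standard HMM filtering; states, kernels and numbers are invented.

def predict(belief, trans):
    """belief: dict state -> prob; trans: state -> dict next_state -> prob."""
    out = {}
    for x, p in belief.items():
        for y, q in trans[x].items():
            out[y] = out.get(y, 0.0) + p * q
    return out

def update(belief, likelihood, obs):
    """Bayes update: reweight by the likelihood of obs, then normalise."""
    unnorm = {x: p * likelihood[x][obs] for x, p in belief.items()}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

# Two-state example: sticky dynamics, noisy observations.
trans = {"a": {"a": 0.9, "b": 0.1}, "b": {"a": 0.1, "b": 0.9}}
like = {"a": {"A": 0.8, "B": 0.2}, "b": {"A": 0.2, "B": 0.8}}
belief = {"a": 0.5, "b": 0.5}
belief = update(predict(belief, trans), like, "A")
# belief["a"] is 0.8 after one predict/update cycle
```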
Firstly, we show that the definition of a model between two autonomous systems can be “reversed” to build a “possibilistic” version of the internal model principle.
13/
After a reasonably self-contained overview of string diagrams for Markov categories, and some definitions including Bayesian inference/filtering and their parametrised and conjugate-prior versions, we dive into the main result, showing two main things.
12/
In the second part of the paper, we use results from a recent line of work (link.springer.com/chapter/10.1...) started by some of my collaborators on how to interpret a physical system as performing Bayesian inference, or filtering, using the language of Markov categories.
11/
Our focus here is mostly technical and has to do almost entirely with control theory, but considering where the conversation started on the other platform, I hope this will also have an impact in the cognitive and life sciences.
10/
This is often taken to be 1) a better formalisation of Conant&Ashby’s good regulator “theorem”, and 2) the reason why talking about “internal models” is necessary in cognitive science, AI/ML/RL, biology and neuroscience.
9/
The internal model principle is arguably one of the most influential outputs of control theory, claiming, at its core, that if a controller regulates a plant against disturbances from the environment, it does so by implementing a model of the environment.
8/
We define “models” for non-autonomous (fully observable) systems, generalising the original definition for autonomous systems (but focus on the latter). We think of this as generalising aspects of lumpability, state aggregation, coarse grainings, dynamical consistency, etc.
7/
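One of the classical notions mentioned above, dynamical consistency of a coarse-graining, can be checked concretely for a deterministic map: a partition is consistent when each block maps entirely into a single block, so the dynamics descend to the quotient. A toy sketch (my own encoding, not the paper's definition):

```python
# Dynamical consistency of a partition under a deterministic map f:
# every block must map (elementwise) into exactly one block, so that
# f induces a well-defined map on the quotient. Toy illustration.

def block_of(partition, x):
    """Index of the block of `partition` containing x."""
    return next(i for i, b in enumerate(partition) if x in b)

def is_consistent(f, partition):
    """True iff each block's image lands in a single block."""
    return all(
        len({block_of(partition, f(x)) for x in block}) == 1
        for block in partition
    )

# Example: f(x) = x + 1 mod 6. The even/odd partition is consistent,
# but {0,1,2}/{3,4,5} is not (2 -> 3 crosses blocks while 0 -> 1 stays).
f = lambda x: (x + 1) % 6
assert is_consistent(f, [{0, 2, 4}, {1, 3, 5}])
assert not is_consistent(f, [{0, 1, 2}, {3, 4, 5}])
```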
We review the original work by Wonham and collaborators, and unpack some of its implicit assumptions, finding that at least one of them requires more attention (we also have a result that doesn’t require it, and may end up in a revised version or a future work).
6/
In the first part, we review and reformulate the “internal model principle” from control theory (at least, one of its versions) in a more modern language heavily inspired by categorical systems theory (www.davidjaz.com/Papers/Dynam..., github.com/mattecapu/ca...).
5/
In this work, we focus on two specific definitions of models, and show their connections. One is inspired by work in control theory, and one comes from Bayesian inference/filtering for cognitive science, AI and ALife, and is formalised with Markov categories.
4/