1/ As AI agents become increasingly capable, what must *inevitably* emerge inside them?
We prove selection theorems: strong task performance forces world models, belief-like memory and—under task mixtures—persistent variables resembling core primitives associated with emotion.
Posts by Tom Ringstrom
What drives behavior in living organisms? And how can we design artificial agents that learn interactively?
📢 To address these, the Sensorimotor AI Journal Club is launching the "RL Debate Series"👇
w/ @elisennesh.bsky.social, @noreward4u.bsky.social, @tommasosalvatori.bsky.social
🧵[1/5]
🧠🤖🧠📈
What's the quote?
You should always give in to these impulses, IMO.
Looks cool! Heads up, my collaborators and I derived the state-action Linearly Solvable MDP a while back. You might be interested: arxiv.org/pdf/2007.02527
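For context on why linearly solvable MDPs are appealing: in the Todorov-style formulation, the Bellman equation becomes linear in the desirability z = exp(-v), so a first-exit value function drops out of a simple fixed-point iteration rather than a nonlinear max. A minimal sketch (the toy chain, costs, and passive dynamics below are made up for illustration, not taken from the linked paper):

```python
import numpy as np

# Hypothetical 4-state chain; state 3 is terminal.
q = np.array([1.0, 1.0, 1.0])   # per-step costs at interior states 0..2
qT = 0.0                        # terminal cost
P = np.array([                  # passive dynamics: interior rows over all 4 states
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
])

z = np.ones(4)
z[3] = np.exp(-qT)              # terminal desirability is fixed
for _ in range(1000):           # linear fixed-point iteration on interior states
    z[:3] = np.exp(-q) * (P @ z)

v = -np.log(z)                  # optimal (first-exit) value function
```

Because exp(-q) < 1 and P is row-stochastic, the update is a contraction, so the iteration converges; v then decreases monotonically toward the terminal state in this chain.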
First time seeing a wild giraffe. Just chillin’ on the side of the road.
Stoffel lives at an animal rehabilitation center near Kruger National Park and is an expert escape artist. But he is 26 now (they only live an average of 8 years in the wild) so he spends most days snuggling with his girlfriend Hammie. BBC show: m.youtube.com/watch?v=c36U...
They say don’t meet your heroes, but I traveled to South Africa and met mine and it was worth it. Stoffel the Honey Badger became a major inspiration for my PhD thesis when my advisor showed our lab a BBC show on clever animals who can solve long horizon tasks, presumably for abstract reasons.
And I've always wondered how this works with his Constructor theory.
By the way, while Deutsch doesn't have a deeply rigorous decision theory to match his views, I did once hear him say (on a podcast I can't seem to find) that he regards value as equivalent to the space of possible transformations one can make, which is to a close approximation what empowerment is.
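For readers unfamiliar with empowerment: with deterministic dynamics it reduces to the log of the number of distinct states reachable by n-step action sequences, which is exactly the "space of possible transformations" reading. A toy sketch (the gridworld, sizes, and function names are purely illustrative):

```python
import math

# Toy deterministic gridworld. For deterministic dynamics, n-step
# empowerment equals log2 of the number of distinct endpoints of
# length-n action sequences.
SIZE = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(state, action):
    # Move one cell, clamping at the walls.
    return (min(max(state[0] + action[0], 0), SIZE - 1),
            min(max(state[1] + action[1], 0), SIZE - 1))

def empowerment(state, n):
    # Set of endpoints of all length-n action sequences (a set
    # suffices because the dynamics are deterministic).
    frontier = {state}
    for _ in range(n):
        frontier = {step(s, a) for s in frontier for a in ACTIONS}
    return math.log2(len(frontier))

# The center affords more distinct outcomes than a corner:
# empowerment((2, 2), 1) == 2.0; empowerment((0, 0), 1) ≈ 1.58
```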
I've been thinking recently about Bostrom's notion of instrumental convergence, and what would entail instrumental divergence. There's an obvious sense in which infinitely large time horizons and infinitely small relevant probabilities play a role in washing out potential differences.
Deutsch's emphasis on universal explainers is a better (though incomplete) alternative and has nothing to do with emulation (he never talks about the normative part of why one should want to explain something)
Yeah, their emphasis on emulation is frustrating (for somewhat similar reasons to the recent Jaeger/Vervaeke paper). AI-by-learning being intractable is not interesting. It doesn't imply anything about the intractability of creating generally intelligent systems.
Related:
bsky.app/profile/nore...
Unfortunately for SlipFrosty, a theory of instrumental intelligence is inseparable from a theory of normative intelligence. Abolish the value function!
My favorite interview from the past year, of philosopher Pete Wolfendale. Recommended to anyone interested in AI or in the relationship between value, aesthetics, and ethics, or to anyone who wants a reason to abandon "rationality as Bayes + Utility".
www.youtube.com/watch?v=0xMc...
IMO, a problem with RL is that, in sparse-reward problems, value functions don't have a general decomposition over high-dimensional transition kernels, so people end up learning neural-net approximations to difficult-to-generalize functions from a lot of experience.
Fun ep.
Exactly :)
@denizrudin.bsky.social Deniz, did your grandma and grandpa call you baby Rudin?
Love that Fog Lake song.
Stellar new work led by the inimitable James Whittington in Neuron that develops a new theory unifying episodic and working memory and explains diverse hippocampal and prefrontal data: www.cell.com/neuron/fullt... w/ Will Dorrell, @behrenstimb.bsky.social, Mohamady El-Gaby
Starter packs are making it easier to keep constructing niches, so I made this one for people broadly interested in intrinsic motivation, with some focus on RL, neuroscience and cognitive science. Haven’t found many people on Bluesky that I wanted to add here, so help me out! go.bsky.app/TPNrnpE
I wish we could turn some of the starter-packs into a custom feed rather than following everyone.
@bsky.app Please consider this!
@saxelab.bsky.social
Which ones do you have?