Maybe the question here is what it says about peer review, even in an admittedly low-standards venue. Sort of an unintentional Sokal Affair for AI research...
Posts by Milton
We are thrilled to be returning to GECCO for a second edition of the Evolving Self-Organisation Workshop.
We are now accepting paper submissions, so come join us!
Check out our website for more info and upcoming announcements evolving-self-organisation-workshop.github.io/gecco-2026/
What a week—projects, tutorials, discussion groups, art demo & even a complexity-alife quiz! Thank you to all the attendees who brought a fantastic energy to this year's ALICE workshop.
We can't wait to see you again next year, this time in Norway 🇳🇴
Thank you to @aicentre.dk and @itu.dk !
and 4) which phenomena we should model together, given that we don't want to model them in isolation but neither can we model them all at the same time.
We agree on that. For me the disagreement is more about: 1) the relative importance of both types of tests, 2) how many conclusions you can draw from naturalistic stimuli given increased model complexity, 3) how particular phenomena manifest if at all in those stimuli 1/2
I would also agree with this. The question is: are we that much closer to achieving this? The main issue seems to be that to scale up we need models that are increasingly hard to understand. Thus we end up with not one but two different systems we have to explain, no?
The main concern on my side is that naturalistic stimuli may introduce unknown unknowns into your experimental design which throw off your conclusions. Thus you need to be extra careful. But I agree that testing a lot of these phenomena in more realistic settings is important
I agree with this, yes. The artificial stimuli are diagnostic tools. Like edge cases for algorithms in computer science; we should use them to find out what is missing. I guess the disagreement (in so far as there is one) is on the relative importance between artificial and naturalistic stimuli.
Sure, but it may give you some indication of how things should work. For example, classical experiments that illustrate Gestalt phenomena tell you something important about how visual stimuli are grouped. I don't think (and don't believe you think) these don't manifest for naturalistic stimuli.
Isn't that exactly the point? That when subjecting the system to what it doesn't normally see you are forcing the differences in processing to become more prominent?
The deadline is this Friday! You should apply to attend— it sounds like it’s going to be a blast!!! 💥
This year's ALICE guest speakers 🧑🔬
- Angel Goñi-Moreno - @angelgm.bsky.social
- Alyssa Adams - @alyssa-m-adams.bsky.social
- Alexander Mordvintsev
- Eric Medvet - @ericmedvetts.bsky.social
- Kyrre Glette - @kyrre2000.bsky.social
- Stefano Nichele - @stenichele.bsky.social
- Susan Stepney
Our (in-the-making) paper combining morphogenetic fields and neural cellular automata with @miltonllera.bsky.social, Eleni Nisioti & @risi.bsky.social got a little award at @alife2025.bsky.social #ALife2025 💮✨
Binz et al. (in press, Nature) developed an LLM called Centaur that better predicts human responses in 159 of 160 behavioural experiments compared to existing cognitive models. See: arxiv.org/abs/2410.20268
This was all @najarro.science tbh, so props to him
Excited about self-organizing systems and their likely synergies with modern approaches to AI? Then submit to the SONI special session at #ALife2025