
Posts by Milton

Maybe the question here is what it says about peer review, even in an admittedly low-standards venue. Sort of an unintentional Sokal Affair for AI research...

3 weeks ago

We are thrilled to be returning to GECCO for a second edition of the Evolving Self-Organisation Workshop.

We are now accepting paper submissions, so come join us!

Check out our website for more info and upcoming announcements evolving-self-organisation-workshop.github.io/gecco-2026/

1 month ago

What a week: projects, tutorials, discussion groups, an art demo & even a complexity-alife quiz! Thank you to all the attendees who brought a fantastic energy to this year's ALICE workshop.

We can't wait to see you again next year, this time in Norway 🇳🇴

Thank you to @aicentre.dk and @itu.dk !

2 months ago

and 4) which phenomena we should model together, given that we don't want to model them in isolation but neither can we model them all at the same time.

2 months ago

We agree on that. For me the disagreement is more about: 1) the relative importance of both types of tests, 2) how many conclusions you can draw from naturalistic stimuli given increased model complexity, 3) how particular phenomena manifest, if at all, in those stimuli 1/2

2 months ago

I would also agree with this. The question is: are we that much closer to achieving this? The main issue seems to be that to scale up we need models that are increasingly hard to understand. Thus we end up with not one but two different systems we have to explain, no?

2 months ago

The main concern on my side is that naturalistic stimuli may introduce unknown unknowns into your experimental design which throw off your conclusions. Thus you need to be extra careful. But I agree that testing a lot of these phenomena in more realistic settings is important.

2 months ago

I agree with this, yes. The artificial stimuli are diagnostic tools, like edge cases for algorithms in computer science: we should use them to find out what is missing. I guess the disagreement (in so far as there is one) is about the relative importance of artificial versus naturalistic stimuli.

2 months ago

Sure, but it may give you some indication of how things should work. For example, classical experiments that illustrate Gestalt phenomena tell you something important about how visual stimuli are grouped. I don't think (and don't believe you do either) that these phenomena fail to manifest for naturalistic stimuli.

2 months ago

Isn't that exactly the point? That when subjecting the system to what it doesn't normally see you are forcing the differences in processing to become more prominent?

2 months ago

The deadline is this Friday! You should apply to attend; it sounds like it's going to be a blast!!! 💥

5 months ago
ALICE workshop guest speakers

This year's ALICE guest speakers 🧑‍🔬

- Angel Goñi-Moreno - @angelgm.bsky.social
- Alyssa Adams - @alyssa-m-adams.bsky.social
- Alexander Mordvintsev
- Eric Medvet - @ericmedvetts.bsky.social
- Kyrre Glette - @kyrre2000.bsky.social
- Stefano Nichele - @stenichele.bsky.social
- Susan Stepney

6 months ago

Our (in-the-making) paper combining morphogenetic fields and neural cellular automata with @miltonllera.bsky.social, Eleni Nisioti & @risi.bsky.social got a little award at @alife2025.bsky.social #ALife2025 💮✨

6 months ago

Binz et al. (in press, Nature) developed an LLM called Centaur that predicts human responses better than existing cognitive models in 159 of 160 behavioural experiments. See: arxiv.org/abs/2410.20268

9 months ago

This was all @najarro.science tbh, so props to him

1 year ago

Excited about self-organizing systems and their likely synergies with modern approaches to AI? Then submit to the SONI special session at #ALife2025

1 year ago
Deep problems with neural network models of human vision | Behavioral and Brain Sciences, Volume 46 | Cambridge Core

We wrote a perspective on this two years ago now…

www.cambridge.org/core/journal...

1 year ago