Looking forward to the pendulum swing there and back again 🙂
Always fabulous debating science with you!
Posts by Shiry Ginosar
Fans of the Platonic Representation Hypothesis argue that all representations converge.
However, we find that the current experimental evidence for the PRH is surprisingly fragile.
Instead, models may develop their own “Umwelten” (distinct, model-specific views of the world).
Great to see corroborating evidence to our motion-forecasting.github.io work from other groups!
Check out this concurrent great work from
@stefanabaumann.bsky.social, @jannik-w.bsky.social, @tommymarto.bsky.social, Mahdi M. Kalayeh, and Björn Ommer (@compvis.bsky.social)
riddle for kids that combines first/third person perspectives:
where are the firefighters on the map?
I see your kids are also subscribed to Einayim! The best magazine ever! :-)
We take a step toward behavior modeling in the wild: general non-rigid motion forecasting!
The coolest part: training on diverse animal behavior is enough to generalize to completely OOD motion.
Amazing work from @neerjathakkar.bsky.social, with @carldoersch.bsky.social's technical know-how!
🚨 Can real-world human experiences shape the future of generative AI and computer vision systems? Find out at Humans of Generative AI (HuG) workshop at #CVPR2026 🚨
Accepting 1-page extended abstracts for lightning talks and posters
🗓 Deadline: April 10, 2026
See: humansofgenerativeai.github.io
💫 I am recruiting exceptional PhD students & postdocs for my lab @tticconnect.bsky.social this year!
Application details: www.ttic.edu/studentappli...
TTIC is hiring tenure track faculty! Come join us in Chicago!
I am sadly not at ICCV but Greg Shakhnarovich (Vision) home.ttic.edu/~gregory/ and Matt Walter (Robotics) home.ttic.edu/~mwalter/ are around and would be happy to chat.
@alisongopnik.bsky.social's talk happening now!!
Ballroom B
@iccv.bsky.social
With a star-studded organizing team from Google DeepMind, @ox.ac.uk, @bristoluni.bsky.social, and @tticconnect.bsky.social
Joe Heyward, @nikparth1.bsky.social, @tylerzhu.bsky.social, Aravindh Mahendran, Joao Carreira, @dimadamen.bsky.social, Andrew Zisserman, Viorica Patraucean
Guest track 2: Our one and only KiVA Challenge!!! kiva-challenge.github.io
Veo can (almost...) do it!!! can you?? video-zero-shot.github.io
With @euniceyiu.bsky.social, Anisa Noor Majhi, Maan Qraitem, Kate Saenko, @alisongopnik.bsky.social
Guest track 1: Physics IQ physics-iq.github.io/workshop/phy...
With Robert Geirhos, Priyank Jaini, Luc Van Gool, and Saman Motamed
Join us TODAY for the 3rd Perception Test Challenge perception-test-challenge.github.io @iccv.bsky.social
Ballroom B, Full day
Amazing lineup of speakers: Ali Farhadi, @alisongopnik.bsky.social, Philipp Krähenbühl, @phillipisola.bsky.social
TODAY! Artificial Social Intelligence Workshop @iccv.bsky.social
Room 317B, Full day
Social reasoning, multimodality, and embodiment!
Speakers: Evonne Ng, @tianminshu.bsky.social, @hyunwoo-kim.bsky.social, @diyiyang.bsky.social, @hokulabs.bsky.social, @michael-j-black.bsky.social
KiVA (Kid-inspired Visual Analogies) Challenge Test Phase is NOW LIVE (Sep 1–Oct 6)!
Can your model reason like a child? Can it beat adults?
🥇 $1,000
🥈 $500 each for 2 runners-up
Join/submit: t.co/zQwA1Nmohy
And join us at @iccv.bsky.social in Hawaii!! 🌴
I am giving a talk this morning at 10:40AM PST as part of the #ICML2025 Workshop on Assessing World Models.
Title: "What Do Vision and Vision-Language Models Really Know About the World?"
Come join us!
www.worldmodelworkshop.org
Join us for the 3rd Perception Test Workshop & Challenge
@iccv.bsky.social #iccv2025
*NEW* this year:
- 3 unified tracks
- novel interpretability track
- guest tracks: KiVA and Physics-IQ
- 4 world-class speakers (see pic)
Up to 50K in prizes sponsored by Google DeepMind
🧵 for details [1/4]
Held @iccv.bsky.social in conjunction with Google DeepMind's 3rd Perception Test Challenge: perception-test-challenge.github.io
Amazing speakers: Ali Farhadi, @alisongopnik.bsky.social @phillipisola.bsky.social, Philipp Krähenbühl.
Fantastically organized by @euniceyiu.bsky.social and co.!
🧠How “old” is your model?
Put it to the test with the KiVA Challenge: a new benchmark for abstract visual reasoning, grounded in real developmental data from children and adults.
🏆 Prizes:
🥇$1K to the top model
🥈🥉 $500 each
📅 Deadline: 10/7/25
🔗 kiva-challenge.github.io
@iccv.bsky.social
When it comes to goal-directed work, people prioritize controllable variability (a.k.a. empowerment!).
But in undirected play, we shift toward embracing pure variability.
Check out our forthcoming Phil. Trans. A (2026) paper!
Check out our new paper at #ICLR2025, where we show that multi-task neural decoding is both possible and beneficial.
Moreover, the latents of a model trained only on neural activity capture information about brain regions and cell types.
Step-by-step, we're gonna scale up folks!
🧠📈 🧪 #NeuroAI
🎧 Listen to the podcast!
Professor @alisongopnik.bsky.social and @newamerica.org CEO @slaughteram.bsky.social spoke with @alexis-madrigal.bsky.social about how rethinking our approach to caregiving and how we support care providers could lead to a better society.
🔗:
With Eunice Yiu, Maan Qraitem, Anisa Noor Majhi, Charlie Wong, Yutong Bai, @alisongopnik.bsky.social, and Kate Saenko. @iclr-conf.bsky.social
Think LMMs can reason like a 3-year-old?
Think again!
Our Kid-Inspired Visual Analogies benchmark reveals where young children still win: ey242.github.io/kiva.github....
Catch our #ICLR2025 poster today to see where models still fall short!
Thurs. April 24
3-5:30 pm
Halls 3 + 2B #312
Neuroscience is finally taking more and more baby steps towards running experiments at scale!
In his new book, published today, Nachum Ulanovsky calls on the field to embrace naturalistic conditions and move away from overcontrolled experiments.
#neuroskyence
www.thetransmitter.org/systems-neur...
Welcome to TTIC!! We are so excited to have you join us!!
We're very excited to introduce TAPNext: a model that sets a new state of the art for Tracking Any Point in videos by formulating the task as next-token prediction. For more, see: tap-next.github.io