New preprint: The Self Requires Learning. Self-consciousness requires continual learning + world-modeling. I introduce "bounded integration" to connect perspective, identity, and self-representation — and diagnose what current AI systems have and lack.
Full paper: mengyeren.com/research/202...
Posts by agentic learning ai lab
Latent trajectories from pretrained models are curved and zigzagged. We add a simple straightening objective that makes the latent transitions smooth and the trajectories straight.
Check out our latest research by @yingwww.bsky.social @yann-lecun.bsky.social @mengyer.bsky.social and colleagues!
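One way to read "a simple straightening objective" is as a curvature penalty on the latent trajectory: a minimal sketch (my assumption, not the paper's actual loss), which is zero exactly when consecutive latent transitions are constant, i.e. the trajectory is a straight line.

```python
import numpy as np

def straightening_loss(z):
    """Curvature penalty on a latent trajectory z of shape (T, d).

    Hypothetical sketch: penalize the second temporal difference.
    It vanishes exactly when every transition z[t+1] - z[t] is the
    same vector, i.e. the trajectory is a straight line in latent space.
    """
    v = np.diff(z, axis=0)   # transitions between consecutive latents, (T-1, d)
    a = np.diff(v, axis=0)   # change in transitions (discrete curvature), (T-2, d)
    return float(np.mean(np.sum(a ** 2, axis=-1)))

# A straight-line trajectory scores 0; a zigzag scores > 0.
straight = np.outer(np.arange(5.0), np.ones(3))
zigzag = np.array([[0.0], [1.0], [0.0], [1.0], [0.0]])
```

In training one would minimize this alongside the usual prediction or reconstruction loss, trading off smoothness against fidelity.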
Sharing my thoughts on Moltbook in a recent interview with The Independent.
Verifiers are increasingly used in RL today to provide rewards. We did a systematic study of when it is best to use LLMs to verify solutions. Check out the blog post below to learn more.
Babies learn to perceive the world and develop object and motion recognition in the early stages of life. Can a network bootstrap this understanding just by watching video? Check out the new blog post featuring our latest research on the Midway Network.
Excited to share our new research on local RL without backprop!
Lab gathering at #NeurIPS2025. Proud of this year’s work and excited about the ideas we’re building toward next!
Midway networks are cool: they jointly learn representations for motion and reconstruction. I see similar motivation in V-JEPA 2 "AC", but I really like the execution here:
- hierarchical,
- backwards features with cross-attention.
arxiv.org/abs/2510.05558
C. Hoang, @mengyer.bsky.social
NYU
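The "backwards features with cross-attention" ingredient can be pictured with a minimal single-head cross-attention sketch (my illustration, not the paper's architecture): one feature stream supplies queries, the other supplies keys and values.

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Single-head cross-attention between two feature streams.

    Hypothetical sketch: queries come from one stream (e.g. the
    backward/top-down features), keys and values from another
    (e.g. the forward features). Shapes: q_feats (Tq, d),
    kv_feats (Tk, d); returns (Tq, d).
    """
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)    # similarity, (Tq, Tk)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over keys
    return attn @ kv_feats                        # weighted sum of values

q = np.ones((2, 4))        # two query features
kv = 3.0 * np.ones((1, 4)) # a single key/value feature
out = cross_attention(q, kv)
```

With a single key/value entry, every query just returns that entry, which makes the convex-combination behavior of attention easy to check.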
Check out our latest paper on representation learning from naturalistic videos →
New research by CDS MS student Amelia (Hui) Dai, PhD student Ryan Teehan, and Asst. Prof. Mengye Ren (@mengyer.bsky.social) shows that models’ accuracy on current events drops 20% over time—even when given the source articles. Presented at #NeurIPS2024.
nyudatascience.medium.com/language-mod...