
Posts by Agentic Learning AI Lab


New preprint: The Self Requires Learning. Self-consciousness requires continual learning + world-modeling. I introduce "bounded integration" to connect perspective, identity, and self-representation — and diagnose what current AI systems have and lack.
Full paper: mengyeren.com/research/202...

2 weeks ago

Latent trajectories from pretrained models are curved and zigzagged. We add a simple straightening objective that smooths latent transitions and straightens the trajectories.
Check out our latest research by @yingwww.bsky.social @yann-lecun.bsky.social @mengyer.bsky.social and colleagues!
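For intuition, a straightening objective of this flavor can be sketched as a curvature penalty on a latent trajectory: consecutive transition vectors should point in the same direction. This is a hypothetical minimal illustration of the general idea, not the paper's exact formulation; the function name and the cosine-based loss are assumptions.

```python
import numpy as np

def straightening_loss(z):
    """Penalize curvature of a latent trajectory z with shape (T, D).

    A perfectly straight trajectory has consecutive transition vectors
    with cosine similarity 1, giving a loss near 0. Hypothetical sketch,
    not the paper's exact objective.
    """
    d = np.diff(z, axis=0)                                   # transitions, (T-1, D)
    d = d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-8)  # unit directions
    cos = np.sum(d[:-1] * d[1:], axis=1)                     # cosine of consecutive steps
    return float(np.mean(1.0 - cos))

# A straight line incurs (near) zero loss; a zigzag incurs positive loss.
line = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
zigzag = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [3.0, 1.0]])
print(straightening_loss(line))    # ≈ 0
print(straightening_loss(zigzag))  # > 0
```

In practice such a term would be added to the pretraining or fine-tuning loss over latent sequences from the model's encoder.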

1 month ago

Sharing my thoughts on Moltbook in a recent interview by The Independent.

2 months ago

Verifiers are increasingly used in RL today to provide rewards. We did a systematic study on when it is best to use LLMs to verify solutions. Check out the blog post below to learn more.

2 months ago

Babies learn to perceive the world and develop object and motion recognition early in life. Can a network bootstrap this understanding just by watching video? Check out the new blog post featuring our latest research on the Midway Network.

4 months ago

Excited to share our new research on local RL without backprop!

4 months ago

Lab gathering at #NeurIPS2025. Proud of this year’s work and excited about the ideas we’re building toward next!

4 months ago

Midway networks are cool: joint representation learning of motion and reconstruction. I see similar motivation in V-JEPA 2 "AC", but I really like the execution here:
- hierarchical,
- backwards features with cross-attention.

arxiv.org/abs/2510.05558
C. Hoang, @mengyer.bsky.social
NYU

5 months ago

Check out our latest paper on representation learning from naturalistic videos →

1 year ago
Language Models’ Prediction of Current Events Degrades Over Time, Even With Latest Information
Language models lose accuracy on predicting events over time, even with access to up-to-date information.

New research by CDS MS student Amelia (Hui) Dai, PhD student Ryan Teehan, and Asst. Prof. Mengye Ren (@mengyer.bsky.social) shows that models’ accuracy on current events drops by 20% over time, even when given the source articles. Presented at #NeurIPS2024.

nyudatascience.medium.com/language-mod...

1 year ago