
Posts by Mehdi S. M. Sajjadi

D4RT: Unified, Fast 4D Scene Reconstruction & Tracking
Meet D4RT, a unified AI model for 4D scene reconstruction and tracking.

D4RT: Teaching AI to see the world in four dimensions
deepmind.google/blog/d4rt-te...

We just released a Google DeepMind blog post on our latest work; please check it out!

The project website & tech report can be found at d4rt-paper.github.io

2 months ago

🔥 Efficiently Reconstructing Dynamic Scenes One 🎯 D4RT at a Time
d4rt-paper.github.io

Building on the SRT architecture (srt-paper.github.io), D4RT unlocks a flexible interface for Dynamic 4D Reconstruction and Tracking.

It's truly been a privilege to work with this incredibly talented team.

4 months ago

Looking forward to it!

5 months ago
Scaling 4D Representations

Self-supervised learning from video does scale! In our latest work, we scaled masked auto-encoding models to 22B params, boosting performance on pose estimation, tracking & more.

Paper: arxiv.org/abs/2412.15212
Code & models: github.com/google-deepmind/representations4d
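A minimal sketch of the masked auto-encoding objective mentioned above: hide most of the input patches, predict the hidden ones from the visible ones, and score with an L2 loss. All names and the stand-in "model" are illustrative assumptions, not the released code or architecture.

```python
import numpy as np

def masked_autoencode_loss(patches, mask_ratio=0.9, rng=None):
    """Toy masked auto-encoding step (illustrative only).

    patches: array of shape (num_patches, patch_dim).
    Most patches are masked out; the hidden ones are 'reconstructed'
    and scored with a mean squared error.
    """
    rng = rng or np.random.default_rng(0)
    n = len(patches)
    n_masked = int(n * mask_ratio)
    masked_idx = rng.choice(n, size=n_masked, replace=False)
    visible_idx = np.setdiff1d(np.arange(n), masked_idx)

    # Stand-in "encoder/decoder": predict every hidden patch as the
    # mean of the visible ones. A real model would use a transformer.
    prediction = patches[visible_idx].mean(axis=0)
    targets = patches[masked_idx]
    return float(((targets - prediction) ** 2).mean())
```

With a high mask ratio like the 0.9 used here, the model sees only a small fraction of the video, which is what makes the objective cheap enough to scale.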

9 months ago

We're very excited to introduce TAPNext: a model that sets a new state of the art for Tracking Any Point in videos by formulating the task as next-token prediction. For more, see: tap-next.github.io
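The "tracking as next-token prediction" idea above can be sketched as a plain autoregressive decoding loop: the next point location is predicted from the history so far. Everything here (the helper names, the dummy model) is a hypothetical illustration, not the TAPNext architecture.

```python
def decode_track(model_step, first_token, num_frames):
    """Toy autoregressive loop: tracking as next-token prediction.

    `model_step` maps the token history to the next location token;
    here a 'token' is just an (x, y) grid coordinate. Illustrative
    only -- a real tracker would condition on video features too.
    """
    tokens = [first_token]
    for _ in range(num_frames - 1):
        tokens.append(model_step(tokens))
    return tokens

# A dummy "model" that moves the point one cell right per frame.
def dummy_step(history):
    x, y = history[-1]
    return (x + 1, y)
```

The payoff of this framing is that standard sequence-model machinery (causal decoding, token-level losses) applies directly to point tracking.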

1 year ago
Video vs. image diffusion representations

Feature visualization for image and video diffusion

Generative video diffusion: does a model trained with this objective learn better features than one trained for image generation?

We investigated this question and more in our latest work; please check it out!

*From Image to Video: An Empirical Study of Diffusion Representations*
arxiv.org/abs/2502.07001

1 year ago

Check out @tkipf.bsky.social's post on MooG, the latest in our line of research on self-supervised neural scene representations learned from raw pixels:

SRT: srt-paper.github.io
OSRT: osrt-paper.github.io
RUST: rust-paper.github.io
DyST: dyst-paper.github.io
MooG: moog-paper.github.io

1 year ago
Viorica Patraucean on LinkedIn: Super excited to share our recent work on designing more efficient video models: TRecViT https://lnkd.in/ehh4gGbn alternates SSM blocks (LRUs) that integrate…

Authors:
Viorica Pătrăucean, Xu Owen He, Joseph Heyward, Chuhan Zhang, Mehdi S. M. Sajjadi, George-Cristian Muraru, Artem Zholus, Mahdi Karami, Ross Goroshin, Yutian Chen, Simon Osindero, João Carreira, Razvan Pascanu

Original post:
www.linkedin.com/posts/vioric...
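The SSM blocks (LRUs) mentioned in the post can be sketched as a simple gated recurrence over time. This scalar version is only a sketch under my own simplifying assumptions; TRecViT alternates blocks like this (mixing over time) with self-attention (mixing over space), and the real LRU operates on complex-valued diagonal state.

```python
def lru_scan(xs, decay=0.9):
    """Toy linear recurrent unit (LRU) over a time sequence:

        h_t = decay * h_{t-1} + (1 - decay) * x_t

    A linear recurrence like this runs in O(T) time and constant
    memory per step, which is where the efficiency over full
    spatio-temporal attention comes from. Illustrative only.
    """
    h = 0.0
    out = []
    for x in xs:
        h = decay * h + (1 - decay) * x
        out.append(h)
    return out
```

Because the recurrence is linear in the state, it can also be evaluated with a parallel scan at training time while staying strictly causal at inference.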

1 year ago
TRecViT architecture

TRecViT: A Recurrent Video Transformer
arxiv.org/abs/2412.14294

Causal, with 3× fewer parameters, 12× smaller memory footprint, and 5× fewer FLOPs than the (non-causal) ViViT, while matching or outperforming it on Kinetics & SSv2 action recognition.

Code and checkpoints out soon.

1 year ago