We are releasing the benchmark (TAPVid360-10k), code, and models. Come say hi at Poster Session 5 (Fri 11:00-14:00)!
Paper: arxiv.org/pdf/2511.21946 · Code: github.com/finlay-hudso... · Dataset: huggingface.co/datasets/fhu... · Project Page: finlay-hudson.github.io/tapvid360/
Posts by James Gardner
@fhudson.bsky.social and I are both at #NeurIPS! Come chat to us about TAPVid-360, how we adapted CoTracker3 to maintain object permanence even when things vanish completely from view, allocentric world models or other neat data gen tricks!
All 2D point tracking methods break when objects leave their field of view, and 3D point tracks for dynamic objects are intractable to obtain. In TAPVid-360, we exploit 360° videos for supervision, resampling them into narrow field-of-view videos while computing ground-truth point directions across the full panorama.
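The resampling idea boils down to relating directions on the unit sphere (known for every point across the full panorama) to pixels in a virtual pinhole camera. Here is a minimal sketch of that projection, not the paper's actual code: the square FoV, resolution, and camera-frame convention are my own assumptions.

```python
import math

def direction_to_pixel(d, fov_deg=90.0, width=512, height=512):
    """Project a unit direction d = (x, y, z) in the camera frame through a
    pinhole model with a square field of view. Returns (u, v) pixel
    coordinates, or None when the direction points behind the camera,
    which is exactly the regime where 2D trackers lose the point."""
    x, y, z = d
    if z <= 0:
        return None  # outside the narrow FoV crop: no 2D ground truth here
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length (px)
    return f * x / z + width / 2, f * y / z + height / 2

# A point straight ahead projects to the image centre.
print(direction_to_pixel((0.0, 0.0, 1.0)))  # (256.0, 256.0)
```

Because the ground truth lives on the sphere, a point that leaves one crop's FoV (returning `None` above) still has a well-defined direction, which is what lets the benchmark score trackers on out-of-view points.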
This was so much fun to work on! I love finding new sources of supervision and 360 video data has so many applications.
This is an incredible collaboration between @willsmithvision.bsky.social, @willrowan.bsky.social and myself. The whole YorkVGL team is now switching all our research code over to Neuralatex and seeing massive productivity gains because of it!
You can check out the source code (and paper!) right now on arXiv: arxiv.org/abs/2503.24187
There is so much work to do! We want this running on GPUs by summer, taping out custom Neuralatex silicon by autumn and starting the run of Neuralatex GPT by the end of the year!
We demonstrate how powerful our library is by training an MLP to 86% accuracy on the spiral dataset with only a moderate 48-hour document compile time!
And the Source Code of the Method in the Source Code of the Paper (SCOMISCOP) metric is the proportion of a method's source code that is contained within the source code of the paper itself.
We get state-of-the-art performance in both of our own brand new metrics!
The Written in Latex (WIL) metric is the proportion of the source code of a machine learning library written in LaTeX.
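Tongue-in-cheek or not, the metric is well defined. A throwaway sketch of computing it, where counting lines by file extension is my own assumption for what "written in LaTeX" means:

```python
def wil(files):
    """Written in LaTeX (WIL) metric: fraction of a library's source lines
    that live in LaTeX files. `files` maps filename -> line count."""
    latex = sum(n for name, n in files.items()
                if name.endswith((".tex", ".sty", ".cls")))
    total = sum(files.values())
    return latex / total if total else 0.0

# A library that is all LaTeX scores a perfect 1.0.
print(wil({"engine.tex": 400, "nn.tex": 250}))  # 1.0
```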
Now, as part of your LaTeX document, you can specify the architecture of a neural network, generate or load training data, and define hyperparameters and loss functions. When the document is compiled, the LaTeX compiler trains the network, runs the experiments, and creates the figures.
This is a fully functional scalar-valued autograd library, similar to Andrej Karpathy's micrograd, but in pure LaTeX!
Simply import our engine.tex and nn.tex files and start training models straight away!
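For readers who haven't met micrograd: the idea is a scalar value type that records the operations building it, then replays the chain rule in reverse. Sketched here in Python for readability (this is the generic technique, not Neuralatex's `engine.tex` internals):

```python
class Value:
    """A scalar with reverse-mode autodiff, micrograd-style."""
    def __init__(self, data, _parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = _parents
        self._backward = lambda: None  # filled in by the op that made us

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():  # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# d(x*y + x)/dx = y + 1 = 4 at x=2, y=3
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad)  # 4.0
```

The LaTeX version has to express all of this with macros and counters instead of objects and closures, which is where the 48-hour compile times come from.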
Are you tired of context-switching between coding models in @pytorch.org and paper writing on @overleaf.com?
Well, I've got the fix for you: Neuralatex, an ML library written in pure LaTeX!
neuralatex.com
To appear in Sigbovik (subject to rigorous review process)
As we approach the one-year anniversary of a T-PAMI submission still awaiting first reviews, I imagine "With Associate Editor" means they sit in the lotus position atop a Himalayan peak, our paper and the reviews in hand, as they meditate (indefinitely) on what recommendation to make.