NeurIPS 2025 Spotlight
Can we embed motion into image representations?
FlowFeat embeds optical flow into pixel-level representations, which results in sharp feature grids, especially for dynamic objects.
Project website: tum-vision.github.io/flowfeat
With Anna Sonnweber and Daniel Cremers.
Posts by Nikita Araslanov
Can we match vision and language representations without any supervision or paired data?
Surprisingly, yes!
Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences.
1/4
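The idea in the thread above can be sketched as a toy experiment: if pairwise distances are (approximately) preserved across the two modalities, then a permutation that aligns the two pairwise-distance matrices recovers the correspondence, with no paired data. The function name, the brute-force solver, and the synthetic rotated point sets below are illustrative assumptions for a tiny sketch, not the paper's actual method.

```python
import itertools
import numpy as np

def match_by_pairwise_distances(D_a, D_b):
    """Find the permutation P of modality-B items whose permuted distance
    matrix best matches modality A's. Brute force, so only feasible for
    tiny N; it just illustrates that distances alone can pin down the match."""
    n = D_a.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        P = np.array(perm)
        cost = np.linalg.norm(D_a - D_b[np.ix_(P, P)])
        if cost < best_cost:
            best_cost, best_perm = cost, P
    return best_perm

# Toy data: 2-D "image" embeddings, and a shuffled + rotated copy standing in
# for the "text" embeddings. Rotation preserves all pairwise distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
true_perm = rng.permutation(5)
Y = X[true_perm] @ R.T

D_x = np.linalg.norm(X[:, None] - X[None], axis=-1)
D_y = np.linalg.norm(Y[:, None] - Y[None], axis=-1)

# perm[i] is the index in Y matched to item i in X.
perm = match_by_pairwise_distances(D_x, D_y)
```

With noiseless, distance-preserving data the recovered permutation is exact; the interesting claim in the paper is that real vision and language embeddings preserve this distance structure well enough for the same principle to work at scale.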
Back on Track: Bundle Adjustment for Dynamic Scene Reconstruction
Weirong Chen, @ganlinzhang.xyz, @fwimbauer.bsky.social, Rui Wang, @neekans.bsky.social, Andrea Vedaldi, @dcremers.bsky.social
tl;dr: a learning-based 3D point tracker decouples camera motion from object motion
arxiv.org/abs/2504.14516
#CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation
We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!
visinf.github.io/cups
"After carefully reading the other reviews and the author response, I keep my score 'borderline'."
#CVPR2025
I am #hiring 2x #PhD candidates to work on Human-centric #3D #ComputerVision at the University of #Amsterdam!
The positions are funded by an #ERC #StartingGrant.
For details and for submitting your application please see:
werkenbij.uva.nl/en/vacancies...
Deadline: Feb 16
My group is looking for motivated PhD students who want to work on the future of digital humans.
Within the ERC project 'LeMo: Learning Digital Humans in Motion' there are two open positions:
www.career.tu-darmstadt.de/HPv3.Jobs/TU...
www.career.tu-darmstadt.de/HPv3.Jobs/TU...