We’re organizing the 5th DataCV Workshop @ #CVPR2026.
If your work focuses on data, such as bias, robustness, distribution shifts, synthetic data, or dataset analysis, we’d love to see it.
Proceedings + DataCV Challenge.
Deadline: March 10, 2026 (AOE)
sites.google.com/view/datacv-...
Beyond improved accuracy, our editing method enables richer refinements to new action variants and deeper splits of already fine-grained categories.
If a small number of examples is available, the zero-shot edit provides a strong initialization for low-shot refinement, without retraining the backbone.
The edit enables post-hoc category splitting and outperforms strong vision-language baselines, while preserving the rest of the label space.
Yes it can. Our approach derives a zero-shot edit to the classifier head by decomposing and reusing structure already encoded in the model.
No new video data required.
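For intuition only, here is a minimal PyTorch sketch of what a zero-shot classifier-head edit that splits one category could look like: the old category's weight row is replaced by several new rows, each nudged along a sub-category direction vector. This is an illustrative stand-in, not the paper's actual decomposition, and every name, shape, and the `alpha` mixing factor below is a placeholder assumption.

```python
import torch

def split_category_head(W, b, cat_idx, sub_dirs, alpha=0.5):
    """Illustrative zero-shot edit (not the paper's method): replace one
    classifier row with several sub-category rows, each shifted along a
    direction vector assumed to come from structure already in the model.
    W: [C, D] weights, b: [C] biases, sub_dirs: [K, D] unit directions."""
    keep = [i for i in range(W.shape[0]) if i != cat_idx]
    w_old, b_old = W[cat_idx], b[cat_idx]
    # Each sub-category reuses the old row and moves toward its direction.
    new_rows = w_old.unsqueeze(0) + alpha * sub_dirs
    new_bias = b_old.repeat(sub_dirs.shape[0])
    W_new = torch.cat([W[keep], new_rows], dim=0)
    b_new = torch.cat([b[keep], new_bias], dim=0)
    return W_new, b_new

# Toy usage: split category 2 of a 5-way head into 3 finer sub-categories.
W, b = torch.randn(5, 512), torch.zeros(5)
sub_dirs = torch.nn.functional.normalize(torch.randn(3, 512), dim=-1)
W_new, b_new = split_category_head(W, b, cat_idx=2, sub_dirs=sub_dirs)
print(W_new.shape, b_new.shape)  # torch.Size([7, 512]) torch.Size([7])
```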
At first glance, splitting a category sounds like a data problem: collect more videos, retrain.
We ask whether that’s actually necessary.
Can a trained classifier be refined without retraining and without video data?
We introduce a new problem: category splitting.
Given a trained classifier, the goal is to replace one category with finer-grained subcategories, while preserving performance on all others.
How flexible is a video classifier after training?
Our new #ICLR2026 paper investigates whether a category can be split into finer ones without retraining and without any videos.
arxiv.org/abs/2602.16545
Excited about detailed visual reasoning and subtle distinctions in #ComputerVision?
Only 1 week left to apply 👇
🏹 Job alert: PhD Candidate in Fine-Grained Visual Understanding at @unileiden.bsky.social
📍 Leiden 🇳🇱
📅 Apply by Feb 20th
🔗 careers.universiteitleiden.nl/job/PhD-Candidate-in-Fin...
✨PhD vacancy alert✨ Joost Batenburg and I are looking for someone who wants to work on fine-grained visual understanding in #ComputerVision
Apply here before 20 Feb:
careers.universiteitleiden.nl/job/PhD-Cand...
Tomorrow, I’ll give a talk about future predictions in egocentric vision at the #CVPR2025 Precognition workshop, in room 107A at 4pm.
I’ll retrace some history and show how precognition enables assistive downstream tasks and representation learning for procedural understanding.
Excited to be giving a keynote at the #CVPR2025 Workshop on Interactive Video Search and Exploration (IViSE) tomorrow. I'll be sharing our efforts working towards detailed video understanding.
📅 09:45 Thursday 12th June
📍 208 A
👉 sites.google.com/view/ivise2025
Have you heard about HD-EPIC?
Attending #CVPR2025?
Multiple opportunities to learn about the most highly-detailed video dataset, with a digital twin, long-term object tracks, VQA,…
hd-epic.github.io
1. Find any of the 10 authors attending @cvprconference.bsky.social
– identified by this badge.
🧵
Do you want to prove your Video-Language Model understands fine-grained actions, long videos, and the 3D world, or can anticipate interactions?
Be the 🥇st to win the HD-EPIC VQA challenge
hd-epic.github.io/index#vqa-be...
Deadline: 19 May
Winners announced @cvprconference.bsky.social #EgoVis workshop
Object masks & tracks for HD-EPIC have been released. This completes our highly-detailed annotations.
Also, the HD-EPIC VQA challenge is open [leaderboard closes 19 May]... can you be its first winner?
codalab.lisn.upsaclay.fr/competitions...
Btw, HD-EPIC was accepted to @cvprconference.bsky.social #CVPR2025
The HD-EPIC VQA challenge for CVPR 2025 is now live: codalab.lisn.upsaclay.fr/competitions...
See how your model stacks up against Gemini and LLaVA Video on a wide range of video understanding tasks.
#CVPR2025 PRO TIP: To get a discount on your registration, join the Computer Vision Foundation (CVF). It’s FREE and makes @wjscheirer smile 😉
CVF: thecvf.com
HD-EPIC - hd-epic.github.io
Egocentric videos 👩‍🍳 with very rich annotations: the perfect testbed for many egocentric vision tasks 👌
This was a monumental effort from a large team across Bristol, Leiden, Singapore and Bath.
The VQA benchmark only scratches the surface of what is possible to evaluate with this level of annotation detail.
Check out the website if you want to know more: hd-epic.github.io
VQA Benchmark
Our benchmark tests understanding of recipes, ingredients, nutrition, fine-grained actions, 3D perception, object movement and gaze. Current models have a long way to go, with a best performance of 38% vs. a 90% human baseline.
Scene & Object Movements
We reconstruct participants’ kitchens and annotate every time an object is moved.
Fine-grained Actions
Every action has a dense description covering not only what happens in detail, but also how and why it happens.
As well as annotating the temporal segments corresponding to each step, we also annotate all the preparation needed to complete each step.
Recipe & Nutrition
We collect details of all the recipes participants chose to prepare over 3 days in their own kitchens, alongside ingredient weights and nutrition information.
📢 Today we're releasing a new highly detailed dataset for video understanding: HD-EPIC
arxiv.org/abs/2502.04144
hd-epic.github.io
What makes the dataset unique is the vast detail contained in the annotations: 263 annotations per minute over 41 hours of video.
🛑📢
HD-EPIC: A Highly-Detailed Egocentric Video Dataset
hd-epic.github.io
arxiv.org/abs/2502.04144
Newly collected videos
263 annotations/min: recipe, nutrition, actions, sounds, 3D object movement & fixture associations, masks.
26K VQA benchmark to challenge current VLMs
1/N
We propose a simple baseline using phrase-level negatives and visual prompting to balance coarse- and fine-grained performance. This can easily be combined with existing approaches. However, there is much potential for future work.
Incorporating fine-grained negatives into training does improve fine-grained performance; however, it comes at the cost of coarse-grained performance.
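As a rough illustration only (not the exact recipe from our paper), phrase-level negatives can be plugged into a standard video-text contrastive objective by scoring each video against its true caption plus captions with one phrase swapped (e.g. a different verb or adverb). All names, shapes and the temperature below are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def phrase_negative_loss(video_emb, pos_text_emb, neg_text_embs, temperature=0.07):
    """Illustrative contrastive loss with phrase-level hard negatives.
    video_emb:     [B, D]    video embeddings
    pos_text_emb:  [B, D]    embeddings of the true captions
    neg_text_embs: [B, N, D] embeddings of captions with one phrase swapped."""
    v = F.normalize(video_emb, dim=-1)
    p = F.normalize(pos_text_emb, dim=-1)
    n = F.normalize(neg_text_embs, dim=-1)
    pos_sim = (v * p).sum(-1, keepdim=True)       # [B, 1] similarity to true caption
    neg_sim = torch.einsum("bd,bnd->bn", v, n)    # [B, N] similarity to phrase negatives
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(logits.shape[0], dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)

# Toy usage with random features: 4 videos, 5 phrase-swapped negatives each.
loss = phrase_negative_loss(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 5, 256))
print(loss.item())
```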