A new approach to optical flow:
🔹 Global matching via transformers
🔹 Optimal transport for flow initialization
🔹 Confidence-guided refinement to propagate reliable motion cues into ambiguous regions
🏆 SOTA on Sintel & KITTI benchmarks and zero-shot generalization on Sintel, Spring and LayeredFlow.
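For readers curious what "global matching" means here in practice: the general idea (not the FlowIt implementation, the function and all details below are my own toy illustration) is to compare every pixel of frame 1 against every pixel of frame 2 and take a softmax-weighted correspondence, so large motions need no local search window:

```python
import numpy as np

def global_matching_flow(feat1, feat2, temperature=0.05):
    """Toy dense global matching: every pixel of feat1 is softly matched
    against ALL pixels of feat2, and the flow is the softmax-weighted
    displacement to the matched coordinates.

    feat1, feat2: (H, W, C) feature maps from the two frames.
    Returns flow of shape (H, W, 2) holding (dx, dy) per pixel.
    """
    H, W, C = feat1.shape
    f1 = feat1.reshape(H * W, C)
    f2 = feat2.reshape(H * W, C)
    # normalize so the correlation is a cosine similarity
    f1 = f1 / (np.linalg.norm(f1, axis=1, keepdims=True) + 1e-8)
    f2 = f2 / (np.linalg.norm(f2, axis=1, keepdims=True) + 1e-8)
    corr = (f1 @ f2.T) / temperature          # (H*W, H*W) all-pairs scores
    corr -= corr.max(axis=1, keepdims=True)   # numerical stability
    prob = np.exp(corr)
    prob /= prob.sum(axis=1, keepdims=True)   # softmax over all target pixels
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    matched = prob @ coords                   # expected matched coordinates
    return (matched - coords).reshape(H, W, 2)
```

With discriminative features, each pixel's softmax mass concentrates on its true correspondence, so the expected coordinate recovers the displacement; the paper's optimal-transport initialization and confidence-guided refinement are additional machinery on top of this matching idea and are not sketched here.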
Posts by F. Güney
Sadra is on 🔥 this week 🧿
we present FlowIt:
Global Matching for Optical Flow with Confidence-Guided Refinement
arxiv.org/abs/2603.28759
as someone who published her first flow paper 10 years ago, I cannot believe how good these results are:
works out of the box with 3DGS, NeRF, SVRaster, and more.
the current repo includes the 3DGS experiments; NeRF and SVRaster support is coming soon.
code: github.com/sadrasafa/Wa...
project: kuis-ai.github.io/WarpRF
arxiv: arxiv.org/abs/2506.22433
https://github.com/sadrasafa/WarpRF
Sadra released the code for WarpRF 🚀
a training-free uncertainty quantification framework for radiance fields based on multi-view consistency, with no changes to the model required.
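The multi-view consistency idea can be sketched roughly: render from two views, reproject one into the other, and treat disagreement as uncertainty. Below is a hypothetical toy version in numpy (my own illustration, not the WarpRF code; the function name, the use of depth maps, and the pinhole setup are all assumptions):

```python
import numpy as np

def consistency_uncertainty(depth_a, depth_b, K, T_ab):
    """Toy multi-view consistency check for a reconstructed scene.

    depth_a, depth_b: (H, W) depth maps rendered from views A and B.
    K: (3, 3) shared pinhole intrinsics.
    T_ab: (4, 4) rigid transform from view-A to view-B camera coordinates.
    Returns per-pixel uncertainty for view A: the absolute disagreement
    between the depth that A predicts at the reprojected location in B
    and the depth B itself renders there (invalid reprojections get inf).
    """
    H, W = depth_a.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(float)
    # backproject view-A pixels to 3D points in A's camera frame
    cam_a = (np.linalg.inv(K) @ pix) * depth_a.ravel()
    cam_a_h = np.vstack([cam_a, np.ones((1, cam_a.shape[1]))])
    cam_b = (T_ab @ cam_a_h)[:3]
    z_b = cam_b[2]                        # depth that A "expects" in view B
    proj = K @ cam_b
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    unc = np.full(H * W, np.inf)
    valid = (z_b > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    unc[valid] = np.abs(z_b[valid] - depth_b[v[valid], u[valid]])
    return unc.reshape(H, W)
```

In this sketch, pixels that are visible in both views and consistently reconstructed get near-zero uncertainty, while occluded or poorly reconstructed regions disagree across views and score high; crucially, everything is computed from renders of an already-trained model, which is what "training-free" means here.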
😂😂😂
but also part of me gets defensive and wants to shout, "vision for the sake of vision!" 😅
I love seeing the boundaries between vision and robotics disappear. vision feels most meaningful when it has a purpose, and robotics offers wonderfully unforgiving failure cases 🥹😂 I will show a few of them in my talk 😊
planning to visit Zurich on March 6. let me know if you would like to grab a coffee.
I will try to convince robotics researchers that they still need computer vision at the Robotics, Vision, and Controls Talks at ETH Zurich 😄
Super cool app! I'm in the "Machine Learning Researchers and AI Scientists" cluster. My nearest neighbour is @fguney.bsky.social; we're neighbours both here and in the real world, since our countries border each other. Hi komşu (neighbour)!
while preparing for my ERC interview, I worked with a Turkish designer on my slides.
I loved his work so much that I recommended him to friends, who recommended him to others, and so on.
today he told me he’s been hired for 4 more projects.
your work really does become your reference 😇
cannot believe it worked 😅 thank you so much, I will.
I don’t know what stage of life this is, but recently, when I see something I want, I ask for it.
e.g., I asked for an invite to a prestigious robotics event in Switzerland.
most of the time it doesn’t work, but that’s okay. I only go for things where the pain of rejection is worth it.
do I have any followers* here who happen to be ELLIS program/unit directors? to become an ELLIS scholar, I need their help with the nomination 🥰
*initially I said friends but then couldn’t stop laughing at the idea of me having friends who happen to be directors 😂
Our new E2E driving method, TransFuser v6, is out on ArXiv.
It outperforms all other methods on CARLA by a wide margin: 95 DS on Bench2Drive!
We show that minimizing the asymmetry between data annotator and policy is key for strong IL results.
Code, models, and paper:
ln2697.github.io/lead/
yeah, I also think there are many opportunities for new hardware design. the book I copied the quote from mentions that the eye not only sees but also understands, based on our past experiences. this also makes seeing a very subjective experience: we don’t see the same thing when we look at an object.
do you also feel bad when you realize you’re not only good at non-research tasks, but actually enjoy them?
like… I shouldn’t like report-writing. I should be writing papers.
and yet, talking about our achievements in the first ERC reporting period feels so good!
yeah but also general focus.
with image generators today, I feel like we’re going in the exact opposite direction of this.
“Unlike a camera, which stores with equal resolution each bit of visual information, sight is highly directed. It is focused on capturing relevant information to convey meaning, not fidelity.
…
After all, I am not interested in discerning the exact details of thousands of hairs in yellow contrasting with others in black; I just want to know it is a tiger and flee quickly.”
The 4th AI for Robotics workshop surfaced converging themes around embodied perception, task learning, and evaluation methodologies, emphasising a shift to integrated, context-aware systems. Dive into our key takeaways #AI #Robotics #spatialAI
🥽 ➡️ tinyurl.com/bvxxcn5e
I have a feeling you’d enjoy this comment as much as I did 😂
I’m especially curious about the last one. tech bros already got their share of criticism in the first two lectures, but I have a feeling there’s more coming 🤭
The Reith Lectures might be my favourite thing that comes out around this time of year. I’m really enjoying this year’s series, Moral Revolution by Rutger Bregman.
• A Time of Monsters
• How To Start a Moral Revolution
• A Conspiracy of Decency
• Fighting for Humanity in the Age of the Machine
www.bbc.co.uk/programmes/a...
CC @chriswolfvision.bsky.social
ordered it! I wish more ppl shared books passionately.
glad my nightmares are entertaining to the community 😜
😂😂
I was there, David 🥹 when these test-of-time awards are given, I’m now like, “I was there when this was published.” 🥹
when you find yourself saying “if you’re old enough to know Fast R-CNN stuff” in a talk 🥹🥹