It seems that we have failed to communicate about IMC26. Let's try again.
The competition this year is here:
kaggle.com/competitions...
No prizes, but a year-round leaderboard -- similar to KITTI and other academic competitions.
3D people, please retweet and share.
Posts by Georg Bökman
For people using SuperPoint for image matching, it might be interesting to know that I added a proof-of-concept steerer for SuperPoint here (making it possible to match rotated images with only one network pass per image): github.com/georg-bn/rot...
We're looking for a new colleague at @amlab.bsky.social: Assistant Professor in AI for Science 🔬🤖
World-class ML research, Amsterdam's thriving AI ecosystem (ELLIS, startups, big tech), and some of the best academic labor conditions in Europe ❤️
Deadline: May 30 👉 werkenbij.uva.nl/en/vacancies...
Yes, I agree that this would likely accelerate learning of invariance, but I think it is unlikely to accelerate descriptor training overall, since your approach adds more network passes.
Sparse image matching is done via 1) keypoint detection in each image, 2) keypoint description, 3) matching of descriptions between images. Should rotation invariance be enforced at stage 2 or 3? Turns out both work fine! To be presented at the CVPR image matching workshop by @davnords.bsky.social
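The three stages can be sketched as follows. This is my own minimal sketch, not the workshop paper's code: the detector and descriptor are random stubs standing in for real networks (e.g. SuperPoint), so only the matching stage (mutual nearest neighbours on cosine similarity) does real work here.

```python
import numpy as np

def detect_keypoints(image, num_kp=512):
    """Stage 1 (stub): a real detector returns salient (x, y)
    locations; here we just sample random ones."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    return rng.uniform([0, 0], [w, h], size=(num_kp, 2))

def describe_keypoints(image, keypoints, dim=256):
    """Stage 2 (stub): a real descriptor network maps local patches
    to unit-norm vectors; random unit vectors stand in here."""
    rng = np.random.default_rng(1)
    d = rng.normal(size=(len(keypoints), dim))
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def match_descriptors(desc_a, desc_b):
    """Stage 3: mutual-nearest-neighbour matching on cosine similarity.
    Returns an (M, 2) array of index pairs (i in A, j in B)."""
    sim = desc_a @ desc_b.T
    nn_ab = sim.argmax(axis=1)   # best match in B for each keypoint in A
    nn_ba = sim.argmax(axis=0)   # best match in A for each keypoint in B
    mutual = nn_ba[nn_ab] == np.arange(len(desc_a))
    return np.stack([np.where(mutual)[0], nn_ab[mutual]], axis=1)
```

The question in the post is then whether rotation invariance lives inside `describe_keypoints` (invariant descriptors) or inside `match_descriptors` (a matcher that accounts for rotations between the two descriptor sets).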
You don't imagine the future by mentally rendering a movie. You trace how things move -- abstractly, sparsely, step by step.
We built a model that does exactly this. It predicts motion, not pixels -- and it's 3,000× faster than video world models.
Myriad, accepted at
@cvprconference.bsky.social
bsky.app/profile/pars...
Happy Easter! Local Feature Matching has risen! arxiv.org/abs/2604.04931
My university (Chalmers University of Technology in 🇸🇪) is recruiting an assistant professor in data-driven cell & molecular biology, funded by the DDLS program @scilifelab.se #chemsky #facultychemjobs
The position comes with a nice start-up package
www.chalmers.se/en/about-cha...
Vibe graphing
That's an understatement :)
In our recent preprint, we show in more general settings how identifiability of a network means that equivariance implies layerwise equivariance. arxiv.org/pdf/2601.21645
If f is odd, then f(x)=−f(−x)=−Wₙσ(⋯σ(−W₁x+b₁)⋯)−bₙ. So there exist signed permutations Pₘ of the neurons such that P₁W₁ = −W₁, P₁b₁=b₁, and (for 1<m<n) PₘWₘPₘ₋₁ᵀ=Wₘ, Pₘbₘ=bₘ, and WₙPₙ₋₁ᵀ=−Wₙ, bₙ=−bₙ(=0). In other words, each layer satisfies an equivariance condition.
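A quick numerical sanity check of the converse direction (my own sketch, not from the preprint): if the layers satisfy the sign-symmetry conditions, f is odd. I use tanh rather than the logistic sigmoid here since tanh is itself odd, and take zero biases so the conditions hold with the signed permutations Pₘ = −I.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three-layer tanh MLP with zero biases: f(x) = W3 tanh(W2 tanh(W1 x))
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(2, 8))

def f(x):
    return W3 @ np.tanh(W2 @ np.tanh(W1 @ x))

# With P1 = P2 = -I (a valid signed permutation), the layerwise
# conditions from the post hold:
#   P1 W1 = -W1,   P2 W2 P1^T = W2,   W3 P2^T = -W3,   all b_m = 0.
# Since tanh is odd, these layerwise conditions make f odd:
x = rng.normal(size=4)
assert np.allclose(f(-x), -f(x))
```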
Given the function specifying a sigmoid-MLP f(x)=Wₙσ(⋯σ(W₁x+b₁)⋯)+bₙ, we can recover the weights/biases up to (unique) signed permutations of the neurons by looking at f over the complex numbers, see the quoted thread.
What happens if f is odd, i.e. f(x)=−f(−x)?
PS Fefferman's proof has several assumptions on the network that were later simplified by Vlačić & Bölcskei (arxiv.org/abs/2006.11727)
A further zoom, where you can see this better.
In the visualization above, one can see that the network has three tanh layers, because there are poles that are accumulation points of poles, which are themselves accumulation points of poles.
In 1994 Fefferman proved that by analytically continuing a neural network with sigmoid-activations to the complex numbers, you can read off the weights of the network by looking at the pole structure of the function. www.math.stonybrook.edu/~bishop/clas...
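For intuition (my own illustration, not Fefferman's construction): tanh, continued to the complex plane, has simple poles on the imaginary axis at z = i(π/2 + kπ), and the growth of |tanh| near such a point is what makes the pole structure of the whole network numerically visible.

```python
import numpy as np

# tanh has simple poles at z = i*pi/2 + i*pi*k.
# Approaching the pole at i*pi/2 along the real direction,
# |tanh(i*pi/2 + eps)| = |coth(eps)|, which grows roughly like 1/eps.
pole = 1j * np.pi / 2
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, abs(np.tanh(pole + eps)))
```

Composing layers nests these poles: the poles of an inner tanh become accumulation points for the poles of the next layer, which is exactly the layered structure visible in the zoomed plots above.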
🔮 Working on ML on curved manifolds? Don't miss out on Jacobi Fields! 🔮
I wrote a quick, highly visual and hopefully accessible introduction to the topic: "Jacobi Fields in Machine Learning" 🤠 Check it out here: olgatticus.github.io/blog/jacobi-...!
Today NeurIPS is announcing our official satellite event in Paris.
After responding to the call from ELLIS following the success of EurIPS in December, we are pleased to reach a new milestone by joining forces with the NeurIPS organizing committee for the 2026 edition.
Why not call it Gembedding
👀
I've made SatAst, a small collection of hand-annotated satellite-to-astronaut image correspondences, public on GitHub: github.com/georg-bn/sat.... This benchmark is part of the RoMa v2 paper; see Johan's thread below. bsky.app/profile/pars...
Enrollment form strikes again. (OpenReview -> Tasks -> ECCV -> Author Enrollment Form)
I hate the ECCV template so much. Why can't it just die? Nobody prints proceedings into LNCS volumes anymore.
My favorite templates:
1. IEEE/CVPR
2. NeurIPS
...
...
39,2312,321. Etching TikZ commands in Morse code onto the skin of a dead walrus
...
...
Springer/ECCV
I think making the reviews and discussions public, at least for accepted papers, would be a good first step.
Sounds like a good approach!