Amazing and well deserved.
Posts by Jan van Gemert
Relevant in our Meta-science for ML workshops
metascienceforml.github.io
And our just-accepted ECCV workshop on "Empirical Theory in Representation Learning"
Strevens' knowledge machine in action!
Unifying Popper's penalizing Pendulum and Kuhn's Paradigmatic Pendulum aligner.
www.strevens.org/scientia/
Progress in academia
How many matches to estimate a Heavenography?
Was the topic sufficiently... Matching?
Like Eliza, LLMs, and horoscopes: the interpretation is done by a human who is excellent at meaningfication :)
We're looking for a new colleague at @amlab.bsky.social: Assistant Professor in AI for Science
World-class ML research, Amsterdam's thriving AI ecosystem (ELLIS, startups, big tech), and some of the best academic labor conditions in Europe
Deadline: May 30. werkenbij.uva.nl/en/vacancies...
"Through the lens of coincidence"
Had a nice rhyme to it (especially when pronounced with a French accent)
Great minds think alike?
(Or.. rethinking whether a great mind is all you need ;) )
Does the same hold for science organisation by university?
- The bitter lesson: Richard Sutton's view on AI
- The butter lesson: more butter makes better food @gracekind.net
- The batter lesson: crΓͺpes are better than pancakes
- The better lesson: AI can potentially improve teaching
- The botter lesson: genAI takes over social media
In this incredibly detailed and gracious review of βThe Irrational Decision,β @himself.bsky.social situates the book in the web of intellectual history and the war between the AI enthusiasts and skeptics.
This editorial discusses the critical value of human-generated scientific writing in the era of large language models (LLMs), arguing that writing is essential to structured thinking and research comprehension.
- Writing as thinking: the act of writing structures thoughts, sorts research data, and identifies the main message, unlike LLMs, which may lack true understanding or accountability.
- LLM hallucinations: LLM-generated text requires rigorous verification because these models can produce incorrect information or fake references.
- Human vs. AI roles: while LLMs are useful tools for brainstorming, improving grammar, or overcoming writer's block, human researchers must maintain control to engage in the creative task of shaping a compelling narrative.
Writing forces your brain to coordinate memory, reasoning, and meaning-making simultaneously.
Every time you write, you rewire toward clearer thinking. Every time you let an LLM do it, you rewire toward consumption.
If we train an LLM to suggest missing scientific citations, we should call it Jürgen
Oh, we have our lab retreat starting on exactly that date..
Too bad, I would have loved to come; it's a great initiative!
Happy to announce CVPR@Paris'26 which will take place on June 1st in Paris. The goal of the event is to share a little bit of the conference before it happens. We will have poster sessions as well as several plenary talks by world-class speakers.
info: cvprinparis.github.io/CVPR2026InPa...
All operations are still translation equivariant modulo the stride.
It's only the input to the equivariant operators that's been forced to be translation dependent.
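The "equivariant modulo the stride" claim can be checked numerically. A minimal sketch, assuming a toy 1-D strided "convolution" (a sum over each window, i.e. an all-ones kernel; all names here are hypothetical): shifting the input by exactly one stride shifts the output by exactly one position.

```python
import numpy as np

rng = np.random.default_rng(0)

def strided_window_sum(x, k=4, stride=4):
    """Toy strided 'convolution': sum each length-k window, sliding by stride."""
    n_out = (len(x) - k) // stride + 1
    return np.array([x[i * stride : i * stride + k].sum() for i in range(n_out)])

x = rng.normal(size=32)
shifted = np.roll(x, 4)               # circularly shift the input by one stride

y = strided_window_sum(x)             # 8 outputs
y_shifted = strided_window_sum(shifted)

# Translation equivariance modulo the stride:
# input shifted by one stride -> output shifted by one position.
assert np.allclose(np.roll(y, 1), y_shifted)
```

A shift by less than the stride would not map cleanly onto an output shift, which is exactly the "modulo the stride" caveat.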
Back to multiplications and additions; what more do you need?
(Besides a good init, obviously ;) )
Indeed, the "bye bye convolutions" narrative was quite tiring.
The MLP shares parameters for each token, where tokens are created by a sliding window.
I.e., ViTs use convolutions all the way down, interleaved with self-attention.
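The "sliding-window MLP" reading of patch embedding can be sketched in numpy: a shared linear projection applied to each patch is exactly a convolution with kernel size equal to the stride equal to the patch size. Shapes and variable names below are illustrative assumptions, not from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8; C = 3; P = 4; D = 5        # image size, channels, patch size, embed dim
img = rng.normal(size=(C, H, W))
W_proj = rng.normal(size=(C * P * P, D))  # the shared ("MLP") projection weights

# View 1: ViT-style patch embedding, one shared linear layer per patch.
patches = (img.reshape(C, H // P, P, W // P, P)
              .transpose(1, 3, 0, 2, 4)      # (rows, cols, C, P, P)
              .reshape(-1, C * P * P))
tokens = patches @ W_proj                    # (num_patches, D)

# View 2: the same computation as a convolution with kernel = stride = P.
kernel = W_proj.T.reshape(D, C, P, P)
conv_out = np.empty((H // P, W // P, D))
for i in range(H // P):
    for j in range(W // P):
        window = img[:, i*P:(i+1)*P, j*P:(j+1)*P]
        conv_out[i, j] = np.tensordot(kernel, window, axes=3)

# Both views produce identical tokens.
assert np.allclose(tokens, conv_out.reshape(-1, D))
```

Self-attention then mixes these convolution-produced tokens, hence "convolutions interleaved with self-attention".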
Collectively submit fewer papers?
/s
Wonderful! I think the glitches add flavor
What are "natural regularities"?
I.e., how would one construct a controlled dataset to evaluate this (interesting!) hypothesis in a confounder-free setting?
(I.e., one where the model learns with natural regularities but fails without them, and the only source of variation is the presence or absence of natural regularities.)
Excellent overview, and critical thinking
Do you mean this Li et al.?
neurips.cc/virtual/2025...
I find the mix between feature binding and object binding confusing; i.e., their "IsSameObject" can also be done with an edge detector (inside vs. outside the object), no feature binding required.
They guide me to see something that may, or may not be there. It's quite surreal, and I like them a lot
Healthier than day-drinking ;)
Great post about how and why to do science (and winning a best paper award is not the goal).