my self-discipline when there’s cake
Posts by Rémi Flamary
Huge congrats @vickykalogeiton.bsky.social for being awarded the médaille de bronze in Computer Science!👏🥳🎉🍾
Very well deserved for all the hard work and fantastic ideas!
www.ins2i.cnrs.fr/fr/talents/c...
@cnrsinformatics.bsky.social @ipparis.bsky.social @ecolepolytechnique.bsky.social
🚨 arxiv.org/abs/2604.06129
PoM: A Linear-Time Replacement for Attention with the Polynomial Mixer
This paper is the result of doing a lab-wide hackathon on an idea I've had for some time. Probably the paper with the highest number of authors I've ever done.
It's in CVPR Findings 26.
Thread 🧵👇
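The thread above announces the idea but not the mechanics. As a loose, hypothetical illustration of what a linear-time token mixer can look like (one pass accumulates global polynomial statistics of the token features, then each token is combined with them), here is a stdlib sketch. This is NOT the PoM layer from the paper; the function name and design are assumptions for illustration only.

```python
# Hypothetical sketch of a linear-time token mixer (NOT the actual PoM layer).
# One O(n*d) pass accumulates elementwise powers of the token features;
# each token is then mixed with these pooled moments, avoiding the O(n^2)
# pairwise interactions of attention.

def poly_mix(tokens, degree=2):
    """tokens: list of n feature vectors (lists of d floats)."""
    n, d = len(tokens), len(tokens[0])
    # Single pass over the sequence: sums of x, x^2, ..., x^degree.
    sums = [[0.0] * d for _ in range(degree)]
    for x in tokens:
        for p in range(degree):
            for j in range(d):
                sums[p][j] += x[j] ** (p + 1)
    # Each output token combines its own features with the pooled moments.
    out = []
    for x in tokens:
        mixed = [x[j] + sum(sums[p][j] for p in range(degree)) / n
                 for j in range(d)]
        out.append(mixed)
    return out
```

The point of the sketch is only the complexity argument: the pooled statistics are computed once, so cost grows linearly with sequence length instead of quadratically.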
Of course you are the one listing the initialization! 3407 to the rescue 🛟
Richard Sutton talks about this
This editorial discusses the critical value of human-generated scientific writing in the era of large language models (LLMs), arguing that writing is essential to structured thinking and research comprehension.
Writing as Thinking: The act of writing structures thoughts, sorts research data, and identifies the main message, unlike LLMs, which may lack true understanding or accountability.
LLM Hallucinations: LLM-generated text requires rigorous verification because these models can produce incorrect information or fake references.
Human vs. AI Roles: While LLMs are useful tools for brainstorming, improving grammar, or overcoming writer's block, human researchers must maintain control to engage in the creative task of shaping a compelling narrative.
Writing forces your brain to coordinate memory, reasoning, and meaning-making simultaneously.
Every time you write, you rewire toward clearer thinking. Every time you let an LLM do it, you rewire toward consumption.
The point is not that you can't learn from Claude, or from both Claude and other humans. It is that when you use it to simply explain everything, or even do the grunt work, you don't learn as much as when you actually do it yourself. Filtering noisy feedback is genuinely hard to learn.
Thank you David, that is a very well thought out text and I agree with everything in it. Grunt work, failing, and trying to understand what failed is how you learn and understand. We need to find a way for the next generation of scientists to do it and not leave it to LLMs, or we fail as educators.
Related read: ergosphere.blog/posts/the-ma...
Copy of the section in copilot setting that allows you to disable training on your local data
Remember to do that before April 24th if you don't want your data (and behavior wrt copilot) to be used for training. They already have our code and papers, it should be enough.
A comic in four panels:
Panel 1. Cepper, the Gothic Sorceress, sits at her workbench in the basement of the university with her iconic clothes and glasses on her head. She is surrounded by steampunk cogs, wires, circuits, and code snippets written on parchments. She's determined in front of her masterpiece, her own local AI Parrot looking like a big pigeon.
> Cepper: "I did it! Running 100% locally now. My own machine, my own terms! hehe."
Panel 2. She excitedly asks it a question, but it takes an eternity to respond.
> Cepper: "Avian Intelligence, what's the airspeed velocity of an unladen swallow?"
> Local AI Parrot: "1... 1... m... e... t... e... r... s... loading 2%"
Panel 3. Cepper starts to realize the immense computational power required to run AI models. She looks at her local AI Parrot and starts to wonder.
> Cepper: "Ouch. That's painfully slow, even with the largest magical stone I had!"
> Local AI Parrot: "p...e...r... s...e...c...o...n...d... loading 4%"
Panel 4. A shot late at night: she sleeps deeply on a big armchair, while the local AI Parrot still finishes outputting.
> Local AI Parrot: "a...n...d... t...h...a...t...s... a...l...l... loading 100%"
The Local Alternative
#webcomic #krita #miniFantasyTheater
I'm totally biased on this but I think it's wonderful that we now have official NeurIPS parallel satellite events. Parallelization was the trick that allowed us to scale to large data and models. It makes sense to try it for conferences as the community grows.
You will find all official information on the NeurIPS website and on our social networks.
NeurIPS Europe is already supported by SSFAM, Hi! Paris, and PRAIRIE-PSL.
Satellite General Chairs: Linus Bleistein, Olivier Cappé, Laetitia Chapel, Edwige Cyffers, Rémi Flamary, Pierre Marion
Today NeurIPS is announcing our official satellite event in Paris.
After responding to the call from Ellis following the success of EurIPS in December, we are pleased to reach a new milestone by joining forces with the NeurIPS organizing committee for the 2026 edition.
Following the success of the EurIPS and NeurIPS-Mexico City pilots in 2025, we are thrilled to announce two official NeurIPS 2026 satellite events!
These will be held in Paris, France and Atlanta, USA, running alongside the main venue in Sydney, Australia.
They described the process and it seems robust, with reviews that were all checked by a human too. They did their best to minimize false positives. The reviews that were rejected in my pool were all bad, and as AC I had already contacted the reviewers to ask them to improve them.
"we are removing the affected reviews and desk rejecting the 497 papers where a violating reviewer was serving as a reciprocal reviewer---approximately 2% of submissions in total." I'm very happy the ICML PC takes a hard stance against reviewers not respecting the LLM reviewing policy.
👋 Meet the ELLIS Board
This episode features Florence d’Alché-Buc, Prof at @telecomparis.bsky.social 🇫🇷 & ELLIS Board Member. She shares her views on data vs. algorithms, causality, foundation models, and how AI systems should be evaluated.
Watch the video: youtu.be/kgLCpIUEwiY
Happy to share a major milestone: after years of development, we are officially launching Version 1.0 of the GeometricKernels library!
To top it off, our accompanying paper has just been published in JMLR (MLOSS)! 🎉
github.com/geometric-ke...
This is such a good illustration! Too bad I didn't have the idea when I wrote that silly little paper a few years ago.
Anyway, remember folks: torch.manual_seed(3407) is all you need
No other seed has been subjected to as much scrutiny as this one
And it's all yours for free!
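The running joke above refers to torch.manual_seed(3407). For anyone who missed it, the underlying point is just RNG reproducibility: fixing the seed makes a stochastic run repeatable. A minimal stdlib sketch (using Python's random as a stand-in for PyTorch's RNG; noisy_eval is a hypothetical name):

```python
import random

def noisy_eval(seed):
    # Stand-in for a training run whose "result" depends on RNG draws.
    rng = random.Random(seed)  # fixed seed -> reproducible random stream
    return sum(rng.random() for _ in range(5))

# Same seed reproduces the run exactly; a different seed does not.
a = noisy_eval(3407)
b = noisy_eval(3407)
c = noisy_eval(42)
assert a == b
assert a != c
```

In PyTorch the analogue is torch.manual_seed(...); whether 3407 is better than any other seed is, of course, the joke.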
This lunch made me feel old and irrelevant like a kernel machine guy at ICLR
It reminds me of a recent lunch with @rflamary.bsky.social and his team, where we were discussing Highlander. Then I asked the PhD students and postdocs if they knew the movie (not even the series) and... total blank. They had no idea.
One week later, same story with Buffy the vampire slayer 🤣
Dans le Canard de cette semaine.
Calvin and Hobbes! Making me feel bad about my job as a ML researcher.
But we say we do it because it's practical only because nobody except the geeks sees the beauty of it.
This semester I have been given the chance to teach a course on "Deep Learning for Time Series" at @ipparis.bsky.social
If you're interested in the topics, have a look at rtavenar.github.io/x-dl4ts/
Repo has Typst code for the slides and Python code for the labs.
Feedback is very much welcome.
This year I'm teaching a new course on generative models for visual content (images, video, 3D, etc). It's mostly me rambling about recent papers, design choices I like/hate. The slides of the first lectures are here: davidpicard.github.io/teaching/
Use right arrow to navigate past the blank page.
Probably, if you give it meaningful metrics such as compile time and execution speed on benchmarks. But be careful what you wish for: checking that a compilation is correct is difficult.
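As a hedged sketch of the point above, combining a speed metric with a correctness check could look like differential testing plus timing: validate the candidate's outputs against a trusted reference on random inputs, then measure execution speed on a fixed workload. All names here are illustrative, not any real tool's API; sorting stands in for the compiled code under test.

```python
import random
import timeit

def reference_sort(xs):
    # Trusted (possibly slow) baseline implementation.
    return sorted(xs)

def candidate_sort(xs):
    # Stand-in for the optimized/agent-generated code under test.
    return sorted(xs)

def check_and_time(trials=100, size=50):
    rng = random.Random(0)
    # Differential test: candidate must match the reference on random inputs.
    for _ in range(trials):
        xs = [rng.randint(0, 1000) for _ in range(size)]
        assert candidate_sort(xs) == reference_sort(xs)
    # Execution-speed metric on a fixed workload.
    data = [rng.randint(0, 1000) for _ in range(size)]
    return timeit.timeit(lambda: candidate_sort(data), number=1000)

elapsed = check_and_time()
```

Differential testing only gives statistical confidence on the sampled inputs, which is exactly why "correct compilation is difficult to check": equivalence in general is much harder than passing benchmarks.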
To be clear, ccc is an impressive achievement of agentic AI, but I think we should be careful about computational and energy usage and write efficient, optimized code. Being cheap in person-months is short-sighted if resources are lost.