Knowledge Graph Foundation Models (KGFMs) are at the frontier of graph learning - but we didn’t have a principled understanding of what we can (or can’t) do with them. Now we do! 💡🚀🧵
With Pablo Barcelo, Ismail Ceylan, @mmbronstein.bsky.social , @mgalkin.bsky.social, Juan Reutter, Miguel Romero!
🚨Our paper on how the cerebellum learns to drive cortical dynamics for rapid task learning and switching, which we propose can then be consolidated in the cortex @naturecomms.bsky.social
nature.com/articles/s41...
🧠 #compneuro
An excellent overview of mechanistic interpretability.
youtu.be/UGO_Ehywuxc
HBO has declined to renew “Sesame Street” for new episodes. The series that’s been teaching generations of little kids since 1969 now has no studio.
Please consider donating to Sesame Workshop to ensure the residents of 123 Sesame Street are still around to teach kids of all needs and backgrounds.
Interested in AI for scientific discovery? Our research team has four workshop presentations at NeurIPS that span LLM mechanistic interpretability, graph neural networks, and diffusion models -- all presented today!
A 🧵 of our results below (each paper is linked):
Normalizing Flows are Capable Generative Models
Apple introduces TarFlow, a new Transformer-based variant of Masked Autoregressive Flows.
SOTA on image likelihood estimation, with sample quality and diversity comparable to diffusion models.
arxiv.org/abs/2412.06329
Entropy is one of those formulas that many of us learn, swallow whole, and even use regularly without really understanding.
(E.g., where does that “log” come from? Are there other possible formulas?)
Yet there's an intuitive & almost inevitable way to arrive at this expression.
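One way to see where the "log" comes from (my own illustration, not from the thread): it is exactly what makes entropy additive over independent sources. A minimal sketch:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# The log makes entropy additive for independent variables:
# H(X, Y) = H(X) + H(Y) when X and Y are independent.
px = [0.5, 0.5]          # a fair coin: 1 bit
py = [0.25, 0.25, 0.5]   # a biased 3-outcome die: 1.5 bits
joint = [a * b for a in px for b in py]  # independent joint distribution

print(entropy(px))     # 1.0
print(entropy(py))     # 1.5
print(entropy(joint))  # 2.5 = 1.0 + 1.5
```

Any other choice of function would break this "surprise of independent events adds up" property, which is one route to showing the formula is essentially forced.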
Multiplicative noise is good! 🎲
Just make your neural network weights noisy (like 🧠?) and reap the benefits of robustness to corruptions with no loss on clean data.
🌟Spotlight paper at #NeurIPS2024 led by Trung Trinh & w/ Markus Heinonen and @samikaski.bsky.social
trungtrinh44.github.io/DAMP/
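A minimal sketch of the idea (my own toy illustration, not the paper's method): multiply each weight by Gaussian noise on every forward pass. Because the noise has mean 1, the expected output is unchanged while each individual pass is perturbed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer y = W @ x with multiplicative Gaussian weight noise:
# each forward pass uses W * (1 + sigma * eps), eps ~ N(0, 1) elementwise.
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
sigma = 0.2

def noisy_forward(W, x, sigma, rng):
    eps = rng.normal(size=W.shape)
    return (W * (1.0 + sigma * eps)) @ x

# E[W * (1 + sigma * eps)] = W, so averaging many noisy passes
# recovers the clean output.
clean = W @ x
avg = np.mean([noisy_forward(W, x, sigma, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(avg - clean)))  # small
```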
One of the best tutorials for understanding Transformers!
📽️ Watch here: www.youtube.com/watch?v=bMXq...
Big thanks to @giffmana.ai for this excellent content! 🙌
I really enjoyed your Designing Machine Learning Systems book, it’s a fantastic wide ranging treatment. Looking forward to reading AI Engineering.
Super happy to reveal our new paper! 🎉🙌♟️
We trained a model to play four games; performance in each improves with "external search" (MCTS using a learned world model) and with "internal search", where the model outputs the whole plan on its own!
We are organising the First International Conference on Probabilistic Numerics (ProbNum 2025) at EURECOM in southern France in Sep 2025. Topics: AI, ML, Stat, Sim, and Numerics. Reposts very much appreciated!
probnum25.github.io
Samples y | x from Treeffuser vs. true densities, for multiple values of x under three different scenarios. Treeffuser captures arbitrarily complex conditional distributions that vary with x.
I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.
paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...
🧵(1/8)
I think you have to do this analysis using line level data to make sure you are isolating the appropriate subpopulation. There’s line level data globally (at least there used to be) and it should still be out there on the CDC website. Japanese prefecture level data should be good for this too.
A collage of book covers of new nature & science books. Featured here are Atlas Obscura Wild Life, Dinosaurs at the Dinner Party, Every Living Thing, Alien Earths, Becoming Earth, and Deep Water
A collage of book covers of new nature & science books. Featured here are Frostbite, The Inner Clock, How to Kill an Asteroid, The Great River, The Last Fire Season, and Hoof Beats
A collage of book covers of new nature & science books. Featured here are The Serviceberry, Not the End of the World, Nature's Ghosts, Meet the Neighbors, The Light Eaters, and Our Moon
A collage of book covers of new nature & science books. Featured here are Turning to Stone, The Weight of Nature, What If We Get It Right?, Waves in An Impossible Sea, The Tree Collectors, and Why We Remember
I was deeply disappointed by the lack of nature/science/climate/enviro on many major end-of-year book lists—so I decided to make my own!
Introducing: ✨🎁📚 The 2024 Holiday Gift Guide to Nature & Science Books ✨🎁📚
Please share: Let's make this go viral in time for Black Friday / holiday shopping!
Bayesball! www.biorxiv.org/content/10.1... The Bayesian nature of baseball. From Brantley & @kordinglab.bsky.social
An employee of Hugging Face, a platform that hosts AI models and datasets, made a dataset of a million Bluesky posts, scraped simply because they could. It’s currently trending: www.404media.co/someone-made...
There’s a single formula that makes all of your diffusion models possible: Tweedie's
Say 𝐱 is a noisy version of 𝐮 with 𝐞 ∼ 𝒩(𝟎, σ² 𝐈)
𝐱 = 𝐮 + 𝐞
MMSE estimate of 𝐮 is 𝔼[𝐮|𝐱] & would seem to require P(𝐮|𝐱). Yet Tweedie says P(𝐱) is all you need: 𝔼[𝐮|𝐱] = 𝐱 + σ² ∇ log P(𝐱)
1/3
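A quick numeric check (my own illustration, not from the thread) in the conjugate Gaussian case, where both the Tweedie estimate and the exact posterior mean are analytic:

```python
# Gaussian sanity check of Tweedie's formula:
#   E[u | x] = x + sigma^2 * d/dx log p(x)
# With u ~ N(mu0, s0^2) and e ~ N(0, sigma^2), the marginal of
# x = u + e is N(mu0, s0^2 + sigma^2), so its score is
# -(x - mu0) / (s0^2 + sigma^2).
mu0, s0, sigma = 1.0, 2.0, 0.5
x = 2.3

score = -(x - mu0) / (s0**2 + sigma**2)
tweedie = x + sigma**2 * score

# Exact posterior mean for this conjugate Gaussian model:
posterior_mean = (s0**2 * x + sigma**2 * mu0) / (s0**2 + sigma**2)

print(tweedie, posterior_mean)  # identical
```

Only the marginal density of the noisy observation enters the formula, which is exactly why score-based diffusion models can denoise without ever modeling P(𝐮|𝐱) directly.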
For those who missed this post on the-network-that-is-not-to-be-named, I made public my "secrets" for writing a good CVPR paper (or any scientific paper). I've compiled these tips over many years. It's long but hopefully it helps people write better papers. perceiving-systems.blog/en/post/writ...
NeurIPS Conference is now Live on Bluesky!
-NeurIPS2024 Communication Chairs
Postdoc opportunities! The Johns Hopkins Data Science and AI Institute has a new postdoc program!
We’re looking for candidates across data science and AI, including science, health, medicine, the humanities, engineering, policy, and ethics.
Spread the word and apply!
ai.jhu.edu/postdoctoral...
I'm slowly putting my intro to ML course material on github, starting with the lab sessions: github.com/davidpicard/...
These are self-contained notebooks in which you have to implement famous algorithms from the literature (k-NN, SVM, DT, etc), with a custom dataset that I (painstakingly) made!
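To give a flavor of the kind of algorithm the labs ask you to implement (my own minimal sketch, not the actual notebook code), here is k-NN in plain NumPy:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-NN classifier: majority vote over the k nearest
    training points under Euclidean distance."""
    # Pairwise squared distances, shape (n_test, n_train)
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]  # indices of k nearest neighbours
    votes = y_train[idx]                 # their labels
    # Majority vote per test point
    return np.array([np.bincount(v).argmax() for v in votes])

# Two well-separated clusters as a toy dataset
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(knn_predict(X, y, np.array([[0.1, 0.0], [2.9, 3.1]])))  # [0 1]
```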
Even as an interpretable ML researcher, I wasn't sure what to make of Mechanistic Interpretability, which seemed to come out of nowhere not too long ago.
But then I found the paper "Mechanistic?" by
@nsaphra.bsky.social and @sarah-nlp.bsky.social, which clarified things.
Since this platform is finally attracting a critical mass of ML researchers, here's our recent work on prompt-based vulnerabilities of coding assistants:
arxiv.org/abs/2407.11072
TL;DR — An attacker can convince your favorite LLM to suggest vulnerable code with just a minor change to the prompt!
I still have to finish reading this post, but it's the first time, ever since the transformer paper, that I feel like I grok what "positional encoding" really is.
fleetwood.dev/posts/you-co...
Wonderful post - there is the reality on the ground, and there are the statistics about government. Never have the two felt so detached, and Democrats are obsessed with the latter (to their detriment) | Trump Didn't Deserve to Win, But We Deserved to Lose www.joshbarro.com/p/trump-didnt-deserve-to...
“The gap between Democrats’ promise of better living through better government and their failure to actually deliver better government has been a national political problem.”
An honest and refreshing reflection.
Really interesting pre-print on using collaborative filtering to detect anomalies in patterns of movement / mobility.
arxiv.org/abs/2409.18427