That’s exactly where I was at that time
Posts by Kyle Cranmer
Got caught in a tornado on my drive home from Fermilab. Scary.
Neutrino beam going through that white spot in the wall… sure, if you say so.
Visiting @fermilab.bsky.social to give a “Wine & Cheese” seminar on Emerging Patterns in AI for Science.
The shot heard around the (US scientific research) world
#GenesisAmendment000002
It was great to have @tylerjamesburch.com of the Red Sox talk about sports analytics! He’s holding a CD with the first release of R, signed by the core developers
Later today we will have Alyssa Travitz talking about the role of Research Software Engineering in the molecular sciences, pharmaceuticals, etc.
Oh no!!! I’m sorry
Nothing bad… actually good, it’s just that it’s dominating my time / inbox.
Genesis, Genesis, Genesis, Genesis, ....
FML
It's uncanny... this is from the article.
Here's the @quantamagazine.bsky.social article. Galashin forgot to cite me 😂
www.quantamagazine.org/origami-patt...
For April Fools’ Day in 2014 I wrote a paper called The Realineituhedron, in reference to the Amplituhedron. This was one of the figures. In October @QuantaMagazine did a piece connecting the Amplituhedron to origami 😂 theoryandpractice.org/2014/04/The-...
You’d be surprised, there is a connection
While the sketch of the proof in the appendix of this early draft needs to be tightened up, I'm amazed that this approach actually has several optimality properties, ones that you don't get with the traditional approach of fixed, task-agnostic representations.
I also really like the differentiable optimization layer. It reminds me of a few years ago when 'differentiable programming' was trending and there was a lot of innovation in non-standard differentiable layers. Here we don't need to differentiate through Opt(), but still...
Another thing that I like is that you can either see the scalar bottleneck as a puny 1-d scalar, or you can see it as an infinite dimensional function. The representation is really a function, so it has enormous representational capacity.
There are many cool things here. One is that this approach makes it very easy to append new modalities or i.i.d. observations. Each modality can be trained independently, which has enormous practical implications for distributed scientific collaborations.
🚨I'm happy to share a preview draft of a new paper, "Scalars Are All You Need for Multimodal Inference". Instead of the traditional approach to multimodal foundation models for science with task-independent embeddings, I outline an alternative strategy
theoryandpractice.org/2026/04/scal...
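The scalar-bottleneck idea described in the thread above — each modality reduced to a single scalar summary, with modalities handled independently and new ones appended freely — can be sketched as a toy example. All function names and the trivial combination rule here are my own illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch of a "scalar bottleneck" for multimodal inference:
# each modality is compressed to one scalar summary statistic, computed
# independently, and the scalars are combined for downstream inference.
# The summaries and the sum-combination below are toy stand-ins.
import numpy as np

def encode_modality_a(x: np.ndarray) -> float:
    """Toy per-modality summary: reduce this modality's observations to one scalar."""
    return float(np.mean(x))

def encode_modality_b(x: np.ndarray) -> float:
    """A second, independently defined scalar summary for another modality."""
    return float(np.median(x))

def combine(scalars: list) -> float:
    """Combine per-modality scalars; adding a modality just appends a scalar."""
    return float(np.sum(scalars))

rng = np.random.default_rng(0)
a = rng.normal(loc=1.0, size=100)  # observations from modality A
b = rng.normal(loc=2.0, size=100)  # observations from modality B
score = combine([encode_modality_a(a), encode_modality_b(b)])
```

Because each encoder sees only its own modality, each could be trained or swapped out independently, which is the practical property highlighted in the thread.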
I thought this was pretty funny while on vacation, but I'm realizing that the email is going to find me tomorrow when I go back to work.
How to make Physics professors cry
🆕🌟Introducing Machine Learning: Science and Technology's first roadmap 👉 Roadmap on fast machine learning for science.
Unlock the roadmap: https://ow.ly/sVO950Ywji3
#machinelearning #ai #fastmachinelearning
We’ve had a great response to RABBIT. Shout out to our lead developer Jason Lo
Happy π day
youtu.be/fsLh-NYhOoU?...
Michael Botts tells us about Proteus at the UW-Madison Great Lakes Bioenergy Research Center
energy.wisc.edu/news/autonom...
Matt Sinclair talks about simulating computer architectures from hardware through the application layer (GPUs to PyTorch) with gem5
gem5.org
pages.cs.wisc.edu/~sinclair/
I’m very excited about this event, which we conceptualized this summer.
Next up, Open Hardware trailblazer Kevin Eliceiri on optical imaging
@kscottz.bsky.social kicks off Build, Create, Share:
Fostering Innovation and the Role of Open Source Hardware
indico.global/event/17245/ti…
ospo.wisc.edu