Nice to see another fully open, multimodal LM released! Good license, training code, pretraining data, all here.
LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal Training
Slowly, the community is growing.
arxiv.org/abs/2509.236...
It's been three years now of nothing but LLMs in every NLP conference (and a large chunk of the ML venues too).
LLMs are fascinating, but is there really nothing else worth researching in NLP anymore?
Only a quarter of AI initiatives have delivered the expected return on investment, according to a survey of 2,000 CEOs.
Companies are struggling to get value from #GenAI. Most of the adoption of the technology is based on FOMO.
#AIEthics
www.theregister.com/2025/05/06/i...
"Science is an investment.
We will put forward a new 500 million package for 2025-2027 to support the best and the brightest researchers and scientists from Europe and around the world."
— President @vonderleyen.ec.europa.eu at the ‘Choose Europe for Science' event at La Sorbonne 🇫🇷
A new paper, "Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?", has people reconsidering whether the RL we're hearing about really works.
It argues that RL mainly elicits capabilities already present in the base model, and that as we get better verifiers we may not need to rely on RL as much.
Good read.
Multi-node, multi-GPU training is pretty easy with torchrun, just a few extra lines of code. Putting this out there into the world so people don't shy away from it.
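To make the "just a few extra lines" concrete, here's a minimal sketch of a two-node launch. The hostname `master-host`, the port, and the script name `train.py` are placeholders, and the node counts are assumptions for illustration; the flags themselves are standard torchrun options.

```shell
# Run the SAME command on every node; only --node_rank changes per node.
# Here: 2 nodes x 8 GPUs each. On the second node, use --node_rank=1.
torchrun \
  --nnodes=2 \
  --nproc_per_node=8 \
  --node_rank=0 \
  --rdzv_backend=c10d \
  --rdzv_endpoint=master-host:29500 \
  train.py
```

Inside `train.py`, the extra lines are roughly: call `torch.distributed.init_process_group("nccl")`, read `LOCAL_RANK` from the environment to pick a device, and wrap the model in `DistributedDataParallel`. torchrun sets the rendezvous environment variables for you.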
The paper also talks at some length about "sandbagging". I’d previously encountered sandbagging defined as meaning “where models are more likely to endorse common misconceptions when their user appears to be less educated”. The o3/o4-mini system card uses a different definition: “the model concealing its full capabilities in order to better achieve some goal” - and links to the recent Anthropic paper Automated Researchers Can Subtly Sandbag. As far as I can tell this definition relates to the American English use of “sandbagging” to mean “to hide the truth about oneself so as to gain an advantage over another” - as practiced by poker or pool sharks. (Wouldn't it be nice if we could have just one piece of AI terminology that didn't attract multiple competing definitions?)
Wrote up some notes on the o3/o4-mini system card, including my frustration at "sandbagging" joining the ever-growing collection of AI terminology with more than one competing definition simonwillison.net/2025/Apr/21/openai-o3-an...
A new study has found that the universe might be spinning. What does that even mean? Let’s have a look.
www.youtube.com/watch?v=Gm5n...
Time to remind ourselves of some observations about how trade appears to help stabilize alliances and prevent international conflict www.gsb.stanford.edu/insights/mat...
“Can you draw a photorealistic beach with no elephants?”
In my latest column for Science magazine, I discuss recent AI "reasoning" models -- how they work, to what extent they capture "genuine" reasoning processes, and what's needed to answer such questions.
www.science.org/doi/10.1126/...
This is absurdly great, but I haven't read a single news article about it. A fully open source, offline-first alternative to Notion that's a collab between the French and German governments because they want to host docs securely and on their own terms. THIS is what Europe should be doing.
Image description: A dark blue graphic with a bright blue box on it with text reading ' ‘WildPose is the culmination of a number of years of discussions that Amir and I had about how we could revolutionise the way wildlife can be tracked and monitored in 3D with minimal disturbance... I believe WildPose is a first step towards an exciting new era of rich 3D data from the wild.’ Professor Andrew Markham'. To the left of the text there is a circular picture of Professor Andrew Markham smiling at the camera. Beneath this, there are bright blue lines with dots attached that look like a circuit board. At the bottom of the graphic there is white text reading '@compscioxford #CompSciOxford'.
Oxford researchers have helped develop WildPose, a groundbreaking system using LiDAR & high-speed imaging to track wildlife in 3D from over 100m away. Capturing fine details like a lion’s breathing, it offers new insights into animal movement without invasive methods. www.cs.ox.ac.uk/news/2430-fu...
Is everyone now okay with using the term "thinking" to describe what LLM "reasoning" models do? And to call their outputs "thoughts"?
From OpenAI blog posts:
Happy Pi Day!
A lot of people lately are conflating novelty with unfamiliarity.
It explains all the responses of "this isn't new" to explanatory pieces which aren't claiming to be presenting new information. They're just trying to increase awareness.
So one good thing that seems to be happening right now is that a new end-to-end encryption standard "MLS" seems to be gaining a lot of momentum. Like, a lot.
And from what I understand this is an important step there as well, because RCS' encryption is MLS. Security folks correct me if I'm wrong
Wow, this seems to be extremely easy to code and extremely useful.
Transformers without Normalization
Jiachen Zhu, Xinlei Chen, Kaiming He, Yann LeCun, Zhuang Liu
arxiv.org/abs/2503.10622
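To give a sense of why this is "extremely easy to code": the paper's Dynamic Tanh (DyT) layer replaces LayerNorm with an elementwise `gamma * tanh(alpha * x) + beta`, where alpha is a learnable scalar and gamma/beta are learnable per-channel vectors. Here's a plain-Python sketch of the forward pass (a real model would use tensors; the initial values below follow the paper's defaults, but treat the details as a reading of the paper, not a reference implementation):

```python
import math

def dyt(x, alpha, gamma, beta):
    """Dynamic Tanh (DyT) forward pass on one feature vector.

    x:     list of floats (one token's features)
    alpha: learnable scalar (paper initializes around 0.5)
    gamma: learnable per-channel scale (init 1.0)
    beta:  learnable per-channel shift (init 0.0)
    """
    return [g * math.tanh(alpha * xi) + b
            for xi, g, b in zip(x, gamma, beta)]

# Example with default-style initialization:
out = dyt([0.0, 2.0, -2.0], alpha=0.5, gamma=[1.0] * 3, beta=[0.0] * 3)
```

Note there's no mean/variance computation at all: the tanh squashes outliers the way normalization's statistics otherwise would, which is what makes it a drop-in swap.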
NEW 🧵 Is human intelligence starting to decline?
Recent results from major international tests show that the average person’s capacity to process information, use reasoning and solve novel problems has been falling since around the mid-2010s.
What should we make of this?
www.ft.com/content/a801...
We'll commit to a slice 🥧
Happy Pi Day!
"Junk papers proliferate at vanity journals and legitimate ones alike, due in part to the “publish or perish” ethos that pervades the research enterprise, and in part to the catastrophic business model that has captured much of scientific publishing since the early 2000s."
It is so strange that we have to figure out how (or even whether) our latest software does critical functions that would normally have to be carefully designed.
More like biology or psychology than computer science.
“We should stop training scientists now. It’s obvious that within three years, AI is going to do better than Nobel Laureates.”
is the new
“We should stop training radiologists now. It’s just completely obvious that within five years, deep learning is going to do better than radiologists.”
A scatter plot comparing AI model performance on MMLU-Pro against latency in milliseconds per token. The x-axis represents latency (milliseconds per token), and the y-axis represents performance (MMLU-Pro score).
- **Mistral Small 3** (highlighted in orange with a castle emoji) is positioned in the upper-left region, indicating high performance and low latency.
- **GPT-4o Mini** is slightly lower in performance but has higher latency.
- **Qwen-2.5 32B** is positioned higher in performance but with greater latency.
- **Gemma-2 27B** has lower performance and the highest latency among the models.
The benchmark is based on Apache 2.0 models using vLLM with a batch size of 16 on 4xH100 GPUs, with GPT-4o Mini data sourced from OpenAI's API.
Mistral Small 3
A 24B LLM that's VERY fast with great function calling
More important, MISTRAL IS OPEN SOURCE AGAIN!!!!!!
mistral.ai/news/mistral...
Interesting paper that tests GPT-4o’s ability to handle financial predictions and finds weak numeric reasoning & that a lot of apparent ability is actually due to memorized training data. At the same time, they show promise when combined with tool use. papers.ssrn.com/sol3/papers....
@pfrazee.com Really hoping for a bookmark option. Do you know if it's something that might come in the near future?
is the academic ML paper publishing cycle just a very unoptimized form of grid search for what models work best, and is there One True Model we will eventually converge on
There's a lot of enthusiasm in the community about transformers trained on chemical or biological data.
Here are some interesting results and some thoughts on future directions.
$15,000 in prizes for Deep Tech innovations, anyone?
Introducing Exponential Science Pioneers Award! 🏆
Do you know a groundbreaking research paper in DLT, AI, IoT, Quantum, Spatial Computing or other emerging digital technologies?
👇
I see a lot of (correct) complaints that AGI and agents are badly defined. This problem will not be solved because:
1) AGI and agents inherently rely on comparisons to humans, and we don't have good definitions of human agency or general ability
2) Marketing is incentivized to blur any definitions