Aravind Srinivas (Perplexity) tells Lex all the ways in which I was right against the prevalent ideas of the time: DL, ConvNets, energy-based models, SSL, the limitation of RL, and now the limitations of auto-regressive generative models including LLMs.
Thanks Aravind!
youtu.be/mnGUfkMt9fE?...
Posts by Yann LeCun
The total distance flown by passengers on US airlines since the last crash is 2.3 light-years.
That's about 2.2*10^13 km. The last crash was on Feb 12, 2009.
ourworldindata.org/us-airline-t...
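The light-year-to-kilometre conversion above can be checked with a couple of lines of Python (the light-year constant is the standard IAU value, not from the post):

```python
# Sanity check: 2.3 light-years expressed in kilometres.
LIGHT_YEAR_KM = 9.4607e12  # one light-year in km (IAU definition, rounded)

distance_km = 2.3 * LIGHT_YEAR_KM
print(f"{distance_km:.2e} km")  # ~2.2e13 km, matching the post
```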
I didn't even realize the @ylecun account existed.
I must have created it a long time ago and forgotten.
Well, I'm here under @yann-lecun.
But there is another account @ylecun registered under the same email address.
I can't seem to log into @ylecun.
If I try and reset the password, it resets it for @yann-lecun.
I'm not sure what to do.
I suspect I need to just delete the @yann-lecun account....
Another thing to be thankful for: the use of AI to help improve climate change models.
From The Economist: www.economist.com/science-and-...
Free access on archive.org: archive.is/N9uaF
Topics: the history of AI, science and engineering, what is intelligence, GOFAI and neural nets, how does machine learning work, convolutional nets and transformers, self-supervised learning, LLMs and their limitations, why are JEPAs and why we need them, advice to students and entrepreneurs.
Excellent podcast with Nikhil Kamath in which we cover a lot of topics related to AI and deep learning.
www.youtube.com/watch?v=JAgH...
WaPo editorial: Be thankful for the applications of AI in medicine.
More accurate detection of cancers (breast, prostate, skin, brain), faster diagnosis of strokes, sepsis, and heart attacks, faster MRIs, full-body scans in 40 minutes.
Much more to come over the next few years.
www.washingtonpost.com/opinions/202...
Video of my chat with Gaurav Agarwal on the future of AI, world models, planning/reasoning, the limitations of LLMs, AI as an empirical science, and why neural nets were shunned.
And how countries could jumpstart their AI ecosystem by creating top industry research labs.
youtu.be/4V_cJX8sVeM?...
Auto-Regressive LLMs (generative models with causal transformer architectures) and BERT-style models (denoising auto-encoders with transformer architectures) are smashing demonstrations of the power of self-supervised (pre-)training.
But they only work for sequences of discrete symbols: language, proteins...
I first showed this "cake" slide in 2016.
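The two pre-training styles mentioned above differ mainly in how the input is hidden from the model: a causal mask for auto-regressive LLMs, random token corruption for BERT-style denoising. A minimal sketch in plain Python (the helper names are illustrative, not any library's API):

```python
import random

def causal_mask(n):
    """Causal (auto-regressive) attention mask: position i may attend
    only to positions <= i, as in GPT-style next-token prediction."""
    return [[j <= i for j in range(n)] for i in range(n)]

def denoising_corruption(tokens, mask_token="[MASK]", p=0.15, seed=1):
    """BERT-style corruption: replace a random fraction p of tokens
    with a mask token; the model is trained to reconstruct them."""
    rng = random.Random(seed)
    return [mask_token if rng.random() < p else t for t in tokens]

print(causal_mask(3))
# [[True, False, False], [True, True, False], [True, True, True]]
print(denoising_corruption("the cat sat on the mat".split()))
# ['[MASK]', 'cat', 'sat', 'on', 'the', 'mat']
```

Both objectives need no labels, which is what makes them self-supervised; but the causal mask only makes sense over ordered sequences of discrete symbols, consistent with the limitation noted above.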
“Sky Fortress” is a Ukrainian system of thousands of microphones, listening for the distinctive sounds of Russian low-flying UAVs such as the Shahed.
They are networked to machine-learning processing nodes that can send a truck with a .50-cal. gun to an intercept point.
1/3
@ylecun.bsky.social
Hi there, bsky dwellers!
Here is Messier 51, AKA the Whirlpool Galaxy.
Shot from a New Jersey backyard with a Celestron EdgeHD 11", 0.7 reducer, ASI6200MM monochrome camera, off-axis guider.
131 subs, 300 seconds each.
HSO mapped to RGB.
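The exposure figures above imply roughly eleven hours of total integration; a quick check (nothing here beyond the numbers in the post):

```python
# Total integration time: 131 sub-exposures at 300 seconds each.
subs, exposure_s = 131, 300
total_s = subs * exposure_s
print(total_s, "s =", round(total_s / 3600, 1), "hours")  # 39300 s = 10.9 hours
```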