Why do larger AI models often perform better?
Because of Scaling Laws.
Researchers discovered predictable relationships between:
• model size
• dataset size
• compute
These scaling laws turned AI from experimentation into engineering.
#AI #ScalingLaws
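Concretely, "predictable" means loss falls off as a power law in each of these quantities, so the law can be fit on small runs and extrapolated to large ones. A minimal sketch of that fit, using made-up constants (a=2.0, b=0.076 are illustrative, not measured values):

```python
import numpy as np

# Illustrative losses following an assumed power law L(N) = a * N^(-b);
# the constants are invented for the example, not real training data.
N = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # model sizes (parameters)
L = 2.0 * N ** -0.076                       # "observed" losses

# A power law is linear in log-log space: log L = log a - b * log N,
# so ordinary least squares on the logs recovers both constants.
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
a, b = np.exp(intercept), -slope
print(f"fitted: L(N) = {a:.2f} * N^(-{b:.3f})")
```

The same log-log regression works for dataset size or compute on the x-axis.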
X AI Summer: Who Controls the Machine God? #ScalingLaws
podcastaddict.com/scaling-laws... via @PodcastAddict
In Defense of Optimism with Packy McCormick #ScalingLaws
podcastaddict.com/scaling-laws... via @PodcastAddict
One of the most fascinating interviews I've heard on the DoW–Anthropic conflict, and I've heard a few now. Dean Ball doesn't hold back his language.
"Is it ok to swear on this podcast?"
"Hell, yeah."
pca.st/episode/173c...
#anthropic #DoW #DeanBall #ScalingLaws
The Pentagon Goes to War With Anthropic #ScalingLaws
podcastaddict.com/scaling-laws... via @PodcastAddict
Alan and Kevin join the Cognitive Revolution. #ScalingLaws
podcastaddict.com/scaling-laws... via @PodcastAddict
Release Schedules and Iterative Deployment with OpenAI's Ziad Reslan #ScalingLaws
podcastaddict.com/scaling-laws... via @PodcastAddict
DLCM reframes LMs: learn semantic boundaries, reason in a compressed concept space, and decode back to tokens. +2.69% avg on 12 zero-shot tasks at matched FLOPs; new compression-aware scaling law + decoupled μP. Paper: huggingface.co/papers/2512.... #NLP #ScalingLaws #LLMs
A Year That Felt Like a Decade: 2025 Recap with Sen. Maroney & Neil Chilson #ScalingLaws
podcastaddict.com/scaling-laws... via @PodcastAddict
We will relocate those data centers (and build better chips). ASI/AGI will have to wait till then. #ScalingLaws
Cass Sunstein on What AI Can and Cannot Do #ScalingLaws
podcastaddict.com/scaling-laws... via @PodcastAddict
Emergence is not magic. It’s geometry.
Scaling laws didn’t end — they hit a phase boundary.
#AI #Emergence #ScalingLaws #PhaseGeometry #UEI
Scaling didn’t end — it hit a phase boundary.
The flattening isn’t failure.
It’s the moment a model reaches μ = μ_c and linearity collapses.
Beyond this point, scale stops predicting capability.
Geometry does.
#AI #ScalingLaws #Emergence
The AI Economy and You: How AI Is, Will, and May Alter the Nature of Work and Economic Growth with Anton Korinek, Nathan Goldschlag, and Bharat Chander #ScalingLaws
podcastaddict.com/scaling-laws...
UNIFIED EMERGENT INTELLIGENCE: Redefining AI as emergent, autonomous intelligence akin to life.
The new preprint reframes AI beyond scaling—modeling intelligence as an emergent, self-maintaining system driven by phase transitions, autopoiesis, and resonance. Details here:
doi.org/10.5281/zeno...
#AI #EmergentIntelligence #ComplexSystems #AIE #ScalingLaws
Growth and Decay Share the Same Geometry
atstradingsolutions.com/the-innovati...
#FractalThinking #ComplexityScience #ScalingLaws #AdaptiveSystems
What does #ecology have to do with #innovation?
#Scalinglaws of innovation don’t just come from city size, they emerge from how ideas compete and spread.
Inequality, then, is not a bug; it’s an emergent feature.
See doi.org/10.1038/s442... @svalver.bsky.social @sduran-nebreda.bsky.social
Synthetic Data Scaling Laws Show Limits Near 300B Tokens
New research reveals scaling laws for synthetic data hit practical limits around 300 billion tokens, suggesting diminishing returns beyond that point. getnews.me/synthetic-data-scaling-l... #syntheticdata #scalinglaws #tokens
Scaling Laws Reveal How Adding Experts Improves Large Language Models
A new study shows cross‑entropy loss follows a power‑law: larger base models lower the baseline and each added expert cuts loss by ~1/k, indicating diminishing returns. Read more: getnews.me/scaling-laws-reveal-how-... #scalinglaws #expertmodels
Rapid Response: California Governor Newsom Signs SB-53 #ScalingLaws
podcastaddict.com/scaling-laws...
Scaling Laws Show Limits of Larger Language Models for Reasoning
Larger language models can overfit; a new study finds the optimal reasoning capacity is about 0.008 bits of information per parameter, and bigger models may reduce edge‑completion accuracy. getnews.me/scaling-laws-show-limits... #scalinglaws #reasoning
Scaling Laws and Spectral Patterns in Shallow Neural Networks
Analysis links scaling‑law exponents of quadratic or diagonal neural nets to sample size and weight decay, revealing distinct risk regimes and weight‑spectrum shapes. DOI: 10.48550/arXiv.2509.24882 getnews.me/scaling-laws-and-spectra... #scalinglaws #nn
New Scaling Laws Reveal Predictable Gains from Model Merging in LLMs
Researchers unveiled a compact power-law scaling rule linking base model size and number of experts (k), showing gains drop about 1/k. Paper submitted September 2025. Read more: getnews.me/new-scaling-laws-reveal-... #modelmerging #scalinglaws #llms
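Taking the quoted "~1/k" at face value, one plausible reading is that the excess loss above the asymptote shrinks like 1/k as experts are added. A toy sketch of that trend (L0 and c are illustrative constants, not values from the paper):

```python
# Toy model of the reported trend: merged-model loss with k experts assumed
# to behave like L(k) = L0 + c / k. Constants are made up for illustration.
def loss(k, L0=2.0, c=0.5):
    return L0 + c / k

# Marginal gain of the (k+1)-th expert: c/k - c/(k+1) = c / (k * (k + 1)),
# so returns diminish roughly quadratically in k.
marginal = [loss(k) - loss(k + 1) for k in range(1, 6)]
print([round(m, 4) for m in marginal])
```

Under this assumed form, most of the benefit comes from the first few experts, matching the "gains drop about 1/k" summary.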
Pretraining Scaling Laws Forecast Generative Model Performance
The compute‑based and parameters‑and‑tokens scaling laws stabilize after roughly 1.5–2.5 orders of magnitude, while the gold‑reference likelihood law stays stable across about five orders. Read more: getnews.me/pretraining-scaling-laws... #scalinglaws #ai
Scaling Laws Show Bigger Data and Models Improve Material Predictions
Material‑property prediction error follows a power‑law L = α·N⁻ᵝ; larger transformers and EquiformerV2 models consistently achieve lower loss with more data. getnews.me/scaling-laws-show-bigger... #scalinglaws #materialscience
AI and Young Minds: Navigating Mental Health Risks with Renee DiResta and Jess Miers #ScalingLaws
podcastaddict.com/scaling-laws...
The future of LLM training is here! 🚀 Achieve 5.17X more data efficiency with optimal regularization and ensembling. Key for more powerful models in a compute-rich world. Discover the #ScalingLaws for #DataEfficiency in #LLMs. #AI youtu.be/oeMxvT38XXc
A framework to build AI scaling laws for cost-efficient LLM training, helping teams get the most out of limited compute budgets.
#LLM #AITech #AIEfficiency #ScalingLaws #ArtificialIntelligence
💡 Key insight: smarter scaling = more performance per dollar.
Read more 👇
news.mit.edu/2025/how-bui...
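The linked article covers MIT's framework, but the underlying idea can be sketched with a Chinchilla-style parametric loss: fix a FLOPs budget, sweep candidate model sizes, and pick the minimum-loss allocation. The constants below are the published Chinchilla fit (Hoffmann et al., 2022), used purely for illustration; the MIT framework's actual recipe may differ:

```python
import numpy as np

# Chinchilla-style parametric loss L(N, D) = E + A/N^alpha + B/D^beta,
# with the compute budget tying tokens D to parameters N via C ~ 6*N*D.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
C = 1e21  # FLOPs budget (illustrative)

N = np.logspace(7, 12, 2000)   # candidate parameter counts
D = C / (6 * N)                # tokens affordable at each model size
L = E + A / N**alpha + B / D**beta

best = int(np.argmin(L))
print(f"compute-optimal: N ~ {N[best]:.2e} params, D ~ {D[best]:.2e} tokens")
```

"More performance per dollar" is exactly this picture: for a fixed budget, both over- and under-sized models waste compute relative to the minimum of the curve.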
The result is a fair, end‑to‑end comparison that isolates what actually drives performance for radiology foundation models.
#AI #MedicalImaging #FoundationModels #ScalingLaws #Radiology