The question isn't whether AI doom is likely.
It's whether the expected harm is significant enough to act on.
Given the math? The answer is clearly yes.
We don't need certainty to justify precaution. We need responsible risk assessment.
(7/7)
Posts by Daniel Arteaga
This same risk framework (expected value of harm) applies to non-catastrophic AI risks too:
- Algorithmic bias
- Economic displacement
- Privacy violations
- Misinformation at scale
It's the product of probability and harm that matters.
(6/7)
www.ibm.com/think/insigh...
Most AI researchers agree existential risk from AI has LOW probability.
But low ≠ zero.
And when we're talking about existential outcomes, non-zero probabilities require:
- Serious research
- Robust safety measures
- Contingency planning
(5/7)
en.wikipedia.org/wiki/Safety_...
We have precedent for this thinking.
Before launching the Large Hadron Collider, CERN seriously studied scenarios like micro black holes destroying Earth.
Did physicists think it would happen? No. But the potential harm was so large, they HAD to investigate. (4/7)
en.wikipedia.org/wiki/Strange...
If potential harm approaches infinity, only a truly negligible probability makes the overall risk acceptable.
Low probability ≠ no risk when the stakes are existential.
(3/7)
en.wikipedia.org/wiki/Risk_as...
What matters is: Probability × Magnitude of Harm = Risk
When harm could be civilization-ending, even tiny probabilities demand serious attention.
(2/7)
en.wikipedia.org/wiki/Risk_ma...
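The formula above can be made concrete with a toy expected-harm calculation (all figures are hypothetical, chosen purely for illustration):

```python
# Toy expected-harm comparison (all numbers hypothetical, for illustration only).
# Risk = probability x magnitude of harm.

risks = {
    # name: (probability, harm in arbitrary "harm units")
    "high-probability, low-harm": (0.5, 1e3),
    "low-probability, catastrophic": (1e-4, 1e9),
}

for name, (p, harm) in risks.items():
    print(f"{name}: expected harm = {p * harm:,.0f}")

# The catastrophic scenario dominates despite its tiny probability:
# 0.5 * 1e3 = 500, while 1e-4 * 1e9 = 100,000.
```

The point of the sketch: once harm grows large enough, a small probability no longer makes the expected harm small.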
The AI existential risk debate is often framed wrong. People focus on "how likely is this?" But that's not the right question.
(1/7) 🧵
en.wikipedia.org/wiki/Existen...
🏆 It's a pleasure to announce the winner of the Dolby Barcelona Scientific Paper Award 2025: Guillem Cortès Sebastià for PeakNetFP, a scalable neural audio fingerprinting system robust to extreme time stretching.
Details: professional.dolby.com/legal/barcel...
Am I missing something?
Yet in AI research (which is essentially statistical modeling) we routinely abandon these basic practices. The irony is striking.
In other scientific fields (natural and social sciences), proper statistical analysis is fundamental. You simply cannot publish without it.
There's also the added problem that metrics often don't correlate with perception. A 0.1 dB SDR improvement might be meaningless perceptually. But this issue has been discussed more often than the statistical rigor problem.
❌ Claims of "superior performance" based on point estimates alone
Example: Paper A reports 15.21 dB, Paper B reports 15.01 dB. Is this difference meaningful or just noise? Do those decimal places have any meaning? Usually impossible to tell from the paper.
❌ Values without error bars/confidence intervals
❌ Standard deviations sometimes quoted, but no standard errors or uncertainty estimates for the means
❌ No significance testing whatsoever
❌ No effect size analysis
❌ No exploratory analysis beyond the mean
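As a minimal sketch of the kind of analysis these points call for, here is a paired bootstrap confidence interval for a per-track SDR difference between two systems. The scores below are synthetic stand-ins; in practice you would use the actual per-item metric values from both systems on the same test set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-track SDR scores (dB) for two systems evaluated on the same 50 tracks.
# In a real paper these would be the actual per-item evaluation results.
sdr_a = rng.normal(15.21, 1.0, size=50)
sdr_b = rng.normal(15.01, 1.0, size=50)

diff = sdr_a - sdr_b  # paired differences, one per track

# Paired bootstrap: resample tracks with replacement, recompute the mean difference.
boot = rng.choice(diff, size=(10_000, diff.size), replace=True).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"mean difference: {diff.mean():.2f} dB, 95% CI [{lo:.2f}, {hi:.2f}]")
# If the interval contains 0, a 0.2 dB "improvement" is not distinguishable
# from noise at this sample size.
```

The same resampled differences also support a significance test and an effect size (e.g. the mean difference divided by the standard deviation of the paired differences), at essentially no extra cost.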
We're not even applying methods from first-year undergraduate physics—like reporting results with error bars. The problems I regularly see would make any physics professor cringe.
Coming from a physics background and being familiar with research methodology in fields like neuroscience, I'm struck by the lack of statistical practices in ML/AI papers. Most papers fail at basic statistical analysis that would be expected in other scientific fields. 🧵
At @waspaa.com in Lake Tahoe 🏔️ — great talks, great atmosphere, and maybe a bear or two 🐻.
Happy to present today our work on generative AI for RIR!
Dolby Barcelona Scientific Paper Award 2025 - deadline extended to 30th September due to ICASSP paper submission overlap.
Final call!
Details: professional.dolby.com/legal/barcelona-scientific-paper-award-2025/
Just 11 days left until the Sept 15 deadline and we've already received some excellent submissions.
Still working on yours? Don't wait: showcase your 2024-2025 sound research now.
New paper!
We're introducing a new way to generate realistic room impulse responses not from room geometry, but by directly controlling acoustic parameters like reverb time and direct-to-reverb ratio.
🔗 Demo: silviaarellanogarcia.github.io/rir-acoustic/
📄 Paper: arxiv.org/pdf/2507.12136
This work was the result of Silvia Arellano's internship in Dolby Barcelona with us.
Come explore the demo here:
🔗 silviaarellanogarcia.github.io/rir-acoustic/
📄 Paper: arxiv.org/pdf/2507.12136
Feedback & questions welcome!
We explore 4 DAC-based models:
1️⃣ AR w/ cross-attention
2️⃣ AR w/ classifier guidance
3️⃣ MaskGIT w/ adaptive layer norm
4️⃣ Flow matching
The MaskGIT model achieves the best subjective quality (avg. 70 MUSHRA score), outperforming state-of-the-art baselines.
Instead of simulating room geometry, we train four different generative models to produce RIRs conditioned on acoustic attributes (T30, T15, EDT, D50, C80, source-receiver distance).
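For readers unfamiliar with these attributes: T30 is typically estimated from an RIR via Schroeder backward integration. This is a generic textbook sketch on a synthetic exponential decay, not the paper's method:

```python
import numpy as np

fs = 16_000
t = np.arange(fs) / fs  # 1 second of samples

# Synthetic RIR: exponentially decaying noise with a known 0.5 s reverberation time.
t60_true = 0.5
rir = np.random.default_rng(1).normal(size=fs) * 10 ** (-3 * t / t60_true)

# Schroeder backward integration gives the energy decay curve (EDC) in dB.
edc = np.cumsum(rir[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# T30: fit the decay between -5 dB and -35 dB, extrapolate to -60 dB.
i5 = np.argmax(edc_db <= -5)
i35 = np.argmax(edc_db <= -35)
slope = (edc_db[i35] - edc_db[i5]) / (t[i35] - t[i5])  # dB per second
t30 = -60 / slope

print(f"estimated T30: {t30:.2f} s (true T60 = {t60_true} s)")
```

T15, EDT, D50, and C80 are computed from the same energy decay with different limits or energy ratios.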
New paper!
We're introducing a new way to generate realistic room impulse responses not from room geometry, but by directly controlling acoustic parameters like reverb time and direct-to-reverb ratio.
🔗 Demo: silviaarellanogarcia.github.io/rir-acoustic/
📄 Paper: arxiv.org/pdf/2507.12136
I'm happy to share that in the Dolby office in Barcelona we are offering an award for outstanding scientific papers in sound research. Open to students and early-career researchers with connections to Catalonia.
Deadline 15th September.
professional.dolby.com/legal/barcel...
Last winner, Eloi Moliner, pioneered diffusion models in AI for audio restoration. Could you be next?
arxiv.org/abs/2210.15228
The key issue isn't the most likely outcome — it's the worst-case scenario we must be prepared for.
arxiv.org/abs/2401.02843