
Posts by Daniel Arteaga


The question isn't whether AI doom is likely.

It's whether the expected harm is significant enough to act on.

Given the math? The answer is clearly yes.

We don't need certainty to justify precaution. We need responsible risk assessment.

(7/7)

4 months ago
10 AI dangers and risks and how to manage them | IBM A closer look at 10 dangers of artificial intelligence and actionable risk management strategies to consider today.

This same risk framework (expected value of harm) applies to non-catastrophic AI risks too:

- Algorithmic bias
- Economic displacement
- Privacy violations
- Misinformation at scale

It's the product of probability and harm that matters.

(6/7)

www.ibm.com/think/insigh...

4 months ago
Safety engineering - Wikipedia

Most AI researchers agree existential risk from AI has LOW probability.

But low ≠ zero.

And when we're talking about existential outcomes, non-zero probabilities require:

- Serious research
- Robust safety measures
- Contingency planning

(5/7)

en.wikipedia.org/wiki/Safety_...

4 months ago
Strangelet - Wikipedia

We have precedent for this thinking.

Before launching the Large Hadron Collider, CERN seriously studied scenarios like micro black holes destroying Earth.

Did physicists think it would happen? No. But the potential harm was so large, they HAD to investigate. (4/7)

en.wikipedia.org/wiki/Strange...

4 months ago
Risk assessment - Wikipedia

If potential harm approaches infinity, only a truly negligible probability makes the overall risk acceptable.

Low probability ≠ no risk when the stakes are existential.

(3/7)

en.wikipedia.org/wiki/Risk_as...

4 months ago
Risk matrix - Wikipedia

What matters is: Probability × Magnitude of Harm = Risk

When harm could be civilization-ending, even tiny probabilities demand serious attention.
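A toy sketch of that arithmetic (every number here is invented, purely to illustrate the framing, not a real estimate of anything):

```python
# Toy expected-harm comparison. All probabilities and harm values
# are invented for illustration; nothing here is a real estimate.

def risk(probability: float, harm: float) -> float:
    """Risk as expected harm: probability times magnitude of harm."""
    return probability * harm

# A mundane risk: fairly likely, but the harm is small.
mundane = risk(probability=0.5, harm=10.0)

# A catastrophic risk: very unlikely, but the harm is enormous.
catastrophic = risk(probability=1e-6, harm=1e9)

print(mundane)
print(catastrophic)
# The catastrophic case dominates despite the tiny probability.
```

Even with a probability a million times smaller, the catastrophic scenario carries the larger expected harm — that asymmetry is the point.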

(2/7)

en.wikipedia.org/wiki/Risk_ma...

4 months ago
Existential risk from artificial intelligence - Wikipedia

The AI existential risk debate is often framed wrong. People focus on "how likely is this?" But that's not the right question.

(1/7) 🧵

en.wikipedia.org/wiki/Existen...

4 months ago
Dolby Barcelona Scientific Paper Award 2025 - Dolby Professional

🏆 It's a pleasure to announce the winner of the Dolby Barcelona Scientific Paper Award 2025: Guillem Cortès Sebastià for PeakNetFP, a scalable neural audio fingerprinting system robust to extreme time stretching.

Details: professional.dolby.com/legal/barcel...

5 months ago

Am I missing something?

5 months ago

Yet in AI research (which is essentially statistical modeling), we routinely abandon these basic practices. The irony is striking.

5 months ago

In other scientific fields (natural and social sciences), proper statistical analysis is fundamental. You simply cannot publish without it.

5 months ago

There's also the added problem that metrics often don't correlate with perception. A 0.1 dB SDR improvement might be meaningless perceptually. But this issue has been discussed more often than the statistical rigor problem.

5 months ago

❌ Claims of "superior performance" based on point estimates alone

Example: Paper A reports 15.21 dB, Paper B reports 15.01 dB. Is this difference meaningful or just noise? Do those decimal places have any meaning? Usually impossible to tell from the paper.
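A sketch of the minimal analysis such a comparison needs. The per-track scores are invented, and a normal-approximation 95% CI stands in for fancier methods:

```python
import statistics
from math import sqrt

# Invented per-track SDR scores (dB) for two hypothetical systems.
# A real paper would have these; readers usually never see them.
system_a = [15.8, 14.9, 15.3, 14.7, 15.6, 15.0, 15.2, 15.2]
system_b = [15.1, 14.8, 15.4, 14.6, 15.3, 14.9, 15.0, 15.0]

def mean_ci(scores, z=1.96):
    """Mean with an approximate 95% confidence interval (normal approx.)."""
    m = statistics.mean(scores)
    sem = statistics.stdev(scores) / sqrt(len(scores))
    return m, m - z * sem, m + z * sem

mean_a, lo_a, hi_a = mean_ci(system_a)
mean_b, lo_b, hi_b = mean_ci(system_b)

print(f"A: {mean_a:.2f} dB, 95% CI [{lo_a:.2f}, {hi_a:.2f}]")
print(f"B: {mean_b:.2f} dB, 95% CI [{lo_b:.2f}, {hi_b:.2f}]")
```

With these numbers the means are 15.21 and 15.01 dB, yet the intervals overlap heavily — exactly the situation where a bare point estimate misleads.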

5 months ago

❌ Values without error bars/confidence intervals
❌ Standard deviations sometimes quoted but no uncertainty estimates of means
❌ No significance testing whatsoever
❌ No effect size analysis
❌ No exploratory analysis beyond the mean
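The last points are cheap to fix. For instance, an effect-size check is a few lines (scores invented for illustration; Cohen's d with a pooled standard deviation is one common choice):

```python
import statistics
from math import sqrt

# Invented per-item scores for a baseline and a proposed model.
baseline = [12.1, 11.8, 12.5, 11.9, 12.3, 12.0]
proposed = [12.4, 12.0, 12.9, 12.2, 12.6, 12.3]

def cohens_d(x, y):
    """Standardized mean difference with a pooled sample std."""
    pooled = sqrt(((len(x) - 1) * statistics.stdev(x) ** 2 +
                   (len(y) - 1) * statistics.stdev(y) ** 2)
                  / (len(x) + len(y) - 2))
    return (statistics.mean(y) - statistics.mean(x)) / pooled

d = cohens_d(baseline, proposed)
print(f"Cohen's d = {d:.2f}")  # ~1.0, conventionally a 'large' effect
```

An effect size tells you whether an improvement is practically meaningful, which a p-value alone does not.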

5 months ago

We're not even applying methods from first-year undergraduate physics—like reporting results with error bars. The problems I regularly see would make any physics professor cringe.

5 months ago

Coming from a physics background and being familiar with research methodology in fields like neuroscience, I'm struck by the lack of statistical practices in ML/AI papers. Most papers fail at basic statistical analysis that would be expected in other scientific fields. 🧵

5 months ago

At @waspaa.com in Lake Tahoe 🏔️ — great talks, great atmosphere, and maybe a bear or two 🐻.

Happy to be presenting our work on generative AI for RIRs today!

6 months ago

Dolby Barcelona Scientific Paper Award 2025 - deadline extended to 30th September due to ICASSP paper submission overlap.

Final call!

Details: professional.dolby.com/legal/barcelona-scientific-paper-award-2025/

7 months ago

Just 11 days left until the Sept 15 deadline and we've already received some excellent submissions.

Still working on yours? Don't wait: showcase your 2024-2025 sound research now.

7 months ago

New paper!
We're introducing a new way to generate realistic room impulse responses: not from room geometry, but by directly controlling acoustic parameters like reverb time and direct-to-reverb ratio.

🔗 Demo: silviaarellanogarcia.github.io/rir-acoustic/
📄 Paper: arxiv.org/pdf/2507.12136

9 months ago

This work was the result of Silvia Arellano's internship in Dolby Barcelona with us.

Come explore the demo here:
🔗 silviaarellanogarcia.github.io/rir-acoustic/
📄 Paper: arxiv.org/pdf/2507.12136

Feedback & questions welcome!

9 months ago

We explore 4 DAC-based models:
1️⃣ AR w/ cross-attention
2️⃣ AR w/ classifier guidance
3️⃣ MaskGIT w/ adaptive layer norm
4️⃣ Flow matching

The MaskGIT model achieves the best subjective quality (avg. MUSHRA score of 70), beating state-of-the-art baselines.

9 months ago

Instead of simulating room geometry, we train four different generative models to produce RIRs conditioned on acoustic attributes (T30, T15, EDT, D50, C80, source-receiver distance).

9 months ago
Dolby Barcelona Scientific Paper Award 2025 - Dolby Professional

I'm happy to share that in the Dolby office in Barcelona we are offering an award for outstanding scientific papers in sound research. Open to students and early-career researchers with connections to Catalonia.

Deadline 15th September.

professional.dolby.com/legal/barcel...

10 months ago
Solving Audio Inverse Problems with a Diffusion Model This paper presents CQT-Diff, a data-driven generative audio model that can, once trained, be used for solving various different audio inverse problems in a problem-agnostic setting. CQT-Diff is a neu...

Last winner, Eloi Moliner, pioneered diffusion models for audio restoration. Could you be next?

arxiv.org/abs/2210.15228

10 months ago

The key issue isn't the most likely outcome — it's the worst-case scenario we must be prepared for.

arxiv.org/abs/2401.02843

10 months ago
International Atomic Energy Agency | Atoms for Peace and Development The IAEA is the world's centre for cooperation in the nuclear field, promoting the safe, secure and peaceful use of nuclear technology. It works in a wide range of areas including energy generation, h...

Just as nuclear research is subject to international oversight, frontier AI development should be too. We need strong global regulatory frameworks for models with potentially vast power.

www.iaea.org

10 months ago
Activating AI Safety Level 3 Protections We have activated the AI Safety Level 3 (ASL-3) Deployment and Security Standards described in Anthropic’s Responsible Scaling Policy (RSP) in conjunction with launching Claude Opus 4. The ASL-3 Secur...

Anthropic deserves credit for its serious commitment to LLM safety — but where are similar efforts from other big tech players like Google or OpenAI?

🔗 www.anthropic.com/news/activat...

10 months ago