We've heard you! The time after ICASSP is feeling tight for many, and thanks to a very strong reviewer pool, we can reduce the review load and shorten the review period.
We are thus happy to announce a one-week extension 🤗
New #WASPAA2025 deadlines:
April 30: First submission
May 7: Final submission
Posts by Zachary Novack
Titans for Titans when @urinieto.bsky.social 🤣
Hyped that 3/3 papers w/the folks
@ucsd-musaic.bsky.social
are accepted at #ICASSP2025!
PDMX: Public Domain Symbolic Music arxiv.org/abs/2409.10831
CoLLAP: Long-Context CLAP (~5 min) arxiv.org/abs/2410.02271
FUTGA-MIR: long music understanding for MIR tasks (arxiv soon)
Next stop, India!🇮🇳
new paper! 🗣️Sketch2Sound💥
Sketch2Sound can create sounds from sonic imitations (e.g., a vocal imitation or a reference sound) via interpretable, time-varying control signals.
paper: arxiv.org/abs/2412.08550
web: hugofloresgarcia.art/sketch2sound
Blog post link: diffusionflow.github.io/
Although the two frameworks seem similar, there is some confusion in the community about their exact connection. We aim to clear up the confusion by showing how to convert one framework into the other, for both training and sampling.
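As a minimal numeric sketch of one such conversion (assuming the linear, rectified-flow-style path z_t = (1 - t)·x + t·ε; variable names here are illustrative, not from the blog post): the flow-matching velocity target and the diffusion-style noise/data parameterizations are related by simple identities at each t.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)    # clean data sample
eps = rng.normal(size=8)  # Gaussian noise
t = 0.3

# Linear interpolation path: z_t = (1 - t) * x + t * eps
z = (1 - t) * x + t * eps

# Flow-matching velocity target for this path: v = d z_t / d t = eps - x
v = eps - x

# Given (z, v) at time t, recover the noise- and data-parameterizations:
#   eps = z + (1 - t) * v,   x = z - t * v
assert np.allclose(z + (1 - t) * v, eps)
assert np.allclose(z - t * v, x)
```

So a model trained to predict v can be read as predicting ε (or x) after an affine change of variables, which is the kind of training/sampling correspondence the post refers to.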
We just created a Bluesky starter pack featuring people and groups working at the intersection of AI and music, covering both symbolic and audio approaches. Let us know if you'd like to be added or removed!
go.bsky.app/PBvFCxa
this is sick! would love to be added, as a controllable + accelerated diffusion fan (mostly for audio/music) 🎸
This is awesome! Could I be added?