
Posts by Supanut Thanasilp

If you are looking for something fun to read early in the week, here is our recent fun work studying different initialization strategies in quantum generative models. That, and the meme is great!

4 weeks ago

Please send in your abstract, register, and join us. We look forward to seeing you there!

1 month ago

Whether your focus is quantum algorithms, quantum machine learning, error correction, or anything in between, we welcome your contributions. Don't miss out on coming to Thailand to present your work, seek new collaborations, and get inspired - all while enjoying a relaxed, beachside scientific environment.

1 month ago

Zoë Holmes (EPFL)
Maria Schuld
Hakan Tureci (Princeton)
Zoltán Zimborás (University of Helsinki)
Francesco Tacchino (IBM Research - Zurich)
Mio Murao & Ryuji Takagi (The University of Tokyo)
Martin Larocca (LANL)
Kavan Modi (SUTD)
... and many more leading minds from the global quantum community!

1 month ago

🚨 URGENT REMINDER: The deadline for extended abstract submission for contributed talks (and abstract submission for posters) is this Friday, Feb 27! [BUT realistically Supanut will only check on Monday]

We are excited to feature an incredible lineup of invited speakers, including:

1 month ago

Last call for abstracts! Join us for Quantum Information by the beach in Thailand ⚛️ 🏖️🇹🇭

The 2nd International Conference on Siam Quantum Science and Technology (SQST 2026) is happening May 18–21, 2026, in beautiful Jomtien, Chonburi, Thailand! 🌊

🔗 Abstract Submission & Further Info: www.sqst2026.org

1 month ago

If you ever wonder during the night whether you have forgotten the effect of shot-noise in your BP-free strategy analysis ... maybe this could help 😁 Also, congrats to @reyhanehaghaeisaem.bsky.social for her first work 🙌🥳

8 months ago

Congrats Kasidit on his first arXiv paper 🙌🥳 Such a talented and hard-working master's student. He's surely going to do amazing things in the quantum world ⚛️

9 months ago

Thanks so much to my co-authors Weijie Xiong, @qzoeholmes.bsky.social, @aangrisani.bsky.social, Yudai Suzuki, and @thipchotibut.bsky.social. It's been real fun working with you all 😃🙌

Also, special thanks to @mvscerezo.bsky.social and Martin Larocca for their valuable insights on correlated Haar random unitaries 🌮

11 months ago

So yes, big question for future QRP design: how to pick your circuit depth or interaction time so that you remain powerful without going full random.

You want that “just right” level of chaos: enough to get expressive states, not so much that it all washes out.

11 months ago

Episode 4: A New Hope

Not everything is gloom and doom. We found that for moderate scrambling (like shallow random circuits or chaotic Ising with short evolution), you don’t get lethal exponential concentration.
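To see this gap numerically, here is a minimal NumPy sketch (our own toy setup, not the paper's analysis; all names and parameters here are illustrative). It compares the spread of ⟨Z₁⟩ over fully Haar-random 6-qubit states ("extreme scrambling") against states made by a single layer of independent 2-qubit Haar gates ("moderate scrambling"): the shallow layer keeps an n-independent spread, while the global scramble concentrates.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Haar-random unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def z1_exp(psi, n):
    """<Z> on qubit 1 for an n-qubit pure state vector."""
    z1 = np.kron([1.0, -1.0], np.ones(2 ** (n - 1)))
    return float(np.sum(z1 * np.abs(psi) ** 2))

n, samples = 6, 300
deep, shallow = [], []
for _ in range(samples):
    # "Extreme scrambling": one global Haar unitary on all n qubits.
    deep.append(z1_exp(haar_unitary(2 ** n)[:, 0], n))
    # "Moderate scrambling": one layer of independent 2-qubit Haar gates.
    psi = haar_unitary(4)[:, 0]
    for _ in range(n // 2 - 1):
        psi = np.kron(psi, haar_unitary(4)[:, 0])
    shallow.append(z1_exp(psi, n))

print(np.var(deep), np.var(shallow))  # global scrambling concentrates far harder
```

The shallow layer's spread is set by a 2-qubit block (variance ≈ 0.2 regardless of n), whereas the global Haar spread shrinks like 1/2ⁿ.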

11 months ago
[Image: a close-up of Thanos' face in Avengers: Infinity War]

Episode 3: Noise erases memo...

We also studied QRP under local unital or non-unital noise. While some works argue that dissipation can be a resource for QRP, we prove that noise also forces your reservoir to forget states from the distant past exponentially quickly.

11 months ago
Post image

Episode 2: Oh what! I forgot now

We prove that in extreme-scrambling QRPs, old inputs or initial states get forgotten exponentially fast (in both time steps and system size!). Too much scrambling -> you effectively "MIB" zap each past input.
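A toy illustration of this forgetting (our own sketch, not the paper's construction; every name here is ours): a fixed Haar-random unitary stands in for an extreme-scrambling reservoir, a fresh input qubit is injected and discarded each step, and the trace distance between two runs started from very different reservoir states shrinks step by step.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(d):
    """Haar-random unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def step(rho_res, rho_in, U):
    """Couple a fresh input qubit to the reservoir, scramble with the FIXED U, discard the input."""
    joint = U @ np.kron(rho_in, rho_res) @ U.conj().T
    d_in, d_res = rho_in.shape[0], rho_res.shape[0]
    return np.einsum('ijik->jk', joint.reshape(d_in, d_res, d_in, d_res))

def trace_distance(a, b):
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(a - b))))

d_res = 8                                      # 3-qubit toy reservoir
U = haar_unitary(2 * d_res)                    # same scrambling unitary every step
rho_in = np.diag([1.0, 0.0]).astype(complex)   # identical input stream for both runs

rho_a = np.zeros((d_res, d_res), complex); rho_a[0, 0] = 1.0   # pure start
rho_b = np.eye(d_res, dtype=complex) / d_res                   # maximally mixed start

dists = [trace_distance(rho_a, rho_b)]
for _ in range(8):
    rho_a, rho_b = step(rho_a, rho_in, U), step(rho_b, rho_in, U)
    dists.append(trace_distance(rho_a, rho_b))
print(dists)  # the reservoir's memory of its initial state fades step by step
```

Since the same channel acts on both runs, the distance can never grow (data processing), and under heavy scrambling it generically decays fast.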

11 months ago

Hence our new results show that, while chaotic (extreme-scrambling) reservoirs are fine for processing information in small setups, as people have studied, they suffer from scalability issues in larger models, doomed by their own chaoticity.

11 months ago

Episode 1: Scalability barrier

Based on the unrolled form, we prove exponential concentration of the QRP output. In a large-scale setting, the trained QRP model becomes input-insensitive, leading to poor generalization despite trainability guarantees.
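Here is a minimal numerical sketch of this kind of concentration (our own toy setup, not the paper's; the function names and sample counts are illustrative): estimate the variance, over Haar-random scrambling unitaries, of ⟨Z₁⟩ on a scrambled |0…0⟩ for 2 vs 6 qubits. The spread collapses roughly like 1/2ⁿ as the system grows.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(d):
    """Haar-random unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def z1_spread(n_qubits, n_samples=200):
    """Variance over random scramblers of <Z_1> after scrambling |0...0>."""
    d = 2 ** n_qubits
    z1 = np.kron([1.0, -1.0], np.ones(d // 2))       # diagonal of Z on qubit 1
    vals = [float(np.sum(z1 * np.abs(haar_unitary(d)[:, 0]) ** 2))
            for _ in range(n_samples)]
    return float(np.var(vals))

var_small = z1_spread(2)   # 2 qubits: outputs still vary noticeably
var_large = z1_spread(6)   # 6 qubits: outputs concentrate around the mean
print(var_small, var_large)
```

For a traceless observable the exact Haar variance is Tr(O²)/(d(d+1)) ≈ 1/d, so the 6-qubit spread sits over an order of magnitude below the 2-qubit one.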

11 months ago

To address this challenge, we apply tensor-diagram techniques to unroll the multi-step QRP into a single high-moment Haar integral in a larger dimension, amenable to scalability and memory analysis.

11 months ago

Episode 0: Temporal correlation hinders standard analytical techniques.

While related techniques already establish scalability barriers for other quantum models, the QRP protocol is much more demanding: a fixed reservoir repeatedly interleaves with a stream of input time-series.

11 months ago

Our key messages can be summarized as follows:

🎯 Big scrambling in quantum reservoirs helps at small sizes but kills input-sensitivity at large scale
🎯 Memory of older states decays exponentially (in both time steps and system size!)
🎯 Noise can make us forget even faster

11 months ago

The QRP model processes an input time series of quantum states. Here we model the extreme-scrambling reservoir as an instance drawn from a high-order-design unitary ensemble.

11 months ago

Once upon a time, a myth in Quantum Reservoir Processing (QRP) went: "more chaos = richer feature map = better."

Doomed by their own chaotic dynamics, QRPs may not scale in the extreme scrambling limit.

Check out our new Star Wa… I mean paper on arxiv: scirate.com/arxiv/2505.1...

11 months ago