
Posts by Viet Anh Khoa Tran

At #NeurIPS in San Diego and interested in #NeuroAI? Today in the 🕚 11am-2pm session, I will present our work on brain-inspired continual learning via TMCL (📍 Poster 2206), with Emre Neftci and @dlg-ai.bsky.social.

Say hi or DM me if you want to chat about continual, local or modular learning!

4 months ago
Paper2Song - Hear Every NeurIPS 2025 Paper as a Song: Hear research differently. 5000+ NeurIPS 2025 papers transformed into songs. Search by topic, discover ideas through music, and share what inspires you.

The Dendritic Learning Group at @fz-juelich.de is now on Bluesky. What better way to show what we do than to share this AI-generated song about our NeurIPS paper on brain-inspired continual learning?

paper2song.com/virtual/2025...

4 months ago

👋 At #SfN25? Interested in how head-direction cells anchor to visual cues?

🧠 Come visit me Tuesday morning (8am - 12pm)
📍 Poster RR15

showing work w/ @adrian-du.bsky.social on parallax in PoSub and what we learn from it for visual cue integration. Come say hi or DM for a coffee chat! ☕️ #Neurosky

5 months ago

No, the loss is complementary to the traditional view-invariant contrastive loss, and we also use augmentations for the modulation-invariant positives.

And this is with end-to-end backprop (for now).

5 months ago

Thanks Guillaume!

Exactly, only the feedforward params are learned during contrastive learning, and we "replay" different, frozen modulations for different positives, as we expect that an unlabeled class-c sample would yield an is-c positive under modulation c, and an is-not-c' positive under modulation c'.
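
A minimal sketch of this mechanic (hypothetical names, and a multiplicative per-feature gain is an assumption, not the paper's actual code): two augmented views of the same unlabeled sample are passed through different frozen class modulations and treated as a positive pair, so only the feedforward encoder receives gradients.

```python
# Sketch: frozen class modulations "replayed" as contrastive positives.
# Assumption: modulations act as per-feature gains of shape (num_classes, dim);
# illustrative only, not the paper's implementation.
import torch
import torch.nn.functional as F

def replayed_modulation_loss(encoder, modulations, x_aug1, x_aug2, temperature=0.1):
    """Two augmented views of the same sample, each under a different frozen
    modulation, form a positive pair; other batch samples are negatives."""
    c1, c2 = torch.randint(modulations.size(0), (2,))
    m1 = modulations[c1].detach()                   # modulations stay frozen:
    m2 = modulations[c2].detach()                   # only the encoder is updated
    z1 = F.normalize(encoder(x_aug1) * m1, dim=-1)  # view 1 under modulation c1
    z2 = F.normalize(encoder(x_aug2) * m2, dim=-1)  # view 2 under modulation c2
    logits = z1 @ z2.T / temperature                # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)         # positives on the diagonal
```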

5 months ago

Our preprint has been accepted at #NeurIPS2025! 🎉

I will be presenting TMCL in just two weeks at the #BernsteinConference. Hope to see some of you there! @bernsteinneuro.bsky.social

Many thanks to my advisor Willem Wybo, and to Emre Neftci for the great support.

7 months ago
(SW2025) From molecules to networks: The dendritic processes that shape learning and memory – Bernstein Netzwerk Computational Neuroscience

Björn Kampa, Willem Wybo and I have organized an exciting symposium at the upcoming #BernsteinConference:
bernstein-network.de/bernstein-co...
Everything about Dendrites: Check it out!

7 months ago
Contrastive Consolidation of Top-Down Modulations Achieves Sparsely Supervised Continual Learning: Biological brains learn continually from a stream of unlabeled data, while integrating specialized information from sparsely labeled examples without compromising their ability to generalize. Meanwhil...

This is research from the new Dendritic Learning Group at PGI-15 (@fz-juelich.de).
A huge thanks to my supervisor Willem Wybo and our institute head Emre Neftci!
📄 Preprint: arxiv.org/abs/2505.14125
🚀 Project page: ktran.de/papers/tmcl/

Supported by (@fzj-jsc.bsky.social) and WestAI.
(6/6)

10 months ago

This research opens up an exciting possibility: predictive coding as a fundamental cortical learning mechanism, guided by area-specific modulations that act as high-level control over the learning process. (5/6)

10 months ago

Furthermore, we can dynamically adjust the stability-plasticity trade-off by adapting the strength of the modulation invariance term. (4/6)
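
As a rough illustration (the weighting scheme and the name lambda_mod are assumptions, not the paper's notation), the trade-off can be pictured as a single coefficient on the modulation-invariance term:

```python
import torch

def combined_loss(view_inv_loss: torch.Tensor,
                  mod_inv_loss: torch.Tensor,
                  lambda_mod: float = 0.5) -> torch.Tensor:
    """Hypothetical combination of the two objectives: a larger lambda_mod
    emphasizes modulation invariance (stability, protecting consolidated
    class structure), a smaller one emphasizes view-invariant learning on
    new data (plasticity)."""
    return view_inv_loss + lambda_mod * mod_inv_loss
```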

10 months ago

Key finding: With only 1% labels, our method outperforms comparable continual learning algorithms both on the continual task and when transferred to other tasks.
Therefore, we continually learn generalizable representations, unlike conventional, class-collapsing methods (e.g. Cross-Entropy). (3/6)

10 months ago

Feedforward weights learn via view-invariant self-supervised learning, mimicking predictive coding. Top-down class modulations, informed by new labels, orthogonalize same-class representations. These are then consolidated into the feedforward pathway through modulation invariance. (2/6)
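
For the first ingredient, a generic view-invariance sketch (a SimCLR-style loss shown here as an assumption about the form of the objective, not the paper's exact one):

```python
import torch
import torch.nn.functional as F

def view_invariance_loss(encoder, x_aug1, x_aug2, temperature=0.1):
    """Two augmented views of the same image are a positive pair for the
    feedforward encoder; the other samples in the batch act as negatives."""
    z1 = F.normalize(encoder(x_aug1), dim=-1)
    z2 = F.normalize(encoder(x_aug2), dim=-1)
    logits = z1 @ z2.T / temperature              # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)       # matching views on the diagonal
```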

10 months ago

New #NeuroAI preprint on #ContinualLearning!

Continual learning methods struggle in mostly unsupervised environments with sparse labels (e.g. parents telling their child the object is an 'apple').
We propose that in the cortex, predictive coding of high-level top-down modulations solves this! (1/6)

10 months ago