At #NeurIPS in San Diego and interested in #NeuroAI? Today in the 11am-2pm session, I will present our work on brain-inspired continual learning via TMCL (Poster 2206), with Emre Neftci and @dlg-ai.bsky.social.
Say hi or DM me if you want to chat about continual, local or modular learning!
The Dendritic Learning Group at @fz-juelich.de is now on Bluesky. What better way to show what we do than to share this AI-generated song about our NeurIPS paper on brain-inspired continual learning?
paper2song.com/virtual/2025...
At #SfN25? Interested in how head-direction cells anchor to visual cues?
Come visit me Tuesday morning (8am - 12pm)
Poster RR15
showing work w/ @adrian-du.bsky.social on parallax in PoSub and what we learn from it for visual cue integration. Come say hi or DM for a coffee chat! #Neurosky
No, the loss is complementary to the traditional view-inv. contrastive loss, and we also use augmentations for the modulation-inv. positives.
And this is with end-to-end backprop (for now).
Thanks Guillaume!
Exactly, only the feedforward params are learned during contrastive learning, and we "replay" different, frozen modulations for different positives, as we expect that an unlabeled class-c sample would yield an is-c positive under modulation c, and an is-not-c' positive under modulation c'.
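If it helps to see the idea in code, here is a toy PyTorch sketch (ff_net, frozen_mods, and the modulation as a simple elementwise gain are illustrative placeholders, not our actual implementation):

```python
import torch
import torch.nn.functional as F

def modulation_replay_positives(ff_net, x, frozen_mods):
    # Toy sketch: build positives by "replaying" frozen class modulations
    # on the feedforward representation of one unlabeled sample.
    # Only ff_net's parameters receive gradients; the stored modulation
    # vectors are detached (frozen).
    z = ff_net(x)  # (B, D) feedforward features
    return [F.normalize(z * m.detach(), dim=-1) for m in frozen_mods]

def modulation_invariance_loss(positives):
    # Pull the differently-modulated views of the same sample together
    # (cosine similarity), so the feedforward pathway absorbs what the
    # modulations agree on.
    anchor = positives[0]
    sims = [(anchor * p).sum(dim=-1) for p in positives[1:]]
    return sum((1.0 - s).mean() for s in sims) / max(len(sims), 1)
```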
Our preprint has been accepted at #NeurIPS2025!
I will be presenting TMCL in just two weeks at the #BernsteinConference. Hope to see some of you there! @bernsteinneuro.bsky.social
Many thanks to my advisor Willem Wybo, and to Emre Neftci for the great support.
Björn Kampa, Willem Wybo and I have organized an exciting symposium at the upcoming #BernsteinConference:
bernstein-network.de/bernstein-co...
Everything about Dendrites: Check it out!
This is research from the new Dendritic Learning Group at PGI-15 (@fz-juelich.de).
A huge thanks to my supervisor Willem Wybo and our institute head Emre Neftci!
Preprint: arxiv.org/abs/2505.14125
Project page: ktran.de/papers/tmcl/
Supported by @fzj-jsc.bsky.social and WestAI.
(6/6)
This research opens up an exciting possibility: predictive coding as a fundamental cortical learning mechanism, guided by area-specific modulations that act as high-level control over the learning process. (5/6)
Furthermore, we can dynamically adjust the stability-plasticity trade-off by adapting the strength of the modulation invariance term. (4/6)
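In loss terms, this boils down to a single coefficient on the modulation-invariance term (a toy sketch; beta is an illustrative name, not our exact formulation):

```python
def tmcl_objective(view_inv_loss, mod_inv_loss, beta=1.0):
    # beta scales the modulation-invariance term:
    # larger beta -> more consolidation of known class structure (stability),
    # smaller beta -> more room for new self-supervised features (plasticity).
    return view_inv_loss + beta * mod_inv_loss
```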
Key finding: With only 1% of labels, our method outperforms comparable continual learning algorithms both on the continual task and when transferred to other tasks.
Therefore, we continually learn generalizable representations, unlike conventional, class-collapsing methods (e.g. Cross-Entropy). (3/6)
Feedforward weights learn via view-invariant self-supervised learning, mimicking predictive coding. Top-down class modulations, informed by new labels, orthogonalize same-class representations. These are then consolidated into the feedforward pathway through modulation invariance. (2/6)
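For readers who prefer code, a simplified sketch of one training step under these assumptions (ff_net, class_mods and its methods are hypothetical placeholders, not our actual implementation):

```python
import torch.nn.functional as F

def cosine_pull(a, b):
    # invariance term: bring two normalized views of the same sample together
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    return (1.0 - (a * b).sum(dim=-1)).mean()

def training_step(ff_net, class_mods, x_view1, x_view2, label=None, beta=1.0):
    # (1) view invariance: feedforward weights learn from two augmented views
    z1, z2 = ff_net(x_view1), ff_net(x_view2)
    loss = cosine_pull(z1, z2)

    # (2) a rare label informs the top-down modulation for that class
    #     (update_from_label is a placeholder, not a real API)
    if label is not None:
        class_mods.update_from_label(z1.detach(), label)

    # (3) modulation invariance: frozen class modulations are replayed as
    #     extra positives and consolidated into the feedforward pathway
    replays = [z1 * m.detach() for m in class_mods.frozen()]
    loss = loss + beta * sum(cosine_pull(z1, r) for r in replays)
    return loss
```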
New #NeuroAI preprint on #ContinualLearning!
Continual learning methods struggle in mostly unsupervised environments with sparse labels (e.g. parents telling their child the object is an 'apple').
We propose that in the cortex, predictive coding of high-level top-down modulations solves this! (1/6)