Check out this cool brain-inspired approach to self-supervised learning from @shahabbakht.bsky.social and crew!
🧠📈 🧪 #NeuroAI #MLSky
Posts by Sungjae Cho
For understanding learning in the neocortex 🧠, self-supervised learning 🤖 is interesting, but it has several shortcomings.
Seq-JEPA is a step in the right direction. It learns by predicting sensory outcomes from a series of interactions. Cool things emerged! 👇 with @shahabbakht.bsky.social
#MLSky #NeuroAI
Preprint Alert 🚀
Can we simultaneously learn transformation-invariant and transformation-equivariant representations with self-supervised learning?
TL;DR: Yes! This is possible via simple predictive learning & architectural inductive biases – without extra loss terms or predictors! (A toy sketch of the idea follows this thread.)
🧵 (1/10)
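For intuition only, here is a minimal sketch of action-conditioned predictive learning in PyTorch. This is not the authors' Seq-JEPA code; the module names, dimensions, and the stop-gradient target are all my illustrative assumptions. The point is the architectural bias: the predictor, not the encoder, receives the action/transformation.

```python
import torch
import torch.nn as nn

# Illustrative dimensions; assumptions, not taken from the preprint.
OBS_DIM, ACT_DIM, EMB_DIM = 64, 8, 32

# Encoder maps an observation to a representation.
encoder = nn.Sequential(
    nn.Linear(OBS_DIM, EMB_DIM), nn.ReLU(), nn.Linear(EMB_DIM, EMB_DIM)
)
# Predictor is conditioned on the action/transformation taken.
predictor = nn.Sequential(
    nn.Linear(EMB_DIM + ACT_DIM, EMB_DIM), nn.ReLU(), nn.Linear(EMB_DIM, EMB_DIM)
)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

def predictive_step(obs, act, next_obs):
    """One step of predictive learning: predict the embedding of the next
    observation from the current embedding and the action taken."""
    z = encoder(obs)                               # current representation
    target = encoder(next_obs).detach()            # stop-gradient target (assumption)
    pred = predictor(torch.cat([z, act], dim=-1))  # action-conditioned prediction
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for sensory data.
obs = torch.randn(16, OBS_DIM)
act = torch.randn(16, ACT_DIM)
nxt = torch.randn(16, OBS_DIM)
print(predictive_step(obs, act, nxt))
```

Because the action is fed to the predictor rather than the encoder, transformation information can be carried by the prediction path while the encoder is free to become invariant. That is one way such an inductive bias could yield both kinds of representation; consult the preprint for the actual architecture.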
The original work on backpropagation is Rumelhart, Hinton & Williams (1986) from the PDP volume, not the Nature paper. The Nature version is a conceptually digested version of it.
Biologically constrained networks (for brain models).
Biologically inspired networks (for AI models).
Writing a mathematical paper requires the special skill of copying and pasting equations without mixing up notation. Logical compilers for the math equations in a paper would be very useful; we don't run equations.
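In that spirit, symbolic tools can already check some equations mechanically. A tiny illustration with SymPy (the identity is arbitrary, chosen only for the example):

```python
import sympy as sp

x, y = sp.symbols('x y')

# An equation as it might be transcribed into a manuscript.
lhs = (x + y)**2
rhs = x**2 + 2*x*y + y**2

# If simplify(lhs - rhs) reduces to 0, the identity holds symbolically.
assert sp.simplify(lhs - rhs) == 0
print("identity verified")
```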
I have posted an arXiv paper (arxiv.org/abs/2501.11341) as a report that guides readers through the proofs of Lee and Seung's (2000) multiplicative update algorithms for solving non-negative matrix factorization (NMF).
2) Most of the literature on these algorithms is about their applications and variants, which do not prove the original algorithms. A few papers have proven them, but with methods and techniques different from the original proofs. The original paper provides only concise proofs, which makes them difficult to follow.
3) My report should help you comprehend the original proofs and formulation. (A toy implementation of the updates is sketched below.)
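For concreteness, here is a minimal NumPy sketch of Lee and Seung's multiplicative updates for the Frobenius-norm objective ||V - WH||_F^2. The random initialization and the small eps added for numerical stability are my assumptions, not details from the paper or the report:

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iters=200, eps=1e-10, seed=0):
    """Lee & Seung (2000) multiplicative updates minimizing ||V - WH||_F^2.
    V must be entrywise non-negative; W, H stay non-negative by construction."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iters):
        # H <- H * (W^T V) / (W^T W H); eps guards against division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage: factor a random non-negative matrix and report the residual.
V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf_multiplicative(V, rank=5)
print(np.linalg.norm(V - W @ H))
```

Each update multiplies by a ratio that equals 1 at a stationary point; proving that the objective is non-increasing under these updates is the substance of the original proofs that the report guides.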
The brain has a certain way of understanding the world, as different sentences carrying equal information differ in understandability. The order of ideas matters. Their imaginability matters. Their accordance with our lives matters.