hate it when I leave my bread out too long and it develops consciousness
Posts by Mashbayar Tugsbayar
I've gotten unstuck multiple times on my own while writing a big email asking for help and explaining why I'm stuck. Whereas in high-speed communication, I feel like I'm always half thinking about etiquette and keeping up instead of the problem
Today I received a note from a grad student who lives in Tehran. Her note gives you firsthand experience of what it’s like to live in a city that is being bombed, and what it’s like to be young and feel despair about your future.
rezashadmehr.blogspot.com/2026/03/hope...
The revised version of our paper on the impact of top-down feedback is now out @elife.bsky.social:
doi.org/10.7554/eLif...
tl;dr: we show that using human-brain-like feedback/anatomy in a deep RNN leads to human-like visual biases!
This work was led by @tmshbr.bsky.social
#NeuroAI 🧠📈 🧪
I’m grateful to share that our paper has been published in Nature. This work formed the core of my PhD research at McGill University.
We show that hippocampal neurons that initially encode reward progressively reorganize to reflect predictive representations of reward during learning.
Our paper on the "Oneirogen hypothesis" is now up in its revised form on eLife!
This is the hypothesis that psychedelics induce a dream-like state, which we show via modelling could explain a variety of perceptual and learning effects from such drugs.
elifesciences.org/reviewed-pre...
🧠📈 🧪
Are you thinking about doing neuroscience outreach but want to make it more exciting or hands on?
Check out RetINaBox! (A collab led by the Trenholm lab)
We tried to bring the experience of experimental neuroscience to a classroom setting:
www.eneuro.org/content/13/1...
#neuroscience 🧪
🚨 New preprint alert!
🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality: where you start from determines where you end up.
🧠📈
A 🧵:
tinyurl.com/yr8tawj3
🧠🤖 Computational Neuroscience summer school IMBIZO in Cape Town is open for applications again!
💻🧬 3 weeks of intense coursework & projects with support from expert tutors and faculty
📈Apply by July 1st!
🔗https://imbizo.africa/
Want to spend 3 weeks in South Africa for an unforgettable summer school experience? Imbizo 2026 (imbizo.africa) student applications are OPEN! Lectures, new friends, and Noordhoek beach await. Apply by July 1!
More info and apply: imbizo.africa/apply/
#Imbizo2026 #CompNeuro
I love ResNet too, but I'm floored it's cited more than transformers, CNNs, and the DSM-5!
The model uses ReLU activation like standard DNNs and doesn’t spike. The way we modeled it, feedback would provide a very small amount of driving input but otherwise just gain-modulate neurons already activated by feedforward input.
Last but not least, thank you to @tyrellturing.bsky.social and @neuralensemble.bsky.social!
We'd like to thank @elife.bsky.social and the reviewers for a very constructive review experience. As well, thanks to our funders, in particular HIBALL, CIFAR, and NSERC. This work was supported with computational resources by @mila-quebec.bsky.social and the Digital Research Alliance of Canada.
These results show that modulatory top-down feedback has unique computational implications. As such, we believe that top-down feedback should be incorporated into DNN models of the brain more often. Our code base makes that easy!
We found that top-down feedback, as implemented in our models, helps to determine the set of solutions available to the networks and the regional specializations that they develop.
To summarize, we built a codebase for creating DNNs with top-down feedback, and we used it to examine the impact of top-down feedback on audio-visual integration tasks.
The models were then trained to identify either the auditory or the visual stimuli based on an attention cue. The visual bias not only persisted, but also helped the brain-like model learn to ignore distracting audio more quickly than the other models.
We found that the brain-based model still had a visual bias even after being trained on auditory tasks. But this bias didn't hamper the model's overall performance, and it mimics the visual bias consistently observed in humans (Posner et al., 1974).
Conversely, when trained on a similar set of auditory categorization tasks, the human brain-based model was the best at integrating helpful visual information to resolve auditory ambiguity.
Interestingly, compared to other models, the human brain-based model was particularly proficient at ignoring irrelevant audio stimuli that didn’t help to resolve ambiguities.
To test the impact of different anatomies of modulatory feedback, we compared the performance of a model based on human anatomy with identically sized models with different configurations of feedback/feedforward connectivity.
As an initial test, we wanted to see how using modulatory feedback could impact computation. To do this, we built an audio-visual model, based on human anatomy from the BigBrain and MICA-MICs datasets, and trained it to classify ambiguous stimuli.
Each brain region is a recurrent convolutional network, and can receive two different types of input: driving feedforward and modulatory feedback. With this code, users can input macroscopic connectivity to build anatomically constrained DNNs.
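A minimal sketch of the scheme described above, with hypothetical names and dense weights standing in for the recurrent convolutional blocks (the real implementation lives in the linked codebase): each region holds a recurrent state, takes driving feedforward input and modulatory feedback input, and regions are wired together from separate macroscopic feedforward/feedback connectivity matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class Region:
    """Stand-in for one brain region: a recurrent unit with a
    driving feedforward input and a modulatory feedback input.
    (Dense recurrent weights here for brevity; the paper's models
    use recurrent convolutional networks.)"""
    def __init__(self, n):
        self.W_rec = rng.normal(0.0, 0.1, (n, n))  # lateral recurrence
        self.state = np.zeros(n)

    def step(self, ff, fb):
        drive = ff + self.W_rec @ self.state   # driving input only
        gain = 1.0 + relu(fb)                  # feedback modulates gain
        self.state = relu(drive) * gain
        return self.state

# Hypothetical macroscopic connectivity: entry [i, j] = 1 wires
# region j -> region i; feedforward and feedback edges are separate.
ff_conn = np.array([[0, 0], [1, 0]])  # region 0 feeds region 1 forward
fb_conn = np.array([[0, 1], [0, 0]])  # region 1 feeds back to region 0

n = 4
regions = [Region(n), Region(n)]
x = rng.normal(size=n)  # external (e.g. sensory) input to region 0
for t in range(3):
    prev = [r.state.copy() for r in regions]
    for i, r in enumerate(regions):
        ff = (x if i == 0 else 0) + sum(ff_conn[i, j] * prev[j] for j in range(2))
        fb = sum(fb_conn[i, j] * prev[j] for j in range(2))
        r.step(ff, fb)
```

The point of splitting the two connectivity matrices is that the same macroscopic wiring diagram can be rearranged (as in the model comparisons elsewhere in the thread) without touching the per-region dynamics.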
To model top-down feedback in neocortex, we built a freely available codebase that can be used to construct multi-input, topological, top-down and laterally recurrent DNNs that mimic neural anatomy. (github.com/masht18/conn... )
What does it mean to have “biologically-inspired top-down feedback”? In the brain, feedback does not drive pyramidal neurons directly, but it modulates the feedforward signal (both multiplicatively and additively), as described in Larkum et al 2004.
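A toy numerical sketch of that kind of modulation (function name and the small additive coefficient are assumptions for illustration, not the paper's exact implementation): feedback multiplies the feedforward drive and contributes only a weak additive term, so it cannot activate a silent neuron on its own.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def apply_feedback(ff_drive, fb_signal, add_scale=0.1):
    """Sketch of modulatory top-down feedback: the feedback signal
    gain-modulates (multiplicative) and weakly biases (additive) the
    feedforward response, rather than driving the neuron directly.
    `add_scale` is an assumed small constant."""
    gain = 1.0 + relu(fb_signal)         # multiplicative modulation
    bias = add_scale * relu(fb_signal)   # small additive component
    return relu(ff_drive) * gain + bias
```

With no feedforward drive, even strong feedback barely moves the unit (`apply_feedback(0.0, 2.0)` gives just the 0.2 additive term), while the same feedback triples the response of an already-active unit (`apply_feedback(1.0, 2.0)` gives 3.2).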
Top-down feedback is ubiquitous in the brain and computationally distinct, but rarely modeled in deep neural networks. What happens when a DNN has biologically-inspired top-down feedback? 🧠📈
Our new paper explores this: elifesciences.org/reviewed-pre...
Excited to share our new pre-print on bioRxiv, in which we reveal that feedback-driven motor corrections are encoded in small, previously missed neural signals.
Are you training self-supervised/foundation models, and worried about whether they're learning good representations? We got you covered! 💪
🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep...
🧵👇[1/6]
#DeepLearning
At #Cosyne2025? Come by my poster today (3-047) to hear how sequential predictive learning produces a continuous neural manifold with the ability to generate replay during sleep, and spatial representations that "sweep" ahead to future positions. All from sensory information alone!