(1/23) In addition to the new Lady Gaga album "Mayhem," my paper with Manuel Beiran, "Structure of activity in multiregion recurrent neural networks," has been published today.
PNAS link: www.pnas.org/doi/10.1073/...
(see dclark.io for PDF)
An explainer thread...
Posts by Giulio Bondanelli
Yay Harvard!!!!!!! Well done! Other universities should do the same.
www.nytimes.com/2025/04/14/u...
I'm organizing a "Data for Good Rapid Response Team", i.e. a set of people who can be mobilized to work on short data science projects to help various groups and organizations. Sign up here if this is of interest to you, and please share with others with data science skills.
The Institut Pasteur has decided to leave X (formerly Twitter) to join BlueSky! 🦋
The Institut Pasteur made this decision due to several serious issues observed on the X platform since its acquisition. Join us here to continue advocating for science. www.pasteur.fr/en/home/pres...
Excited to see @thetransmitter.bsky.social feature our preprint on how the neural code changes across the cortical hierarchy! A multi-region perspective on categorical selectivity 🧱 and geometric dimensionality 📐
Blueprint thread coming soon, after I am done with panettone digestion 🥮!
Great material here! ✨
What papers do you like that demonstrate the use of 'orthogonal subspaces' for encoding information in neural populations? #neuroskyence #compneuro #neuroAI
We published this a couple of years ago 🙃
elifesciences.org/articles/53151
Population responses in AC across a range of stimuli explored orthogonal subspaces.
(We found it convenient to apply dimensionality reduction before computing the angles, to avoid measuring spuriously small angles in shared low-variance dimensions.)
I would think that in Xie et al. the cosines are ranked so that cos(θ1) > cos(θ2) (not stated explicitly). Then cos(θ1) is the first (i.e., largest) singular value of the matrix Va.T*Vb, and θ1 the first principal angle.
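A minimal sketch of that computation, assuming Va and Vb are orthonormal bases for the two subspaces (the data here are random placeholders, not from any paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 10-D subspaces of a 50-D ambient (neural state) space.
A = rng.standard_normal((50, 10))
B = rng.standard_normal((50, 10))

# Orthonormal bases Va, Vb: columns span each subspace.
Va, _ = np.linalg.qr(A)
Vb, _ = np.linalg.qr(B)

# Singular values of Va.T @ Vb are the cosines of the principal angles,
# returned in decreasing order, so cos(theta_1) >= cos(theta_2) >= ...
cosines = np.linalg.svd(Va.T @ Vb, compute_uv=False)

# Clip for numerical safety before arccos; theta_1 is the smallest angle.
angles = np.arccos(np.clip(cosines, -1.0, 1.0))
```

Near-orthogonal subspaces would give all cosines close to zero (angles near π/2); shared dimensions show up as leading cosines near one.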
Equations for different RNN update rules. Equation 1 has a linear decay term plus a nonlinearity applied to the summed inputs (recurrent, external, and bias). Equation 2 is its discretized form. Equation 3 applies a pointwise nonlinearity only to the recurrent input, i.e., W_rec φ(x). Equation 4 is the same as Equation 3 for the hidden state (potential) x, but the firing rates are additionally obtained by applying the nonlinearity to the hidden state.
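A plausible rendering of the four update rules described above (symbols assumed, since the screenshot itself is not reproduced: τ a time constant, φ a pointwise nonlinearity, u the external input, b a bias):

```latex
% Eq. 1: nonlinearity applied to the summed input
\tau \dot{x} = -x + \phi\!\left(W_{\mathrm{rec}}\, x + W_{\mathrm{in}}\, u + b\right)

% Eq. 2: Euler discretization of Eq. 1 with step \Delta t
x_{t+1} = \left(1 - \tfrac{\Delta t}{\tau}\right) x_t
        + \tfrac{\Delta t}{\tau}\, \phi\!\left(W_{\mathrm{rec}}\, x_t + W_{\mathrm{in}}\, u_t + b\right)

% Eq. 3: nonlinearity applied only to the recurrent input
\tau \dot{x} = -x + W_{\mathrm{rec}}\, \phi(x) + W_{\mathrm{in}}\, u + b

% Eq. 4: same hidden-state dynamics as Eq. 3, with firing rates read out as r = \phi(x)
\tau \dot{x} = -x + W_{\mathrm{rec}}\, r + W_{\mathrm{in}}\, u + b, \qquad r = \phi(x)
```

In Eqs. 1–2 the state x is a firing rate; in Eqs. 3–4 it is a potential, which changes the interpretation of the learned weights.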
Comp #Neuroskyence hivemind - I've noticed multiple equations for RNN dynamics that vary in interpretation and implications depending on the placement of the nonlinearity. A similar mixture appears in theory papers... @loradrian.bsky.social @jbarbosa.org @bio-emergent.bsky.social @matthijspals.bsky.social @kenmiller.bsky.social
A screenshot of a paper on bioRxiv illustrating the lack of a Bluesky share button!
Would you like to see @biorxivpreprint.bsky.social add a "share to Bluesky" button?! I know I would! Share this post to let @richardsever.bsky.social @erictopol.bsky.social and others at bioRxiv know!
Hi 👋 I would love to be added as well! Thanks!
Psychologists and neuroscientists are calling for international pressure towards an immediate ceasefire in Israel, Palestine & Lebanon, respect for international humanitarian law, an end to the occupation, and the release of all hostages.
Read & join us by signing here: tinyurl.com/PsychLetter