
Posts by Zhipeng Huang

This is beautiful.

5 months ago 5 6 0 0

I'm just attending; will try to book earlier next year then :)

5 months ago 0 0 0 0

Is there still a ticket available by any chance? lol

5 months ago 1 0 1 0

Imagine a brain decoding algorithm that could generalize across different subjects and tasks. Today, we're one step closer to achieving that vision.

Introducing the flagship paper of our brain decoding program: www.biorxiv.org/content/10.1...
#neuroAI #compneuro @utoronto.ca @uhn.ca

6 months ago 71 16 4 0

Big congrats!

6 months ago 1 0 0 0

Very happy to see that Pleias' multilingual data-processing pipelines have contributed to the largest open pretraining project in Europe.

From their tech report: huggingface.co/swiss-ai/Ape...

7 months ago 30 10 2 0
Brain-wide representations of prior information in mouse decision-making - Nature Brain-wide recordings in mice reveal that prior expectations are distributed through recurrent loops across all levels of cortical and subcortical processing.

Not that it comes as much of a surprise to many of us, but it's worth emphasizing once again - the 👏 brain 👏 uses 👏 distributed 👏 coding 👏. 😁

Two new papers from the #IBL looking at brain-wide activity:

www.nature.com/articles/s41...
www.nature.com/articles/s41...

#neuroscience 🧪

7 months ago 128 26 7 3

Stunning cryo-ET from Peijun Zhang lab: Direct visualization of HIV-1 nuclear import!
Hundreds of viral cores captured entering the nucleus. The NPC dilates to let the capsid through. A masterclass in correlative microscopy that makes it quantitative. A leap for structural virology! @emboreports.org

7 months ago 30 13 2 0
Internship Position on the Lattice Estimator Eamonn and I are looking to hire an intern for four months to work on the Lattice Estimator. The internship will be based at King's College London and is funded by a gift from Zama. We are ideally …

Internship Position on the Lattice Estimator martinralbrecht.wordpress.com/2025/08/27/i...

7 months ago 3 2 0 0
EurIPS Copenhagen 2025 - A NeurIPS-endorsed conference in Europe, held in Copenhagen, Denmark

In short, world-class research from NeurIPS accessible in Europe.

EurIPS takes place over 3 days + 2 workshop days, at the same time as NeurIPS in San Diego.

Follow this account for more updates, and see you in wonderful Copenhagen 📅

eurips.cc

7 months ago 22 8 0 0
Principles of cotranslational mitochondrial protein import Selective ribosome profiling reveals that nearly 20% of mitochondrial proteins in human cells are imported during translation on cytosolic ribosomes. Cotranslational import requires an N-terminal pres...

Emmanuel Levy @elevylab.bsky.social has joined BlueSky 🌟 with a fantastic Cell paper with Shu-ou Shan, showing the interactome of the TOM complex and how cotranslational mito import prioritizes large globular domains. Beautiful science!
Give him a warm welcome 🎉
link: www.cell.com/cell/fulltex...

8 months ago 39 13 1 1

EurIPS includes a call for both Workshops and Affinity Workshops!
We look forward to making #EurIPS a diverse and inclusive event with you.

The submission deadlines are August 22nd, AoE.

More information at:
eurips.cc/call-for-wor...
eurips.cc/call-for-aff...

8 months ago 37 21 0 3

Excited to be at ACL! Join us at the Table Representation Learning workshop tomorrow in room 2.15 to talk about tables and AI.

We also present a paper showing the sensitivity of LLMs in tabular reasoning to, e.g., missing values and duplicates, by @cowolff.bsky.social at 16:50: arxiv.org/abs/2505.07453

8 months ago 6 3 1 0
Bootstrapped Private GenAI Startup Hits $1M Annual Revenue, Launches Helix 2.0 The people behind the story, how agentic AI is changing and why we don't want a sales call with you

Bootstrapped a Private GenAI Startup to $1M revenue, AMA

blog.helix.ml/p/bootstrapp...

8 months ago 4 1 0 0
What do representations tell us about a system? Image of a mouse with a scope showing a vector of activity patterns, and a neural network with a vector of unit activity patterns
Common analyses of neural representations: Encoding models (relating activity to task features) drawing of an arrow from a trace saying [on_____on____] to a neuron and spike train. Comparing models via neural predictivity: comparing two neural networks by their R^2 to mouse brain activity. RSA: assessing brain-brain or model-brain correspondence using representational dissimilarity matrices


In neuroscience, we often try to understand systems by analyzing their representations, using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:

8 months ago 171 53 5 1
The Open DAC 2025 Dataset for Sorbent Discovery in Direct Air Capture Identifying useful sorbent materials for direct air capture (DAC) from humid air remains a challenge. We present the Open DAC 2025 (ODAC25) dataset, a significant expansion and improvement upon ODAC23...

It was great to work on the ODAC25 paper with Meta FAIR Chemistry and Georgia Tech. A leap forward in modelling direct air carbon capture with metal-organic frameworks, with much better data and larger models.

Paper: arxiv.org/abs/2508.03162
Data and models: huggingface.co/facebook/ODA...

8 months ago 9 4 1 0
Serotonin shapes the temporal window for associative fear learning Fear learning is a critical adaptive mechanism that enables the association of an environmental cue (the conditioned stimulus, CS) with a potential threat (the unconditioned stimulus, US), even when t...

1/ Excited to share a new preprint!
Our latest study uncovers how serotonin precisely controls the "time window" for fear learning, ensuring that our brains link cues (CS) & threats (US) only when it's adaptive.
#Neuroscience #FearLearning
www.biorxiv.org/content/10.1...

8 months ago 30 8 2 0

Is there a recording on YouTube?

8 months ago 0 0 1 0
Abstract. The argument size of succinct non-interactive arguments (SNARGs) is a crucial metric to minimize, especially when the SNARG is deployed within a bandwidth-constrained environment.

We present a non-recursive proof compression technique to reduce the size of hash-based succinct arguments. The technique is black-box in the underlying succinct arguments, requires no trusted setup, can be instantiated from standard assumptions (and even when P = NP!) and is concretely efficient.

We implement and extensively benchmark our method on a number of concretely deployed succinct arguments, achieving compression across the board to as much as 60% of the original proof size. We further detail non-black-box analogues of our methods to further reduce the argument size.


zip: Reducing Proof Sizes for Hash-Based SNARGs (Giacomo Fenzi, Yuwen Zhang) ia.cr/2025/1446

8 months ago 4 1 0 1
Defining and quantifying compositional structure What is compositionality? For those of us working in AI or cognitive neuroscience this question can appear easy at first, but becomes increasingly perplexing the more we think about it. We aren't shor...

Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out! ericelmoznino.github.io/blog/2025/08...

8 months ago 18 6 1 1
Advertisement

Me too!

8 months ago 2 0 0 0
Attractors are usually not mechanisms The mathematical objects cannot be. And the "attractor models" have not been established as mechanisms in mammals

Attractors are usually not mechanisms - new blog post: open.substack.com/pub/kording/...

9 months ago 150 33 20 9
Log-Normal Multiplicative Dynamics for Stable Low-Precision Training of Large Networks Studies in neuroscience have shown that biological synapses follow a log-normal distribution whose transitioning can be explained by noisy multiplicative dynamics. Biological networks can function sta...

Together with @repromancer.bsky.social, I have been musing for a while that the exponentiated gradient algorithm we've advocated for comp neuro would work well with low-precision ANNs.

This group got it working!

arxiv.org/abs/2506.17768

May be a great way to reduce AI energy use!!!

#MLSky 🧪
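For readers who haven't met exponentiated gradient descent, here is a minimal NumPy sketch of the core idea (an illustration only, not the paper's implementation; the toy loss and all names are hypothetical): instead of the additive update w ← w − η∇L, weights are updated multiplicatively, w ← w · exp(−η∇L), which keeps them sign-preserving and, under noisy updates, log-normally distributed.

```python
import numpy as np

def eg_step(w, grad, lr=0.1):
    """One exponentiated-gradient step: multiplicative rather than additive."""
    return w * np.exp(-lr * grad)

# Toy example: minimize L(w) = 0.5 * (w - 2)^2 starting from a positive weight.
w = np.array([0.5])
for _ in range(200):
    grad = w - 2.0          # dL/dw for the toy quadratic loss
    w = eg_step(w, grad)

print(w)  # converges near the minimizer w = 2.0
```

Because the update multiplies by exp(−η∇L), a positive weight stays positive forever, which is the property the low-precision training work exploits.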

9 months ago 39 13 3 0

👨‍🎓🧾✨ #icml2025 Paper: TabICL, A Tabular Foundation Model for In-Context Learning on Large Data
With Jingang Qu, @dholzmueller.bsky.social, and Marine Le Morvan

TL;DR: a well-designed architecture and pretraining give the best tabular learner, and a more scalable one
On top, it's 100% open source
1/9

9 months ago 50 15 1 0

(1/3) Excited to introduce our new GRAB sensors for a series of steroid hormones! These tools enable real-time detection of steroid hormone dynamics in vivo 🐭🧠. Happy to share these sensors and welcome any feedback! Please contact yulonglilab2018@gmail.com for information.

9 months ago 22 10 1 0
Sensory responses of visual cortical neurons are not prediction errors Predictive coding is theorized to be a ubiquitous cortical process to explain sensory responses. It asserts that the brain continuously predicts sensory information and imposes those predictions on lo...

1/3) This may be a very important paper: it suggests that there are no prediction-error-encoding neurons in sensory areas of cortex:

www.biorxiv.org/content/10.1...

I personally am a big fan of the idea that cortical regions (allo and neo) are doing sequence prediction.

But...

🧠📈🧪

9 months ago 220 79 13 5
Functional characterisation of rare variants in genes encoding the MAPK/ERK signalling pathway identified in long-lived Leiden Longevity Study participants - GeroScience Human longevity, which is coupled to compression of age-related disease, is a heritable trait. However, only a few common genetic variants have been linked to longevity, suggesting that rare, family-spe...

I am very proud to present the first paper from the main research line of my group at the @mpiage.bsky.social, in collaboration with my current group @molepi.bsky.social @bds-lumc.bsky.social at the LUMC, now published in @geroscience.bsky.social!

link.springer.com/article/10.1...

10 months ago 16 12 2 1
Dopamine encodes deep network teaching signals for individual learning trajectories Longitudinal tracking of long-term learning behavior and striatal dopamine reveals that dopamine teaching signals shape individually diverse yet systematic learning trajectories, captured mathematical...

Super excited to see this paper from Armin Lak & colleagues out! (I've seen @saxelab.bsky.social present it before.)

www.cell.com/cell/fulltex...

tl;dr: The learning trajectories that individual mice take correspond to different saddle points in a deep net's loss landscape.

🧠📈🧪 #NeuroAI

9 months ago 84 17 5 1
Mitochondrial origins of the pressure to sleep - Nature Research on Drosophila neurons shows links between the need to sleep and aerobic metabolism, indicating that the pressure to sleep may have a mitochondrial origin.

Mitochondrial origins of the pressure to sleep
www.nature.com/articles/s41...

9 months ago 5 5 0 0
Adversarial testing of global neuronal workspace and integrated information theories of consciousness - Nature Multimodal results (iEEG, fMRI and MEG) of predictions from integrated information theory and global neuronal workspace theory align with some predictions of both theories on visual consciou...

The 1st major study from @arc-cogitate.bsky.social
is out today in @nature.com, a landmark collaboration testing theories of consciousness through rigorous, preregistered science. Data & tools shared openly.
Contributed from @mcgillumedia.bsky.social @theneuro.bsky.social
nature.com/articles/s41...

11 months ago 34 14 0 1