
Posts by Erin Grant


Very excited by this year's Analytical Connectionism Summer School!

A dream lineup of speakers on the topic of language acquisition in minds and machines

Bursaries available to cover costs

Aug 17 – Aug 28, 2026, Gothenburg

Details: www.analytical-connectionism.net//school/2026/

1 week ago 15 2 0 1

Join us! We are opening many postdoc positions both in London and in Gothenburg!

London
🔗 www.lesswrong.com/posts/GTt33CasvWjxxazJw/...
📅 deadline: March 26th

Gothenburg
🔗 www.chalmers.se/en/about-cha...
📅 deadline: April 1st

1 month ago 6 3 0 0

Excited to launch Principia, a nonprofit research organisation at the intersection of deep learning theory and AI safety.

Our goal is to develop theory for modern machine learning systems that can help us understand complex network behaviors, including those critical for AI safety and alignment.


1 month ago 93 28 1 1

#CCN2026 Proceedings submissions are open and due in *two* weeks! Info about how to submit in the thread below 👇

Come share your science and hang out in NYC in August. :)

2 months ago 2 1 0 0

@dataonbrainmind.bsky.social starting now in Room 10 with opening remarks from @crji.bsky.social and the first invited talk from @dyamins.bsky.social!

4 months ago 11 3 0 0

Thrilled to start 2026 as faculty in Psych & CS @ualberta.bsky.social + Amii.ca Fellow! 🥳 Recruiting students to develop theories of cognition in natural & artificial systems 🤖💭🧠. Find me at #NeurIPS2025 workshops (speaking coginterp.github.io/neurips2025 & organising @dataonbrainmind.bsky.social)

4 months ago 104 27 4 1
Two posts from Bluesky. The first one shows a figure from a paper published in Nature Scientific Reports full of totally incoherent AI fabricated gibberish words. The other a comment on a recently published paper by eLife discussing the paper and its peer reviews which were published along with the paper.

Nature Sci Rep publishes incoherent AI slop. eLife publishes a paper which the reviewers didn't agree with, making all the comments and responses public with thoughtful commentary. One of these journals got delisted by Web of Science over quality concerns about its peer-review practices. Guess which one?

4 months ago 156 69 4 8
Probabilistic ML in scientific pipelines

I'm on the academic job market!

I design and analyze probabilistic machine-learning methods, motivated by real-world scientific constraints and developed in collaboration with scientists in biology, chemistry, and physics.

A few highlights of my research areas are:

5 months ago 38 14 2 0

Applying to do a postdoc or PhD in theoretical ML or neuroscience this year? Consider joining my group (starting next Fall) at UT Austin!
POD Postdoc: oden.utexas.edu/programs-and...
CSEM PhD: oden.utexas.edu/academics/pr...

5 months ago 33 11 1 0
Postdoctoral Fellow - Large Language Models as Models for Human Development This position is part of the Post Doctoral Fellows Association and has an initial appointment of two years. This position has a comprehensive benefits package. Location - This role is in-person at Nor...

I'm hiring (another) postdoc, this time in collaboration with Natalie Brito @nataliebrito.bsky.social at Columbia! We will be exploring some of the characteristics of human development using deep learning models. Email with questions!
iaejup.fa.ocs.oraclecloud.com/hcmUI/Candid...

5 months ago 8 4 0 1

Hoping you find out and share! 🤗

6 months ago 0 0 0 0

Congrats Richard!!

6 months ago 1 0 0 0
Postdoctoral Fellow - Language Models and Neuroscience - Careers@UAlberta.ca (University of Alberta)

I am hiring a postdoc at UAlberta, affiliated with Amii! We study language processing in the brain using LLMs and neuroimaging. Looking for someone ideally with experience in both neuroimaging and LLMs, or a willingness to learn. Email me with Qs
apps.ualberta.ca/careers/post...

6 months ago 14 7 0 1
Frontiers | Summary statistics of learning link changing neural representations to behavior How can we make sense of large-scale recordings of neural activity across learning? Theories of neural network learning with their origins in statistical phy...

Since I'm back on Bluesky: with @frostedblakess.bsky.social and @cpehlevan.bsky.social, we wrote a brief perspective on how ideas about summary statistics from the statistical physics of learning could help inform neural data analysis... (1/2)

7 months ago 34 10 1 0
Data on the Brain & Mind

📢 10 days left to submit to the Data on the Brain & Mind Workshop at #NeurIPS2025!

📝 Call for:
• Findings (4 or 8 pages)
• Tutorials

If you're submitting to ICLR or NeurIPS, consider submitting here too, and highlight how to use a cog neuro dataset in our tutorial track!
🔗 data-brain-mind.github.io

7 months ago 8 5 0 0

Iโ€™m recruiting committee members for the Technical Program Committee at #CCN2026.

Please apply if you want to help make submission, review & selection of contributed work (Extended Abstracts & Proceedings) more useful for everyone! 🌍

Helps to have: programming/communications/editorial experience.

7 months ago 19 14 3 1

arguably the most important component of AI for neuroscience:

data, and its usability

7 months ago 20 2 1 0

The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!

7 months ago 172 30 4 6

many thanks to my collaborators, @saxelab.bsky.social and especially Lukas :)

7 months ago 2 0 0 0

I like how Rosa Cao (sites.google.com/site/luosha) & @dyamins.bsky.social speculated about task constraints here (doi.org/10.1016/j.co...). I think the Platonic Representation hypothesis is a version of their argument, for multi-modal learning.

7 months ago 2 0 0 0

Definitely! Task constraints certainly play a role in determining representational structure, which might interact with what we consider here (efficiency of implementation). We don't explicitly study it. Someone should!

7 months ago 1 0 1 0
ICML Poster: Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks (ICML 2025)

Main takeaway: Valid representational comparison relies on implicit assumptions (task-optimization *plus* efficient implementation). ⚠️ More work to do on making these assumptions explicit!

🧠 CCN poster (today): 2025.ccneuro.org/poster/?id=w...

📄 ICML paper (July): icml.cc/virtual/2025/poster/44890

7 months ago 15 1 0 0

Our theory predicts that representational alignment is consistent with *efficient* implementation of similar function. Comparing representations is ill-posed in general, but becomes well-posed under minimum-norm constraints, which we link to computational advantages (noise robustness).
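A sketch of the mechanism (my notation; the paper's precise statement may differ): penalising total weight norm selects balanced factorisations, which pin the hidden representation down to a rotation, exactly the ambiguity that rotation-invariant comparison measures ignore.

```latex
% My notation, not necessarily the paper's. Let W* = U S V^T be an SVD
% of the optimal linear map. Over exact factorisations W_2 W_1 = W*,
\min_{W_1, W_2} \; \|W_1\|_F^2 + \|W_2\|_F^2
  \quad \text{s.t.} \quad W_2 W_1 = W^\ast
% is attained by the balanced solutions
W_1 = Q\,S^{1/2}V^\top, \qquad W_2 = U S^{1/2} Q^\top, \qquad Q^\top Q = I,
% so the hidden representation W_1 x is identified up to the rotation Q,
% which rotation-invariant measures (e.g., RSA) cannot see anyway.
```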

7 months ago 5 0 1 0
Function-representation dissociation in ReLU networks. (A-B) MNIST representations before/after prediction-preserving reparametrisation. (C) RSM after function-preserving reparametrisation. (D-E) Performance under input/parameter noise for different solution types.

Function-representation dissociations and the representation-computation link persist in deep nonlinear networks! Using function-invariant reparametrisations (@bsimsek.bsky.social), we break representational identifiability but degrade generalization (a computational consequence).
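One concrete function-invariant reparametrisation in ReLU networks is the classic positive rescaling symmetry; a minimal sketch in that spirit (my illustration, not the paper's code or its specific reparametrisations):

```python
# Minimal sketch: positive per-unit rescaling is function-invariant in
# ReLU networks, since relu(d*z) = d*relu(z) for d > 0. It changes the
# hidden representation without changing the network's outputs.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.normal(size=(4, 5))             # first-layer weights
W2 = rng.normal(size=(3, 4))             # second-layer weights
X = rng.normal(size=(5, 100))            # batch of inputs (columns)

d = rng.uniform(0.1, 10.0, size=(4, 1))  # positive scale per hidden unit
H, H_new = relu(W1 @ X), relu((d * W1) @ X)  # H_new = d * H
Y, Y_new = W2 @ H, (W2 / d.T) @ H_new        # rescaling cancels

print(np.allclose(Y, Y_new))   # True: same function
print(np.allclose(H, H_new))   # False: different representation
```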

7 months ago 1 0 1 0
Hidden-layer representations for a semantic hierarchy task. (A) Task structure. (B) Input/target encoding. (C-E) Hidden representations and representational similarity matrices for task-agnostic (C: LSS) vs. task-specific (D: MRNS, E: MWNS) solutions.

We demonstrate that representation analysis and comparison is ill-posed, giving both false negatives and false positives, unless we work with *task-specific representations*. These are interpretable *and* robust to noise (i.e., representational identifiability comes with computational advantages).
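For instance, here is a false negative of the kind described above, in a linear toy model (my construction, not the paper's experiments): two networks with identical input-output behaviour whose representational similarity matrices disagree.

```python
# Minimal sketch: a false negative in representational comparison.
# Two networks compute the SAME function, but a function-preserving
# reparametrisation makes their RSMs (stimulus-by-stimulus similarity
# matrices) disagree.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(3, 4))
X = rng.normal(size=(5, 50))             # 50 stimuli (columns)

A = rng.normal(size=(4, 4))              # generic A is invertible
H_a = W1 @ X                             # original hidden representation
H_b = (A @ W1) @ X                       # reparametrised representation
assert np.allclose(W2 @ H_a, (W2 @ np.linalg.inv(A)) @ H_b)  # same function

rsm = lambda H: np.corrcoef(H.T)         # correlation RSM over stimuli
r = np.corrcoef(rsm(H_a).ravel(), rsm(H_b).ravel())[0, 1]
print(f"RSM agreement: {r:.2f}")         # typically well below 1.0
```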

7 months ago 3 1 1 0
The solution manifold. (A) Solution manifold for a 3-parameter linear network, showing GLS and constrained LSS, MRNS, and MWNS solutions. (B-E) Input/output weight relationships and parametrisation structure for each solution type.

We parametrised this solution hierarchy to find differences in how task-irrelevant dimensions are handled: some solutions compress them away (creating task-specific, interpretable representations), while others preserve arbitrary structure in null spaces (creating arbitrary, uninterpretable representations).
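To make "structure in null spaces" concrete (my notation, a sketch): two kinds of first-layer components are invisible in the output, those mapping into the second layer's kernel, and those reading from input directions absent in the training data.

```latex
% My notation; a sketch of which components are output-invisible.
% Components whose image lies in the kernel of W_2:
W_2 (W_1 + \Delta)\,x = W_2 W_1 x \ \ \text{for all } x,
  \quad \text{if } \operatorname{col}(\Delta) \subseteq \ker(W_2).
% Components reading task-irrelevant directions (training inputs X):
W_2 (W_1 + \Delta')\,x = W_2 W_1 x \ \ \text{for all training } x,
  \quad \text{if } \Delta' X = 0.
% Task-specific solutions set both to zero; task-agnostic ones need not.
```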

7 months ago 2 0 1 0
Task solution hierarchy defined by implicit regularisation objectives.

To analyse this dissociation in a tractable model of representation learning, we characterize *all* task solutions for two-layer linear networks. Within this solution manifold, we identify a solution hierarchy in terms of what implicit objectives are minimized (in addition to the task objective).
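In symbols (my notation, with acronym readings guessed from the figure): the manifold of global task solutions, and the submanifolds that extra implicit objectives select.

```latex
% My notation; a sketch. W* denotes the optimal linear map for the task.
% All global task solutions of a two-layer linear network:
\mathcal{S} = \{\, (W_1, W_2) : W_2 W_1 = W^\ast \,\}.
% Implicit objectives select points in S, e.g. (guessing the acronyms)
% minimum-weight-norm (MWNS) and minimum-representation-norm (MRNS):
\mathrm{MWNS} \in \arg\min_{(W_1,W_2)\in\mathcal{S}} \|W_1\|_F^2 + \|W_2\|_F^2,
\qquad
\mathrm{MRNS} \in \arg\min_{(W_1,W_2)\in\mathcal{S}} \|W_1 X\|_F^2.
```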

7 months ago 4 0 1 0
Example of a failure case. (A) A random walk on the solution manifold of a two-layer linear network reveals that weights can change continuously, inducing changes in the (B) network parametrisation and thus the (C) hidden-layer representations, while preserving the (D) network output.

Deep networks have parameter symmetries, so we can walk through solution space, changing all weights and representations, while keeping output fixed. In the worst case, function and representation are *dissociated*.

(Networks can have the same function with the same or different representation.)
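A toy version of the walk (my sketch, not the paper's code): repeatedly inserting a near-identity invertible matrix between the layers of a two-layer linear network changes every weight and the hidden representation while leaving the output untouched.

```python
# Minimal sketch: a random walk along the parameter symmetries of a
# two-layer linear network. Each step inserts an invertible matrix
# between the layers, so weights and representations drift continuously
# while the network function stays exactly fixed.
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(3, 4))
X = rng.normal(size=(5, 20))    # batch of inputs (columns)
Y = W2 @ (W1 @ X)               # reference outputs
H0 = W1 @ X                     # reference hidden representation

for _ in range(100):            # walk: W1 <- A @ W1, W2 <- W2 @ inv(A)
    A = np.eye(4) + 0.05 * rng.normal(size=(4, 4))  # small invertible step
    W1, W2 = A @ W1, W2 @ np.linalg.inv(A)

print(np.allclose(W2 @ (W1 @ X), Y))  # True: output preserved throughout
print(np.allclose(W1 @ X, H0))        # False: representation has changed
```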

7 months ago 3 0 1 0

Are similar representations in neural nets evidence of shared computation? In new theory work w/ Lukas Braun (lukasbraun.com) & @saxelab.bsky.social, we prove that representational comparisons are ill-posed in general, unless networks are efficient.

@icmlconf.bsky.social @cogcompneuro.bsky.social

7 months ago 72 20 3 0

Co-organized with @susanneharidi.bsky.social, @marcelbinz.bsky.social, Rodrigo Carrasco-Davis, @clementinedomine.bsky.social, @eringrant.me, @modirshanechi.bsky.social 🌳

7 months ago 2 0 0 0