UCSB CS is shining at #ICLR2026!
Multiple faculty papers were accepted (including one oral) spanning AI alignment, LLM safety, software benchmarking, and more. Congrats to all involved!
www.linkedin.com/feed/update/...
#UCSB #ComputerScience #AIResearch #LLM #VideoGeneration #AIBenchmarks
Glad to see these two projects from @bionicvisionlab.org accepted to #IEEE #EMBC2026 (@embs.org)! 👁️🧠🧪
A nice pairing: one paper is about the codebook, the other about the system.
Both are aimed at making future artificial vision systems more usable in practice.
For #NSF and #NIH watchers, Grant Witness now has interactive data on the number of grants and total funding obligations, broken down by institute and directorate, and by new awards versus non-competitive renewals.
The stranglehold on new awards is still a disaster.
grant-witness.us/funding_curv...
New @annualreviews.bsky.social #neuroscience article 👁️🧠:
bionicvisionlab.org/publications...
w/ @crisniell.bsky.social, @michaelgoard.bsky.social, @spencerlaveresmith.bsky.social
Grateful to be part of this collaboration & learn from such a sharp group while rethinking vision in natural settings!
Save the date 👁🧠🧪
Optica Fall Vision Meeting (FVM) 2026
📅 Sep 24–27, 2026
📍 University of Rochester, NY (@cvsuor.bsky.social)
A single-track meeting built for depth, discussion, and community in #VisionScience.
#OpticaFVM #AcademicSky #neuroskyence #Vision #perception #eye #ophthalmology
[1-213] Control of electrically evoked neural activity in human visual cortex using deep learning. Thursday, March 12, 20:30 - 23:30
[1-132] Predictive models trained on natural behavior recover cell- and state-dependent tuning in mouse V1. Thursday, March 12, 20:30 - 23:30
En route to #Cosyne2026! 🧠🧪🇵🇹
@bionicvisionlab.org is represented with 2 projects:
- control of electrically evoked activity in human V1
- predictive model of mouse V1 recovers cell- and state-dependent tuning
Check out our posters on Thursday!
#CompNeuroSky #NeuroSkyence
Will AI subsume computer science?
“I think [this question] gets the relationship backwards,” says my faculty colleague, Arpit Gupta.
Full read: sites.cs.ucsb.edu/~arpitgupta/...
#AI #ArtificialIntelligence #CS #Science #AcademicSky #ResearchSky #STEM
What happens when a neural network controls electrical stimulation delivered directly to the brain?
In our new JNE paper we answer an important question: how do we know these models are safe?
bionicvisionlab.org/publications...
Full details in the thread below 👇
#BionicVision #NeuroTech #BCI
Most transfer learning assumes shared data, tasks, or domains.
BIRD shows you can transfer behavior itself even when those assumptions break.
All details here:
arxiv.org/abs/2505.23933
#KnowledgeDistillation #Robustness #MachineLearning #AIResearch #ResponsibleAI
Two-panel schematic illustrating the BIRD framework. Left panel shows independent pre-training of a teacher and a student network on different datasets, each optimized with its own task loss. Right panel shows representation-structure distillation: selected intermediate layers from teacher and student are compared via a representation loss, which aligns the geometry of their internal activations while the student is still trained on its own task loss. A snowflake icon indicates the teacher is frozen. The diagram emphasizes that behavior is transferred by aligning internal representation structure rather than outputs or shared data.
We introduce BIRD: Behavior Induction via Representation-structure Distillation.
Instead of transferring outputs, BIRD aligns the geometry of internal representations between teacher and student, enabling weak → strong generalization.
#KnowledgeDistillation #TransferLearning #Robustness
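For intuition, here's a minimal sketch of what aligning representation geometry could look like in PyTorch. The Gram-matrix similarity, the layer pairing passed in as `student_feats`/`teacher_feats`, and the weight `beta` are illustrative assumptions, not the paper's exact formulation (see the arXiv link for that):

```python
import torch
import torch.nn.functional as F

def gram(feats):
    # Batch-level similarity structure: flatten each sample's activations,
    # L2-normalize, and take the (batch x batch) Gram matrix.
    z = F.normalize(feats.flatten(start_dim=1), dim=1)
    return z @ z.T

def bird_style_loss(student_feats, teacher_feats, task_loss, beta=1.0):
    # Align the relative geometry of selected intermediate activations
    # while the student keeps training on its own task loss; the teacher
    # is frozen, so its Gram matrices are detached.
    rep_loss = sum(
        F.mse_loss(gram(s), gram(t).detach())
        for s, t in zip(student_feats, teacher_feats)
    )
    return task_loss + beta * rep_loss
```

Since this loss compares batch-wise similarity structure rather than raw features, teacher and student need not share feature dimensions, architectures, or training data.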
What if your strongest #ML model is brittle at one thing that really matters?
Can it learn that behavior from a weaker but specialist model, even when they share no task, no data, and no architecture?
My student Galen Pogoncheff explored this in our #ICLR2026 paper:
👉 arxiv.org/abs/2505.23933
Flyer for the UCSB CRML Agentic AI Summit 2026. Friday, January 23, 2026. 8:00 AM in Henley Hall 1010. Keynote speakers: Sujith Ravi (VP GenAI, Oracle), Jiantao Jiao (Director AI, Nvidia), Murphy Niu (UCSB), Diyi Yang (Stanford), Daniel Martin (UCSB). Industry talks: Ang Li (CEO, Simular), Zackary Glazewski (Founding AI Engineer, ChipAgents), Eser Kandogan (Principal Research Engineer, Megagon Labs). AI faculty highlights: Eric Wang, Yuheng Bu, Michael Beyeler, James Preiss, Miguel Eckstein
Join us Jan 23 for the inaugural CRML Agentic AI Summit at @ucsb.bsky.social.
Researchers, industry, and students exploring how agentic AI drives discovery and real-world impact.
Free to attend, limited space: ml.ucsb.edu/events/summi...
#AgenticAI #ResponsibleAI #AIResearch
As #neurotechnology scales toward high-resolution implantable devices, new challenges emerge: how will users calibrate visual implants with thousands of channels?
Learn how by reading our paper, co-first-authored with Dr. Xing Chen, now published in Brain Stimulation!
tinyurl.com/Large-scale-...
Can your AI beat a mouse? This is happening Sunday! NeurIPS workshop, 11 to 2 California time, on Zoom: robustforaging.github.io
@mbeyeler.bsky.social
@sinzlab.bsky.social
@ninamiolane.bsky.social
@crisniell.bsky.social
@mariusschneider.bsky.social
J. Canzano, Y. Hou, J. Peng, et al.
#NeurIPS2025
Grateful to the organizing team: @mariusschneider.bsky.social, @jingpeng.bsky.social, Y Hou, L Herbelin, J Canzano, @spencerlaveresmith.bsky.social.
👏🙏🙌 Special thanks to MS, YH, JP for daily work behind the scenes (at the expense of their own research). The challenge would not exist without them!
Headshots, names, and talk titles for the 3 keynote speakers: 1. Fabian Sinz, University of Tübingen: Foundation models for mouse vision 2. Nina Miolane, UC Santa Barbara: Geometric approaches to neural activity prediction 3. Cris Niell, University of Oregon: Visual processing in freely moving mice
Next: Join our NeurIPS workshop on Dec 7, 2025, 11 to 2 PT on Zoom!
Hear from top competitors and our 3 keynote speakers:
- @sinzlab.bsky.social
- @ninamiolane.bsky.social
- @crisniell.bsky.social
More info: robustforaging.github.io/workshop
#NeurIPS2025 #Neuroscience #AI
Top teams:
🥇 371333_HCMUS_TheFangs (ASR 0.968, MSR 0.940, Score 0.954)
🥈 417856_alluding123 (ASR 0.864, MSR 0.650, Score 0.757)
🥉 366999_pingsheng-li (ASR 0.802, MSR 0.670, Score 0.736)
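(From the numbers, Score looks like the simple mean of ASR and MSR, e.g. (0.968 + 0.940) / 2 = 0.954.)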
Full leaderboard: robustforaging.github.io/leaderboard/
#NeurIPS2025 #Neuroscience #AI
🎉 Mouse vs AI Challenge #NeurIPS2025
The first year was a great success:
🤖 290 submissions
👥 22 teams
🌎 7 countries
robustforaging.github.io
A huge thank you to all who participated!👏
This was our first attempt at a global competition built around real mouse behavior and visual robustness.
Presenting “Human in the loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex” again this afternoon at #SfN!!
Come discuss!
An amazing collaboration between the Biomedical Neuroengineering group at UMH and @bionicvisionlab.org
Screenshot of SfN's grad school fair - highlighted is Booth 66
DYNS logo, with text: An interdisciplinary program focused on the study of how the nervous system generates perception, behavior and cognition.
Curious about the Dynamical Neuroscience #PhD Program at @ucsantabarbara.bsky.social? Come find us at the #SfN2025 Grad School Fair (Booth 66)! 🧠🧪
More info at www.dyns.ucsb.edu.
#AcademicSky #Neuroscience #compneurosky
If you're at #SfN25, come chat with us about subretinal implants this afternoon! Poster 122.22, presented by PhD student Emily Joyce
I will be presenting the poster “Human-in-the-loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex” at the Early Career Poster Session #SfN as a TPDA awardee!
Nov. 15, 2025
18:45–20:45 (PT)
Poster: G5
SDCC Halls C–H
Come discuss!
"In February 2024, then–UC President Michael Drake announced all employee computers connected to university networks would be required to install Trellix by May 2025. Campuses failing to comply would face penalties of up to $500,000 per ... incident."
www.science.org/content/arti...
Thank you so much for this tip! Infuriating change.
Good eye! You’re right, my spicy summary skipped over the nuance. Color was a free-form response, which we later binned into 4 categories for modeling. Chance level isn’t 25% but is adjusted for class imbalance (majority-class frequency). Definitely preliminary re: “perception”, but it beats stimulus-only!
Thanks! I hear you, that thought has crossed my mind, too. But IP & money have already held this field back too long... This work was funded by public grants, and our philosophy is to keep data + code open so others can build on it. Still, watch us get no credit & me eat my words in 5-10 years 😅
Together, this argues for closed-loop visual prostheses (toy sketch after the list):
📡 Record neural responses
⚡ Adapt stimulation in real-time
👁️ Optimize for perceptual outcomes
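In toy form, such a loop might look like the Python below. Everything here is an illustrative assumption: the channel count, the `record` stand-in, and the simplistic channel-for-channel update (a real system would invert a learned forward model and optimize perceptual outcomes, as in the rest of this thread):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 96  # illustrative channel count, not the real array size

def record(stim):
    # Stand-in for the recording interface: pretend the measured V1
    # response tracks the stimulation channel-for-channel, plus noise.
    return stim + 0.1 * rng.standard_normal(N)

target = rng.standard_normal(N)  # neural pattern for the desired percept
stim = np.zeros(N)

for trial in range(50):
    resp = record(stim)            # 📡 record neural responses
    stim -= 0.2 * (resp - target)  # ⚡ adapt stimulation in real time
    # 👁️ a real system would score perceptual outcomes (detection,
    # brightness, color), not just raw response error
```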
This work was only possible through a tight collaboration between 3 labs across @ethz.ch, @umh.es, and @ucsantabarbara.bsky.social!
Three bar charts show how well different models predict perception of detection, brightness, and color. Using only the stimulation parameters performs worst. Including brain activity recordings—especially pre-stimulus activity—makes predictions much better across all three perceptual outcomes.
And here’s the kicker: 🚨
If you try to predict perception from stimulation parameters alone, you’re basically at chance.
But if you use neural responses, suddenly you can decode detection, brightness, and color with high accuracy.
Figure showing the ability of different methods to reproduce target neural activity patterns and the limits of generating synthetic responses. Left: A target neural response (bottom-up heatmap) is compared to recorded responses produced by linear, inverse neural network, and gradient optimization methods. In this example, the inverse neural network gives the closest match (MSE 0.74) compared to linear (MSE 1.44) and gradient (MSE 1.49). Center: A bar plot of mean squared error across all methods shows inverse NN and gradient consistently outperform linear and dictionary approaches. Right: A scatterplot shows that prediction error increases with distance from the neural manifold; synthetic targets (red) have higher error than natural targets (blue), illustrating that the system best reproduces responses within the brain’s natural activity space.
We pushed further: Could we make V1 produce new, arbitrary activity patterns?
Yes ... but control breaks down the farther you stray from the brain’s natural manifold.
Still, our methods required lower currents and evoked more stable percepts.
Figure comparing methods for shaping neural activity to match a desired target response. Left: the target response is shown as a heatmap. Three methods—linear, inverse neural network, and gradient optimization—produce different stimulation patterns (top row) and recorded neural responses (bottom row). Gradient optimization and the inverse neural network yield recorded responses that more closely match the target, with much lower error (MSE 0.35 and 0.50) than the linear method (MSE 3.28). Right: a bar plot of mean squared error across methods shows both gradient and inverse NN outperform linear, dictionary, and 1-to-1 mapping, approaching the consistency of replaying the original stimulus.
Prediction is only step 1. We then inverted the forward model with 2 strategies:
1️⃣ Gradient-based optimizer (precise, but slow)
2️⃣ Inverse neural net (fast, real-time)
Both shaped neural responses far better than conventional 1-to-1 mapping (toy sketch of both strategies below).
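For intuition, here's a toy sketch of the two inversion strategies in PyTorch. The dummy forward model, channel count, current limits, and training setup are all illustrative assumptions standing in for the paper's actual models:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N = 96  # illustrative channel count

# Stand-in for the learned forward model (stimulation -> predicted V1
# response); the paper's trained predictive model would go here. Frozen.
forward_model = nn.Sequential(nn.Linear(N, 256), nn.ReLU(), nn.Linear(256, N))
forward_model.requires_grad_(False)

target = torch.randn(N)  # desired neural activity pattern (dummy)

# 1️⃣ Gradient-based inversion: precise, but one optimization run per target.
stim = torch.zeros(N, requires_grad=True)
opt = torch.optim.Adam([stim], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    ((forward_model(stim) - target) ** 2).mean().backward()
    opt.step()
    with torch.no_grad():
        stim.clamp_(0.0, 1.0)  # keep within assumed safe stimulation limits

# 2️⃣ Amortized inverse net: trained offline on pairs generated by the
# forward model, then a single forward pass per target at runtime.
inverse_net = nn.Sequential(nn.Linear(N, 256), nn.ReLU(), nn.Linear(256, N))
inv_opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
for _ in range(2000):
    stim_batch = torch.rand(64, N)                  # random candidate stimuli
    resp_batch = forward_model(stim_batch)          # their simulated responses
    inv_opt.zero_grad()
    recon = forward_model(inverse_net(resp_batch))  # resp -> stim -> resp cycle
    ((recon - resp_batch) ** 2).mean().backward()
    inv_opt.step()

with torch.no_grad():
    stim_fast = inverse_net(target)  # real-time stimulus for a new target
```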