AnD tHe BRaiN is JuST liKe aN AnN... JusT... goTTa ... FINd ... tHe ... bACkpRop...
Seriously though... Neuromodulators are cooler than anything anyone in NeuroAI gives them credit for.
Maybe norepinephrine and the LC's few hundred neurons are actually all you need.
Posts by Brad Aimone
We should fund science more, not less. But the cynical part of me thinks this is an opportunity to maybe stop funding the same neuroscience questions, with just a fancier microscope or just one region, time and time again.
We need to cure diseases and fix AI. Not keep doing the same thing.
Excited to be in San Antonio for the UTSA AI Matrix THOR Neuromorphic Commons kickoff!
THOR will be one of the first community resources fully dedicated to accessing scalable neuromorphic hardware! Check it out! #NeuroAI #Neuromorphic 🧪🧠🤖
www.neuromorphiccommons.com/events/thor_...
Am I alone in being skeptical when seeing "We need WORLD MODELS because that's what the BRAIN does!"?
Won't this just be another 'AI tech bros use the brain to get attention and $$$ but ignore it at the first opportunity'?
Why trust any of the AI crowd to talk about the brain? We need real #NeuroAI
How a place can ruin both coffee and donuts astounds me
I've always understood the "have to start somewhere" argument and the *hope* that visual cortex is all we need, since it is easy to access and its inputs are easy and intuitive.
But we're, what, 75 years into V1's reign over neuro? With little generalizable to neural disorders to show for it?
Let's move on.
Is there even such a thing as academic machine learning anymore?
If you defer leadership to industry, you relinquish any right to criticize the outcome being profit-centric.
Kudos to the BRAIN leadership for embracing the brain / AI / computing connection. That takes courage because so many neuros and AI tech bros deny the connection out of self-interest.
Maybe cynical, but any time a scientist says "We should leave AI to industry" it is because they are scared about $$ going to something they don't work on
I've heard for >10 years "let industry lead" and we have LLMs, power plants & data centers
#NeuroAI needs research, not just venture capital
There is plenty of space to innovate in #NeuroAI. The issue has been that neuroscientists don't even try as they assume industry will do it.
Deferring AI and neural computing to an industry that only cares about selling ads is not a way to help further our understanding of the brain.
You can take BlueSky out of Twitter, but you can't keep the Twitter tech bros out of BlueSky
Well, we have another 20 years until the cortex field rediscovers a hippocampus finding.
I agree. Deep learning has huge value on its own. But it isn't brain inspired in intent or practice
I sometimes see "debates" with LeCun or Dally. What is the point? To convince them? Of what? It's a different field
Neuro may be able to overcome ANN limitations, but the brain path won't come from DL
It's unfortunate, but there really isn't any other monetization path that justifies the insane capital expenses they're investing in.
In the end, I try to be practical about all of this. Philosophically we can debate about what true understanding means, but practically we want better and smarter AI algorithms and to be able to fix the brain. How do we do that? If digital isn't sufficient, what is the scalable alternative?
I'm not a Turing worshiper, but I think you're fixated too much on the discrete/analog thing. The brain isn't analog all the way down; synapses and ion channels are really stochastic discrete elements. The stochasticity, not the continuity, is where the brain diverges from classical computing
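To make the point concrete, here's a toy sketch (my illustration, not from the thread): a textbook stochastic-release synapse, where each spike either releases a vesicle or it doesn't. The individual events are discrete 0/1 outcomes and only look analog when averaged; `p_release` is an arbitrary assumed value.

```python
import numpy as np

# Toy model of a synapse as a stochastic discrete element:
# each presynaptic spike triggers vesicle release with
# probability p_release. Per trial the outcome is binary;
# only the trial average looks like a continuous quantity.
rng = np.random.default_rng(1)
p_release = 0.3                               # assumed release probability
n_spikes = 10_000
released = rng.random(n_spikes) < p_release   # discrete 0/1 events
print(released.mean())                        # close to 0.3 only in expectation
```

The randomness here is the whole signal model: a downstream neuron never sees "0.3 of a vesicle", it sees a noisy stream of all-or-nothing events.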
You're not guilty of this to begin with. :) That's like me saying my New Year's resolution is not to start every day off with a Bloody Mary.
Well, I certainly believe that there are better models to use than serial Turing machines. Though "analog!" is a pretty weak alternative for a number of reasons.
But saying Turing computation fundamentally cannot represent what the brain is doing is a very high theoretical bar to get over.
Take a look at this recent paper of ours. This isn't to say that the brain is doing conjugate gradient, but getting neurons to solve linear systems is not just possible, it is rather natural.
www.nature.com/articles/s42...
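For context, a minimal conjugate-gradient sketch in plain NumPy, assuming nothing from the linked paper beyond the general idea: every update is built from residuals and weighted vector sums, the kind of local operations a neural circuit could plausibly carry out.

```python
import numpy as np

# Minimal conjugate gradient for Ax = b (A symmetric positive definite).
# Each iteration uses only vector updates driven by a residual
# ("error signal") -- no global matrix factorization needed.
def conjugate_gradient(A, b, tol=1e-8, max_iter=100):
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)  # exact line-search step size
        x += alpha * p
        r -= alpha * Ap
        p = r + ((r @ r) / rr) * p  # new conjugate direction
    return x

# Build a random well-conditioned SPD system and solve it
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b, atol=1e-6))  # True
```

Again, the claim isn't that cortex runs this loop; it's that iterative linear solvers decompose into the sort of distributed error-driven updates neurons are good at.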
The brain clearly solves the same types of problems (control, inference, etc) in a different way. The same functions but different algorithms on a different model of computation. It isn't marginalizing the brain to say that it computes; it helps demystify it. Which is what we have to do.
This is the danger of falling into the trap of implementations. Today we use a certain type of computer to do scientific computing and AI; but that doesn't mean that von Neumann machines are the only type of computer or that sequential linear algebra is the only type of math that is useful.
Assuming the brain is representing the world for decision making and survival, it is effectively modeling the world with neurons. That's exactly what numerical computing is - modeling something with something else - the substrate is just different than transistors in a stored program architecture.
The brain isn't doing Runge-Kutta in floating point on a von Neumann architecture, but that doesn't mean the principles of applied math and theoretical computer science don't apply.
The brain isn't magic. Math and computer science apply to it, just like the laws of physics do.
Basically, I claim that neural computation is just another numerical method, with limitations like any other and amenable to analysis like any other.
We simply don't yet know what that method is.
I get what you're saying, but the brain is representing the world (external and body) inexactly. Whether digital or analog, discrete or continuous, it doesn't matter. The brain is approximating some other dynamics with its dynamics. That approximation has numerical limitations like anything else.
In fact, I'd go so far as to argue that many neurological disorders are a breakdown of that robustness of neural computation that we take for granted.
That I disagree with. If the brain is computing, it has to do so reliably. Which means the same numerical stability issues matter. If I see a cat, I should always perceive a cat. And we do. Even if the underlying dynamics are chaotic.
To me, that is one of the biggest open questions in neuro.
My takeaway, which has stuck with me for ten years since, is that the brain's computations must be abstracted from the microsecond details of biophysics we can potentially measure. The timing of spikes matters, but relatively across a population, not individually.
Many years ago we had a study (file cabinet paper sadly) that showed no matter the numerical precision, the stiff nonlinearities and fanout of recurrent spiking circuits made simulations diverge.
At first this says simulations don't work, but the brain also has to operate reliably with such stiffness