Big Pasta Is Always Listening
Posts by Ridhi Bandaru
There seems to be a broad perception across psychology and neuroscience that work shouldn't be "too technical" in order to reach the broadest possible audience. While I think we should strive for accessibility, I feel that this attitude can also be self-defeating: why are we dumbing down?
#FeatureSession Alert! #EarlyCareerAwardWinner Stephen Ferrigno presents his work showing that monkeys and humans use similar, queue-like memory systems for hierarchical sequences, challenging traditional models of how structured information is represented. #CO32026 🧪
To them it was the best paper in town
To accompany my textbook (Computational Foundations of Cognitive Neuroscience) and the class I taught this semester, I'm open-sourcing my lecture slides:
gershmanlab.com/lectures.html
I'll continue to update these as I improve them.
Sorry ☹️
Very good article on what it would take for AI to have "agency" and self-preservation goals (w/ a few quotes from me).
& I appreciate the reiterated debunking of the "GPT-4 on its own lied to a TaskRabbit worker to solve a captcha" story.
www.quantamagazine.org/why-do-we-te...
Hyped!
One of the wildest things I learned about planarian flatworms: you can isolate their pharynx (throat) and it will autonomously engage in feeding behavior.
www.science.org/doi/full/10....
Exactly. It risks a class disparity like we've arguably not seen since feudal times, if not worse.
Presented by the Trustees of the Chantrey Bequest 1978
Claude Rogers, The Paraplegic, 1970
https://botfrens.com/collections/14375/contents/1124976
The science (knowing *which* models to fit, *why*, and what the answer means) still requires expertise. It didn't happen automatically.
But the implementation did. What are we actually training our students *for*? And are we teaching the right things?
We really need to start answering this. 3/3
Studies show reading full article instead of just the title provides a better understanding of the subject matter
Is this typical @ too mong lidn't read?
Chai w a biscuit conditional
I generally try to motivate the undergrads about the things I cover in my decision making class (e.g. 'you WILL encounter the Sunk Cost fallacy in your life and recognizing it could save your butt'),
but when it comes to counterfactuals I put this up:
he's not wrong… 🤷 sapir.psych.wisc.edu/papers/lupyan_bergen_201...
🔥
Excited about this! A lot of artists note it's probably what it trickles down to, which I think is really astute (and even profound). Sayuri Bhanap (artist) puts it like
Want to build a computer inside a transformer? Well, now the code is out:
www.percepta.ai/blog/constru...
Some cats are just so damn cool.
Is this about that workshop with the undergrad organizer?
Clarification: the gardens don't inherently have anything to do with museums; the Wikipedia article references the MNHN
An unexpected thing that blew my mind about this that I had to look up immediately:
Nuance: In French, a musée is generally for art/history, while a muséum refers specifically to natural history.
!!!
Where is this?
Is there gonna be an edition of this workshop this year?
I know the French guy walking us through our stretches didn't mean it this way but I think we can all learn a bit from his instructions to "now the big breath, first, we inspire, then we expire"
What they actually argue (see their last section) is that *if* a neural network did these things, then it must be implementing a symbolic system under the hood, but this would shed no light on cognitive architecture itself (i.e., the algorithms that are being implemented).
I think Fodor & Pylyshyn's 1988 paper is possibly the most mischaracterized paper in the history of cognitive science. It's often cited as arguing that neural networks cannot achieve systematicity, compositionality, and productivity. But that's not what they actually argue...
"Skills that seemed the most technical and forbidding can turn out to be the ones most easily automated."
Or as Minsky said, when it comes to AI:
"Hard things are easy and easy things are hard."