
Posts by Aadam

Meet Tuna: a brand new, modern, modal launcher for macOS
Friends, I've been spending late nights building my very own, complete and perfect launcher for macOS. It's called Tuna. 🌐 GET IT: https://tunaformac.com 💬 D...

Finally recorded the Tuna introduction video www.youtube.com/watch?v=vkm...

1 month ago 36 4 4 1

London #neuroscience people you may like this. We're hosting a series of talks at Imperial & Crick on how to get experiment and theory working together better. Each session will have a talk around this and extended networking / group discussion on the questions raised. Plus, free food!

🤖🧠🧪

1 month ago 31 11 1 0
Reclaiming my mind in the age of AI

👋 Hello World

Welcome to my humble digital abode!

In my first post on this newly minted Ghost blog, I aim to articulate my goals and motivations behind this blog, and make some resolutions for myself and some promises to you (my dear reader or future self).

Before delving into my motivations, goals, and promises, let me first quickly introduce myself. My name is **Aadam** (yeah, that's my full name), and I'm currently doing a PhD in Computer Science. I won't say I'm a particularly interesting fellow, but I do have some interests. There are some redeeming qualities to my character as well, but I'm not going to reveal all the goodies in our first meet-and-greet, am I? You'll have to stick around (if intrigued) to find out more.

# Why this Blog?

To answer this question, I'll have to take you on a tangent and tell you a woeful story.

Once upon a time there lived a starry-eyed boy, eager to learn and make his mark on the world. Inspired by sci-fi movies and novels like *I, Robot*, he dreamed of one day creating truly intelligent machines. To realize his dream, he learned to code, enjoyed working through dense C++ manuals, and represented his institution in several coding competitions.

With the passage of time, though, his interests shifted, his duties increased, and his priorities changed. Life happened. And more importantly, the world evolved. Suddenly, skills that had been sought after and valued were becoming obsolete. Even though AI wasn't truly intelligent yet, it became proficient enough to replace some skills that previously required intelligence. Skills such as programming and writing, which once demanded immense effort, were being delegated to AI agents. And if you didn't use these new technologies, you'd get left behind.

So, with time, he started relying on these technologies and stopped developing and reinforcing his own skills in those domains. And slowly but surely, his skills atrophied.

This is one of the main reasons behind the "why" of this blog for me. I don't want my skills and capabilities to fade away. I want to practice and improve my writing in a carefree, safe, and personal environment, where I don't have to worry about meeting deadlines or quotas. I can polish my skills at my own pace, writing about what I want and developing my own voice. There will surely be many errors (given that English isn't my first language), but that's fine. After all, you ~~only~~ mainly learn from your mistakes. I don't want to be entirely dependent on AI tools for writing and thinking.

> “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
>
> ― **Frank Herbert,** *Dune*

I know that AI agents will be prevalent in the foreseeable future, and that they are just tools (not actually intelligent beings, for now 🤨) that we should use to perform our tasks efficiently. They have their roles in writing, coding, brainstorming, re/searching, prototyping, and more, and can help us reach our goals much more quickly and efficiently. I'm not against their usage; I regularly use them to automate or skip mundane tasks. I just don't want to lose my own skills in the process. I have noticed this gradual skill decay personally, and there have been some public reports on it as well. After the advent of the calculator, it wasn't really necessary to memorize and practice complex calculations when you could simply get the answer quickly; that skill isn't required anymore. I wonder which skills will become obsolete after the agentic AI era.

So, the aim of this blog is quite selfish, I'd say. I just want to develop my writing skills. I want to be able to confidently articulate my thoughts for a public audience.

I really enjoyed the following quote by Brandon Sanderson (one of my favorite fantasy authors) from his recent talk, where he discussed why he doesn't consider AI-generated output to be "art":

> Remember art is not just the story. It is not just the painting or the sculpture or whatever else you love to create. It's also the process of creation and what that process did to you. We make art because we can't help it. It's part of us. We understand what it is. We are drawn to it because we are of the same substance. We are the arts.
>
> ― Brandon Sanderson – We Are The Art | Brandon Sanderson’s Keynote Speech

The basic idea is that "art" isn't the end product (a generated poem, drawing, painting, or novel), but the journey one took to get to that end product. And that's why I'm starting this blog: to go on a journey to rediscover myself and redevelop my skills. To share what I learn along the way. To revel in the joy of writing, living, and learning.

# What to expect?

100% human-generated, error-prone prose. That's my only commitment, to both myself and you, my dear reader. Again, if I used AI, it would defeat the whole purpose of this blog. So, from brainstorming to outlining to writing and finally editing, everything will be done by me and me alone.

This is a personal blog, so don't expect adherence to a specific niche topic. I'll write about whatever catches my attention at the moment. I'm mainly interested in: Artificial Intelligence (Machine Learning, Deep Learning, Reinforcement Learning), Note-taking (Obsidian, Logseq, AnyType, Thymer, Tana), Programming (Julia, Go, Python), Fantasy Novels, Academic Life, and more.

* * *

So, if you want to get to know me more, learn about the journey I'm embarking on, and track my progress on this exciting path, stick around, introduce yourself in the comments, and follow along. If not, I still thank you for reading my incoherent thoughts and sticking till the end of this post.

Looking forward to writing and sharing more, Insha'Allah.




1 month ago 2 1 0 0
Python for Computational Science Week, 7-15 Feb 2026

Start with what you know, end with what you need. 
Python Week makes it doable.


We’ve kicked off #Python for #ComputationalScience Week, but it’s not too late to join!

Come learn, practice, and build momentum!

Catch up on #PythonWeek here:
www.reddit.com/r/neuromatch...

1 month ago 8 3 0 0

Looking forward to the invite

3 months ago 2 0 0 0
RL Debates 2: Fritz "learning for the sake of learning" Sommer
YouTube video by Sensorimotor AI

Fritz introduced an information-theoretic, first-principles approach to modeling exploration through the maximization of "predicted information gain."

📽️ Watch the full presentation here: www.youtube.com/watch?v=rlF-...

🧠🤖🧠📈

5 months ago 12 5 1 0

Neuromatch Academy 2026 is coming!

✅ Hands-on projects
✅ Global community
✅ Affordable fees

Applications open Feb 2026.

Learn more: neuromatch.io/courses/
Sign up for updates: neuromatch.io/mailing-list/

#ComputationalNeuroscience #DeepLearning #ClimateScience #NeuroAI #Neuroscience #AI

5 months ago 12 9 0 0
✅ What's the Most Surprising Capability Monty Gains Through Sensorimotor Learning #sensorimotorai
YouTube video by Thousand Brains Project

🚨 @vivianeclay.bsky.social and @cortical-canonical.bsky.social respond to “What's the Most Surprising Capability Monty Gains Through Sensorimotor Learning?”

youtube.com/shorts/lUQkb...

Read the paper: arxiv.org/abs/2507.04494
Read the plain language explainer: thousandbrains.org/thousand-bra...

5 months ago 3 2 0 0

I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time!

www.biorxiv.org/content/10.1...

6 months ago 220 86 9 9

Exciting news! 🎉 Our Computational Neuroscience course has been awarded NIH BRAIN Initiative funding! Students will get hands-on experience w real BRAIN Initiative datasets, helping them build computational skills that are essential for the future of neuroscience.

www.linkedin.com/feed/update/...

6 months ago 75 25 1 1

“Perception as Inference” is a century-old idea that has inspired all major theories in neuroscience 🧠, including:

✅ Sparse Coding
✅ Predictive Coding
✅ Free Energy Principle
& more!

In my new blog post, I build the intuition behind this idea from the ground up 👉[1/6]🧵

🧠🤖🧠📈

9 months ago 24 5 1 1

Google Glass walked so Meta Ray Ban Glasses could also walk

6 months ago 191 16 10 2

What drives behavior in living organisms? And how can we design artificial agents that learn interactively?

📢 To address these, the Sensorimotor AI Journal Club is launching the "RL Debate Series"👇

w/ @elisennesh.bsky.social, @noreward4u.bsky.social, @tommasosalvatori.bsky.social

🧵[1/5]

🧠🤖🧠📈

6 months ago 36 10 2 5
Logseq DB - Task Management - Unfiltered and Unedited
YouTube video by H D

An overview of @logseq DB Task Management, with the new Schema and improved UX: youtu.be/ITCcMFNSSmw?... #logseq #TaskManagement #pkm #Productivity

6 months ago 4 1 0 1

It is almost time to welcome you all to Santa Cruz! 🦕

We will start with an exciting and timely keynote by @guyvdb.bsky.social on "Symbolic Reasoning in the Age of Large Language Models" 👀

📆 Full conference schedule: 2025.nesyconf.org/schedule/

6 months ago 18 5 1 0
Diagram of how the "collaborative modelling of the brain" (COMOB) project started. Starting material led to group research or solo research, coming together in monthly online workshops in an iterative cycle, finishing with writing up together. The diagram is illustrated with colourful cartoon blob characters.

Is anarchist science possible? As an experiment, we got together a large group of computational neuroscientists from around the world to work on a single project without top down direction. Read on to find out what happened. 🤖🧠🧪

7 months ago 75 28 2 3
Comic strip of four colored stick-figure characters. In the first panel, green, pink, orange characters speak into megaphones. In the second panel, the green character looks frustrated while the others are silent as they look at him. In the third panel, the pink and orange characters chat with each other, while the green character says, “Bah, I give up. Not getting any engagement here.” In the last panel, the green character walks away as the pink and orange continue their conversation. Text at the bottom reads “@debbieohi.com.”

Looking for more engagement on Bluesky?

I've compiled tips with the help of others in the community: publish.obsidian.md/debbieohi/bl...

#BlueSkyTips

7 months ago 156 36 7 5
Task-Optimized Convolutional Recurrent Networks Align with Tactile Processing in the Rodent Brain

1/ What if we make robots that process touch the way our brains do?
We found that Convolutional Recurrent Neural Networks (ConvRNNs) pass the NeuroAI Turing Test in currently available mouse somatosensory cortex data.
New paper by @Yuchen @Nathan @anayebi.bsky.social and me!

10 months ago 18 5 1 7

Introducing Bases, a new core plugin that lets you turn any set of notes into a powerful database.

Now available to everyone with Obsidian 1.9!

7 months ago 434 93 15 54
ALT: a horse drawn carriage going through a grassy area

Join us on a (mathematical) journey to a shire - oops, HIGHER - standard and principled evaluation schema for our benchmark datasets. This is the reward of the RINGS framework.

📒 Blog: aidos.group/blog/rings/
📃 Paper: doi.org/10.48550/arX...
👩‍💻 Code: github.com/aidos-lab/ri...

8 months ago 5 2 0 0
Sensory responses of visual cortical neurons are not prediction errors Predictive coding is theorized to be a ubiquitous cortical process to explain sensory responses. It asserts that the brain continuously predicts sensory information and imposes those predictions on lo...

1/3) This may be a very important paper: it suggests that there are no prediction-error-encoding neurons in sensory areas of cortex:

www.biorxiv.org/content/10.1...

I personally am a big fan of the idea that cortical regions (allo and neo) are doing sequence prediction.

But...

🧠📈 🧪

8 months ago 220 79 13 5

1/
🚨Another New Paper Drop! 🚨 “Hierarchy or Heterarchy? A Theory of Long-Range Connections for the Sensorimotor Brain”

👇 Dive into the full thread 🧵
arxiv.org/abs/2507.05888

8 months ago 8 4 1 0
A Fundamental Unit Of Intelligence
YouTube video by Artem Kirsanov

🔥 Want to understand how the neocortex builds intelligence?

Artem Kirsanov made a great video on the Thousand Brains Theory, the foundation of everything we're building here at the Thousand Brains Project!

🎥 youtu.be/Dykkubb-Qus
#Neuroscience #AI #ThousandBrains #Neocortex

8 months ago 5 2 0 1

Hello world! This is the RL & Agents Reading Group

We organise regular meetings to discuss recent papers in Reinforcement Learning (RL), Multi-Agent RL and related areas (open-ended learning, LLM agents, robotics, etc).

Meetings take place online and are open to everyone 😊

8 months ago 37 12 1 3

Announcing the new "Sensorimotor AI" Journal Club — please share/repost!

w/ Kaylene Stocking, Tommaso Salvatori, and @elisennesh.bsky.social

Sign up link: forms.gle/o5DXD4WMdhTg...

More details below 🧵[1/5]

🧠🤖🧠📈

8 months ago 24 12 1 0

Super excited to share our new paper! We've spent the past few years building an alternative AI approach, and now we demonstrate a whole range of advantages: robust object & pose detection, generalization, data- and compute-efficient training, continual learning, shape bias, intelligent policies, and more! 🦾

8 months ago 4 2 0 0
General agents need world models Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of gene...

Nice paper arxiv.org/abs/2506.01622

8 months ago 5 3 1 0
Log-Normal Multiplicative Dynamics for Stable Low-Precision Training of Large Networks Studies in neuroscience have shown that biological synapses follow a log-normal distribution whose transitioning can be explained by noisy multiplicative dynamics. Biological networks can function sta...

Together with @repromancer.bsky.social, I have been musing for a while that the exponentiated gradient algorithm we've advocated for comp neuro would work well with low-precision ANNs.

This group got it working!

arxiv.org/abs/2506.17768

May be a great way to reduce AI energy use!!!

#MLSky 🧪

8 months ago 39 13 3 0

New paper dropped! “Hierarchy or Heterarchy? A Theory of Long-Range Connections for the Sensorimotor Brain”

The Thousand Brains Theory explains the cortex’s non-hierarchical connections, and why they matter for building machine intelligence.

Read it now: arxiv.org/abs/2507.05888

8 months ago 12 3 0 3

1/
This week, we’re releasing two milestone papers: one shows the amazing capabilities of thousand-brains systems and their benefits over deep learning, the other proposes a new theory of long-range connections in the neocortex. Years of work led to this.

8 months ago 11 2 1 4