At some point, research output will scale massively with compute.
Posts by Karim Habashy
If you're interested in doing a 2 year Schmidt "AI in science" postdoc fellowship in neuroscience/AI stuff with me starting in July or Oct 2027 take a look at this and get in touch soon. We've had a lot of luck recently getting these fellowships.
www.imperial.ac.uk/electrical-e...
Idea for grant proposal formative assessment. Very short proposal (1 page say) and reviewers are each allowed 3 questions which you get a very limited character count to respond to. Overall effort hugely reduced and lets the proposal evolve within a round rather than needing resubmission. Thoughts?
I used to think that the biggest harm LLMs can do is deepfakes, and I think that can be controlled. But if they can exploit the software infrastructure that powers today's economy and security... that is a whole new ballgame.
Is it time to focus on Alignment? The rate of advancement of LLMs is kind of scary. I personally believe they might not lead us to human-like AGI, and I'm not advocating for them, but their rate of advancement on code bases and in cybersecurity still warrants caution.
Given yesterday’s news about Anthropic, this is an excellent talk from 2 weeks ago about how LLMs exploit 0-day vulnerabilities and how dramatically their capabilities have increased.
youtu.be/1sd26pWhfmg?...
Will AI agents and automated editors be the first readers of your next academic journal submission?
"The system makes preliminary evaluative judgments that humans then review. The human role shifts from doing the assessment to auditing the assessment." scholarlykitchen.sspnet.org/2026/04/08/a...
What are the chances that a hobbyist can solve this? 🤔
www.youtube.com/watch?v=f_xT...
Personally, I prefer it when a field is standardized, and I feel that for LLMs that might take some time. It is better to invest time in something that lasts. Unfortunately, if you wait too long, you can miss the "good opportunities" train. There is always an element of luck in adopting new technologies.
In my opinion, the problem with LLM applications is the rate of change in coding norms and tools. Every now and then a new idea comes out that might offer benefits to your application, at the cost of re-coding a "maybe not so small" chunk of it. www.youtube.com/watch?v=UabB...
Science would be so much better if we did review (of grants and papers) constructively and collaboratively, instead of only using them to produce binary accept/reject decisions. To do that, we have to separate review processes from decisions. One idea for grants 👇
I like the idea and I am thinking of an extension for it. Grants can be divided into two phases like you said: 1) a question phase which, if successful, leads to a 2-3 month mini grant with a prototype result; 2) if the prototype is satisfactory and the answers to the follow-up questions seem good, a full grant can be given.
Had great fun discussing some recent work and the value of toy models in neuroscience last night!
Our next workshop will be:
* 16.04
* 16-20:30
* At the Crick
Sign up (for free) and come along!
www.eventbrite.co.uk/e/understand...
Deadline Extension: Teaching Assistant and student applications close on 22 March
📣 We've extended the application deadline for #Neuromatch and #Climatematch Academy Students and Teaching Assistants by one week! 🚨
📅 New deadline: March 22, midnight in your local time.
➡️ Apply now: portal.neuromatchacademy.org/sign-in
Tag someone who almost missed out!
#TA #SummerSchool
I might be wrong about this, but as an early career researcher, this is my current conviction.
Hot take: I'd prefer that subfields of machine learning not use the key-query terminology (as a mental model) at every possible chance. I think it biases thinking towards a lower-dimensional subspace and limits our ability to draw more general conclusions and analogies. For example, see contrastive learning.
On first thought, I have a feeling that chaos and truly random events (like radioactive decay) will pose predictive challenges to any artificial superintelligence. 🤔
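The chaos half of that point can be made concrete with a minimal sketch (my own illustration, not from the post): in the logistic map at r = 4, a perturbation of one part in ten billion grows until the two trajectories are completely decorrelated, so any predictor with finite-precision knowledge of the initial state fails at long horizons.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n); at r = 4.0 it is chaotic.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)            # reference trajectory
b = logistic_trajectory(0.2 + 1e-10)    # tiny perturbation of the start
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap within 50 steps: {max(gap):.3f}")
```

The initial-condition value 0.2 and the 1e-10 perturbation are arbitrary choices; the exponential growth of the gap is the generic behaviour.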
I find this plot very telling about the current state of LLMs...maybe more parameters or engineering overheads will help? Source: arcprize.org/arc-agi/2/
In my opinion, embedding models deserve a somewhat bigger share of the praise relative to transformers. Whenever there is talk about LLMs, most of the discussion revolves around transformers, but without good embedding infrastructure, all of this would have been more difficult.
Maybe the goal is to capture the attention of the said CEO in the hope of a seat at the table? 😆
I asked ChatGPT (the free version) for "an idea for a scientific paper I can work on". Here is the first option from its answer 😆:
It doesn't get the hidden connection between the two topics, the underlying structure; it lists them as separate... Self-organization is one way to achieve dimensionality reduction! It has no grasp of this.
1) Activity-dependent self-organization models 2) Dimension-reduction models.
This might be a good test case of why LLMs are not the answer: I tried using Scispace to do a literature review on the "role of visual experience in the development of visual feature maps". At one point it presented two theoretical frameworks for this topic: ........
LLMs have a wider impact on the human psyche than previously thought
"Stakeholders construct AI differently ... in ways that are useful to them ... and these differences have significant social and educational implications.” codeactsineducation.wordpress.com/2026/01/16/c...
Maybe a not so smart take, but there are four types (or subtypes) of ON-OFF directionally selective retinal ganglion cells in mice: upward, downward, forward and backward (Sanes and Masland, 2015).
Yet, why didn't nature use an efficient 2-cell binary code for these directions? Nature makes me 🤪.
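The arithmetic behind the question: four directions need only ceil(log2(4)) = 2 binary cells, versus the retina's four labeled lines. A minimal sketch of that hypothetical 2-cell code (the direction labels and cell names are illustrative, not biological):

```python
# Four motion directions encoded by two binary "cells" (2 bits),
# since ceil(log2(4)) == 2. A purely hypothetical code, for illustration.
directions = ["up", "down", "forward", "backward"]

def encode(direction):
    """Map a direction to a (cell_A, cell_B) firing pattern."""
    i = directions.index(direction)
    return (i >> 1 & 1, i & 1)

def decode(bits):
    """Recover the direction from the two cells' joint activity."""
    return directions[bits[0] * 2 + bits[1]]

for d in directions:
    assert decode(encode(d)) == d
print({d: encode(d) for d in directions})
```

One hedged guess at nature's side of the trade-off: a one-hot (labeled-line) code is less compact, but losing a single cell removes one direction rather than scrambling the whole code, and downstream neurons can read it without decoding the joint activity of two cells.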
That was a very amusing article to read. My new favourite quote is "Unfortunately, people have a considerable ability to ‘explain away’ events that are inconsistent with their prior beliefs".
I think there ought to be more support/encouragement for work that independently verifies other results.