
Posts by Karim Habashy

At some point, research output will scale massively with compute.

5 days ago 0 0 0 0
Fellowships We welcome applications from outstanding researchers applying for external fellowship funding.

If you're interested in doing a 2-year Schmidt "AI in Science" postdoc fellowship in neuroscience/AI stuff with me starting in July or October 2027, take a look at this and get in touch soon. We've had a lot of luck recently getting these fellowships.

www.imperial.ac.uk/electrical-e...

6 days ago 13 5 0 1

Idea for grant proposal formative assessment: a very short proposal (say, 1 page), and reviewers are each allowed 3 questions, to which you get a very limited character count to respond. Overall effort is hugely reduced, and it lets the proposal evolve within a round rather than needing resubmission. Thoughts?

1 week ago 23 4 7 1

I used to think that the biggest harm LLMs can do is deepfakes, and I think that can be controlled. But if they can exploit the software infrastructure that powers today's economy and security... that is a whole new ballgame.

1 week ago 0 0 0 0

Is it time to focus on alignment? The rate of advancement of LLMs is kind of scary. Though I personally believe they might not lead us to human-like AGI, and I don't advocate for them either, their rate of advancement on codebases and in cybersecurity still warrants caution.

1 week ago 0 0 1 0
Nicholas Carlini - Black-hat LLMs | [un]prompted 2026 (YouTube video by unprompted)

Given yesterday’s news about Anthropic, this is an excellent talk from 2 weeks ago about how LLMs exploit 0-day vulnerabilities and how dramatically their capabilities have increased.

youtu.be/1sd26pWhfmg?...

1 week ago 8 4 0 0
AI Rollout Is a People Problem: A Pulse on All Things AI, Part 2 - The Scholarly Kitchen This post explores the human decisions needed in implementing AI at organizations.

Will AI agents and automated editors be the first readers of your next academic journal submission?

"The system makes preliminary evaluative judgments that humans then review. The human role shifts from doing the assessment to auditing the assessment." scholarlykitchen.sspnet.org/2026/04/08/a...

1 week ago 4 3 0 1
ARC-AGI-3 (YouTube video by ARC Prize)

What are the chances that a hobbyist can solve this? 🤔
www.youtube.com/watch?v=f_xT...

2 weeks ago 0 0 0 0

Personally, I prefer when a field is standardized, and I feel that for LLMs that might take some time. It is better to invest time in something that lasts. Unfortunately, if you wait too long, you sometimes miss the "good opportunities" train. There is always a need for luck in adopting new technologies.

2 weeks ago 0 0 0 0
Is RAG Still Needed? Choosing the Best Approach for LLMs (YouTube video by IBM Technology)

In my opinion, the problem with LLM applications is the rate of change in coding norms and tools. Every now and then a new idea comes out that might offer benefits to your application, at the cost of re-coding a maybe-not-so-small chunk of it. www.youtube.com/watch?v=UabB...

2 weeks ago 0 0 1 0

Science would be so much better if we did review (of grants and papers) constructively and collaboratively, instead of only using them to produce binary accept/reject decisions. To do that, we have to separate review processes from decisions. One idea for grants 👇

4 weeks ago 23 6 2 0

I like the idea, and I am thinking of an extension for it. Grants can be divided into two phases like you said: 1) a question phase which, if successful, leads to a 2-3 month mini-grant with a prototype result; 2) if the prototype is satisfactory and the follow-up questions seem good, a full grant can be awarded.

4 weeks ago 2 1 0 0

Had great fun discussing some recent work and the value of toy models in neuroscience last night!

Our next workshop will be:
* 16.04
* 16-20:30
* At the Crick

Sign up (for free) and come along!
www.eventbrite.co.uk/e/understand...

1 month ago 15 6 0 1
Deadline Extension

Teaching Assistant and student applications close on 22 March


📣 We've extended the application deadline for #Neuromatch and #Climatematch Academy Students and Teaching Assistants by one week! 🚨

📅 New deadline: March 22, midnight in your local time.

➡️ Apply now: portal.neuromatchacademy.org/sign-in

Tag someone who almost missed out!

#TA #SummerSchool

1 month ago 8 4 0 2

I might be wrong about this, but as an early career researcher, this is my current conviction.

1 month ago 0 0 0 0

Hot take: I would prefer that subfields of machine learning not use the key-query terminology (as a mental frame) at every possible chance. I think it biases thinking toward a lower-dimensional subspace and limits our ability to draw more general conclusions and analogies. For example, see contrastive learning.

1 month ago 0 0 1 0

On first thought, I have a feeling that chaos and truly random events (like radioactive decay) will pose predictive challenges to any artificial superintelligence. 🤔

1 month ago 0 0 0 0
Meta reportedly plans sweeping layoffs as AI costs increase Sources tell Reuters layoffs could affect 20% or more of company as plans reflect broader tensions within big tech

Is this the beginning?
www.theguardian.com/technology/2...

1 month ago 0 0 0 0
Post image

I find this plot very telling about the current state of LLMs... maybe more parameters or engineering overhead will help? Source: arcprize.org/arc-agi/2/

1 month ago 0 0 0 0

In my opinion, embedding models deserve a somewhat bigger share of the praise than they get compared to transformers. Whenever there is talk about LLMs, most of the discussion revolves around transformers, but I think that without good embedding infrastructure, all of this would have been much more difficult.

2 months ago 0 0 0 0
Anthropic AI safety researcher quits with 'world in peril' warning It comes in the same week an OpenAI researcher resigned amid concerns about its decision to start testing ChatGPT ads.

What is true? What is hype? What is an ad for AI? www.bbc.com/news/article...

2 months ago 0 0 0 0

Maybe the goal is to capture the attention of said CEO in the hope of a seat at the table? 😆

2 months ago 1 0 1 0
Post image

I asked ChatGPT (the free version) for "an idea for a scientific paper I can work on". Here is the first option from its answer 😆:

2 months ago 0 0 0 0

It doesn't get the hidden connection between the two topics, the underlying structure; it lists them as separate... Self-organization is one way to achieve dimensionality reduction! It has no grasp of this.

2 months ago 0 0 0 0
Post image

1) Activity-dependent self-organization models 2) Dimension-reduction models.

2 months ago 1 0 1 0

This might be a good test case for why LLMs are not the answer: I tried using SciSpace to do a literature review on the "role of visual experience in the development of visual feature maps". At one point it presented two theoretical frameworks for this topic: ...

2 months ago 1 0 1 0

LLMs have a wider impact on the human psyche than previously thought.

2 months ago 1 0 0 0
Constructing AI in education Photo by Lachlan Donald on Unsplash Most definitions of “AI in education” start from technical categories. First there was rules-based AI, followed by data-driven predictive AI, and now generative …

"Stakeholders construct AI differently ... in ways that are useful to them ... and these differences have significant social and educational implications.” codeactsineducation.wordpress.com/2026/01/16/c...

3 months ago 7 5 1 0

Maybe not so smart a take, but there are four types (or subtypes) of ON-OFF direction-selective retinal ganglion cells in mice: upward, downward, backward and forward (Sanes and Masland 2015).
Yet why didn't nature use an efficient 2-cell binary code for these directions? Nature makes me 🤪.

6 months ago 0 0 0 0

That was a very amusing article to read. My new favourite quote is: "Unfortunately, people have a considerable ability to 'explain away' events that are inconsistent with their prior beliefs".
I think there ought to be more support and encouragement for work that independently verifies other results.

7 months ago 1 0 0 0