
Posts by Pasquale Minervini

yep!!

2 weeks ago 1 0 0 0

If you are interested in privacy-preserving clinical NLP, we are recruiting a postdoc at the University of Edinburgh! The work is on LLMs/VLMs, AI privacy, and real-world health data in secure research environments.
Apply by April 6th, 2026!

More details: elxw.fa.em3.oraclecloud.com/hcmUI/Candid...

2 weeks ago 5 2 2 0

My amazing colleagues Sid and Michael are looking for a postdoc! 👇

3 weeks ago 1 0 0 0

We are advertising a postdoc position to work on #generative #models, #structure #induction, and MI #estimation with Michael Gutmann as part of @genaihub.bsky.social !

elxw.fa.em3.oraclecloud.com/hcmUI/Candid...

Get in touch! (#ML #AI)

👉 homepages.inf.ed.ac.uk/snaraya3/
👉 michaelgutmann.github.io

3 weeks ago 4 3 0 2

Chatted with the amazing @elissawelle.bsky.social from @theverge.com about @rohit-saxena.bsky.social’s “Lost in Time” work (arxiv.org/abs/2502.05092) and much more! You can find the full article here 👇

4 months ago 1 0 0 0

I can think of at least 2-3 greater criminals in history btw

5 months ago 0 0 0 0
Post image

Check out Yu Zhao's (@yuzhaouoe.bsky.social) latest work, “Learning GUI Grounding with Spatial Reasoning from Visual Feedback” (www.arxiv.org/abs/2509.21552), done during his internship at MSR (@msftresearch.bsky.social)!

New SOTA 🏆 results on ScreenSpot-v2 (+5.7%) and ScreenSpot-Pro (+110.8%)!

5 months ago 2 1 0 0

trend: non-NVIDIA training

DeepSeek V3.1 was trained on Huawei Ascend NPUs

this one is a South Korean lab training on AMD

7 months ago 39 7 2 2
Post image

I really needed a Deep Research MCP server to use with Claude Code and other tools — here it is: github.com/pminervini/d...

7 months ago 6 0 0 0
Genie 3 and the future of neural game engines: Google DeepMind just announced Genie 3, their new promptable world model, which is another term for neural game engine. This is a big neura...

@togelius.bsky.social has thoughts on Genie 3 and games togelius.blogspot.com/2025/08/geni...

Fairly close to my own, though I didn't get to preview the tech.

Walking around a generated image-to-image world is not the same as playing a game. There are no game objectives.

8 months ago 15 3 4 0
diagram from Anthropic paper with an icon & label that says “subtract evil vector”

quick diagram of Bluesky’s architecture and why it’s nicer here

8 months ago 72 5 4 1

*Test-Time

8 months ago 0 0 0 0
Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.

Anthropic research identifies “inverse scaling in test-time compute,” where longer reasoning degrades AI performance. On certain tasks, models become more distracted by irrelevant data or overfit to spurious correlations.
#MLSky

8 months ago 9 1 1 0

Supermassive congrats to Giwon Hong (@giwonhong.bsky.social) for the amazing feat! 🙂

8 months ago 3 1 0 0

Still not as bad as Microsoft Teams

8 months ago 734 168 12 5

The amazing folks at EdinburghNLP will be presenting a few papers at ACL 2025 (@aclmeeting.bsky.social); if you're in Vienna, touch base with them!

8 months ago 12 0 0 0

Hm, hard disagree here. I really fail to see how this is misconduct akin to bribery, it's just a defense mechanism against bad reviewing practices. @neuralnoise.com

8 months ago 5 2 1 0

🚨 New Paper 🚨
How effectively do reasoning models reevaluate their thoughts? We find that:
- Models excel at identifying unhelpful thoughts but struggle to recover from them
- Smaller models can be more robust
- Self-reevaluation ability is far from true meta-cognitive awareness
1/N 🧵

9 months ago 12 3 1 0
Three panels at the top describe task types with example prompts:

1. Simple Counting Tasks with Distractors (Misleading Math & Python):
   - Prompts mention an apple and an orange, with added irrelevant or confusing information (e.g., a probabilistic riddle, Python code) before asking the straightforward question: “Calculate how many fruits you have.”
2. Regression Tasks with Spurious Features (Grades Regression):
   - Given XML-style records about a student, the model must predict grades from features like sleep hours, social hours, and stress level. The challenge lies in identifying relevant vs. spurious attributes.
3. Deduction Tasks with Constraint Tracking (Zebra Puzzles):
   - Complex logical reasoning puzzles with multiple interrelated clues. Example: “What position is the person who likes salmon at?” Constraints involve foods, names, and relationships like “to the left of.”

Bottom row contains 3 line plots comparing model performance across tasks:

- Misleading Math (left plot): accuracy drops sharply for some models as reasoning tokens increase. Claude Sonnet 4 maintains high performance; o3 and DeepSeek R1 hold relatively stable accuracy; Qwen3 32B and QwQ 32B drop more.
- Grades Regression (middle plot): shows negative RMSE (higher is better). Claude models remain strong across token counts; o3 also performs well. Qwen3 and QwQ struggle, with DeepSeek R1 performing modestly.
- Zebra Puzzles (right plot): accuracy vs. average reasoning tokens. o3 and Claude Sonnet 4 maintain the highest performance; other models (e.g., DeepSeek R1, Qwen3 32B, QwQ 32B) show performance degradation or plateaus. Error bars reflect variability.

Each plot uses colored lines with markers to indicate different model names.
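The distractor construction in the first panel can be sketched in a few lines (a toy version written to illustrate the setup, not the paper's actual code; the specific distractor strings are made up): the underlying question is trivial, and irrelevant quantitative material is inserted before it.

```python
import random

def misleading_math_prompt(n_apples: int, n_oranges: int, seed: int = 0) -> str:
    """Build a trivial counting question padded with irrelevant distractors."""
    rng = random.Random(seed)
    distractors = [
        f"There is a {rng.randint(1, 99)}% chance it rains tomorrow.",
        "def fib(n): return n if n < 2 else fib(n - 1) + fib(n - 2)",
        f"A riddle: a bat and a ball cost ${rng.randint(1, 5)}.10 in total.",
    ]
    facts = f"You have {n_apples} apple(s) and {n_oranges} orange(s)."
    # The correct answer is always n_apples + n_oranges, regardless of the
    # probabilistic riddle or Python code placed in between.
    return " ".join([facts, *distractors, "Calculate how many fruits you have."])
```

The finding is that, on prompts of this shape, longer reasoning traces give some models more opportunity to latch onto the distractors rather than the trivial count.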


Inverse scaling of reasoning models

a research collab demonstrated that there are certain types of tasks where all top reasoning models do WORSE the longer they think

things like getting distracted by irrelevant info, spurious correlations, etc.

www.arxiv.org/abs/2507.14417

8 months ago 21 2 2 0

Reasoning is about variable binding. It’s not about information retrieval. If a model cannot do variable binding, it is not good at grounded reasoning, and there’s evidence accruing that large scale can make LLMs worse at in-context grounded reasoning. 🧵
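A toy illustration of what variable binding means here (my own sketch, not from the thread): grounded reasoning requires answering from the bindings established in the current context, not from prior associations between names and values.

```python
# Toy probe for variable binding vs. retrieval.
# The bindings are deliberately counter-stereotypical, so a system that
# answers from association rather than from context will get them wrong.
context = {
    "Alice": "garden",
    "Bob": "kitchen",
}

def answer(entity: str, bindings: dict) -> str:
    """Resolve the query by looking up the in-context binding."""
    return bindings[entity]

# Binding-faithful answers come from the context:
assert answer("Alice", context) == "garden"
# A retrieval-driven model might instead answer "kitchen" if
# "Alice ... kitchen" co-occurred more often in its training data.
```

The claim in the thread is that this lookup-from-context behaviour, trivial for a dict, is exactly what scaled-up LLMs can get worse at in context.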

10 months ago 55 9 4 3

Hi @ilsebyl.bsky.social welcome to bsky! 🚀🚀🚀

8 months ago 2 0 1 0
Paper page - Inverse Scaling in Test-Time Compute

Sometimes, too much reasoning can hurt model performance! New research from Anthropic (@anthropic.com) by Aryo Pradipta Gema (@aryopg.bsky.social) et al.: huggingface.co/papers/2507....

8 months ago 5 0 0 0

“LLMs can’t reason” 😅

8 months ago 5 0 0 0

My "Math, Revealed" series is freely available to anyone -- no paywall! -- in the thread below.

9 months ago 137 53 6 5

There are a few more for another prompt and that’s it

9 months ago 1 0 0 0

Spotlight poster coming soon at #ICML2025
@icmlconf.bsky.social!
📌East Exhibition Hall A-B E-1806
🗓️Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT
📜 arxiv.org/pdf/2410.12537

Let’s chat! I’m always up for conversations about knowledge graphs, reasoning, neuro-symbolic AI, and benchmarking.

9 months ago 11 2 1 2
What Counts as Discovery? Rethinking AI’s Place in Science

This essay by Nisheeth Vishnoi is a thoughtful meditation on the nature of science and a rebuttal to the notion that AI systems are going to replace human scientists anytime soon. Worth reading.

nisheethvishnoi.substack.com/p/what-count...

9 months ago 73 12 4 1

"in 2025 we will have flying cars" 😂😂😂

9 months ago 399 91 8 35
Flowchart of the AXIS algorithm with 5 parts. The top-left has the memory, the centre-left has the user query, the centre-bottom has the final explanation, the centre has the LLM, and the right has the multi-agent simulator.

Screenshot of the arXiv paper

Preprint alert 🎉 Introducing the Agentic eXplanations via Interrogative Simulations (AXIS) algo.

AXIS integrates multi-agent simulators with LLMs by having the LLMs interrogate the simulator with counterfactual queries over multiple rounds for explaining agent behaviour.

arxiv.org/pdf/2505.17801
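The interrogation loop described above can be sketched roughly as follows. This is a hand-written outline based only on the post, not the paper's API: every name here (`llm_propose`, `llm_explain`, `simulate`, `max_rounds`) is a hypothetical interface I chose for illustration.

```python
from typing import Callable, Optional

def axis_style_loop(
    llm_propose: Callable[[str, list], Optional[str]],  # next counterfactual query, or None to stop
    llm_explain: Callable[[str, list], str],            # final explanation from gathered evidence
    simulate: Callable[[str], str],                     # runs the multi-agent simulator on a query
    user_query: str,
    max_rounds: int = 5,
) -> str:
    """Interrogate a simulator with counterfactual queries, then explain."""
    evidence: list = []  # memory of (counterfactual query, simulated outcome) pairs
    for _ in range(max_rounds):
        cf_query = llm_propose(user_query, evidence)
        if cf_query is None:  # the LLM judges it has enough evidence
            break
        outcome = simulate(cf_query)  # "what would have happened if ...?"
        evidence.append((cf_query, outcome))
    return llm_explain(user_query, evidence)
```

The key design point the post highlights is the multi-round structure: the LLM's next counterfactual query can condition on all previously simulated outcomes.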

10 months ago 8 1 0 0

'AI Safety for Everyone' is out now in @natmachintell.nature.com! Through an analysis of 383 papers, we find a rich landscape of methods that cover a much larger domain than mainstream notions of AI safety. Our takeaway: Epistemic inclusivity is important, the knowledge is there, we only need to use it.

11 months ago 13 3 1 0