Read why true personalization should actually make your work harder.
Read “The most dangerous AI is the one that knows what you want” ...but doesn't care what you need.
academy.socos.org/the-most-dan...
Posts by Vivienne Ming
Salesmen promise that 'Personalized AI' will revolutionize how we work and learn. In my latest newsletter, I break down why their definition of personalization is broken. Context-sensitivity isn't inherently good—and if your AI is making your life 'frictionless', it’s probably making you weaker.
In a crumbling, impossible castle tended by women who have forgotten what they're guarding, Pechaček has built a world with the logic of a dream and the texture of a tapestry. If you find yourself uncertain whether you understand it, lean in.
𝗦𝗰𝗶𝗙𝗿𝗶𝗱𝗮𝘆: Looking for a dose of the most wonderful weirdness? 𝑻𝒉𝒆 𝑾𝒆𝒔𝒕 𝑷𝒂𝒔𝒔𝒂𝒈𝒆 by Jared Pechaček trusts its own strangeness completely.
libro.fm/audiobooks/9...
Hey #NYC, come hear me read 𝑹𝒐𝒃𝒐𝒕-𝑷𝒓𝒐𝒐𝒇 in person at ptknitwear.com/events/50196
Here’s my pitch for a Posterized headline: “AI Dementia Apocalypse Easily Avoidable…Says World’s Greatest Genius”. Just think.
“We’re heading for an AI-fueled ‘dementia crisis,’ brain scientist warns”
nypost.com/2026/04/10/h...
𝐈 𝐆𝐨𝐭 𝐏𝐨𝐬𝐭𝐞𝐫𝐢𝐳𝐞𝐝. I said "we may be building AI that gradually erodes the cognitive reserve that protects against dementia, and we should measure it before it's too late."
The NY Post heard: "BRAIN SCIENTIST WARNS OF AI DEMENTIA APOCALYPSE."
Both things are somehow true.
Read the original paper, “A benchmark of expert-level academic questions to assess AI capabilities | Nature”, at www.nature.com/articles/s41...
The kind of intelligence that changes the world lives in the ill-posed problems, in the messy, uncertain spaces where there is no right answer, only hypotheses and exploration. If we only measure AI by its ability to distill expert consensus, we will never learn how to use it to explore the unknown.
We are testing these vastly complex associative engines as if they are contestants on an expert-level episode of Jeopardy.
If brilliance is whatever fits in a fully measured box, we ignore so much of what makes human and artificial intelligence special.
By relying entirely on “well-posed problems”—“each question has a known solution that is unambiguous and easily verifiable”—this benchmark forces both human and artificial intelligence into the realm of the well-posed. It actively disincentivizes meta-learning and hybrid intelligence.
The questions have unambiguous, easily verifiable answers that can't just be Googled. As intended, state-of-the-art LLMs performed terribly.
This benchmark is better than all the others, and I hate it. It doubles down on what’s wrong with how we view cognition.
LLMs are maxing out all our standard benchmarks, so some researchers published "Humanity’s Last Exam" (HLE) in Nature. It consists of 2,500 expert-level, closed-ended questions across dozens of subjects (math, humanities, science).
𝐀𝐈𝐬 𝐚𝐫𝐞 𝐠𝐨𝐨𝐝 𝐚𝐭 𝐉𝐞𝐨𝐩𝐚𝐫𝐝𝐲—𝐬𝐨 𝐰𝐡𝐚𝐭? The ultimate test for AI reveals we still don't understand intelligence.
Read the original paper, “Generative AI Can Improve Performance and Engagement without Harming Learning” at papers.ssrn.com/sol3/papers....
This has been found in educational technology again and again. Learning isn’t an engagement problem to be gamified away. AI can play a huge role in building an exceptional mind, but that promise will never be met by learning tools that only create the illusion of knowing.
When you remove the productive friction of learning, the student doesn't change. If there is no clear boost in non-agent performance, then this isn't a pedagogical breakthrough. It’s just an automated crutch.
This is the classic "Efficiency Lie". Efficiency, engagement, and a low-end performance boost are exactly what you expect from an LLM. But if the student cannot replicate that performance without the AI, you haven't taught them a skill. You've just strapped them into a cognitive exoskeleton.
But then comes the actual purpose of education: learning…not *not* learning. The authors claim to find "evidence of" long-run skill development, but in academic speak "evidence of" translates to, "It wasn't statistically significant, but we really want it to be true."
What happens when you give thousands of students an AI tutor to help them debrief math problems? The results look great on a slide deck: student engagement grew (they “completed 36% more math problems”) and efficiency slightly improved (“they spent 3.9% less time per problem” with higher accuracy).
𝑺𝒕𝒖𝒅𝒆𝒏𝒕𝒔 𝑻𝒓𝒂𝒑𝒑𝒆𝒅 𝒊𝒏 𝒂𝒏 𝑬𝒅𝑻𝒆𝒄𝒉 𝑬𝒙𝒐𝒔𝒌𝒆𝒍𝒆𝒕𝒐𝒏. AI tutors make students faster and more engaged—do they actually learn anything?
No wonder I got a bunch of new Indian followers this morning :)
Don’t forget to read the book!
www.amazon.com/Robot-Proof-...
New love for 𝑹𝒐𝒃𝒐𝒕-𝑷𝒓𝒐𝒐𝒇…this time in #India! “Raising ‘robot-proof’ kids: Why creativity and curiosity matter more than ever” timesofindia.indiatimes.com/life-style/p...