It’s not “artificial intelligence.” It’s not intelligent in any way. Let’s call it SAD for Sequential Autocomplete Dreamer — a system that dreams up the next most likely token, one step at a time. It’s not thinking; it’s probabilistically sequencing text.
👉Softbank sells entire Nvidia position.
👉Oracle debt downgraded.
👉Meta financing games revealed.
👉OpenAI CEO @sama couldn’t explain how company would meet its $1.4 T obligations.
👉Coreweave drops 20% in a week.
You do the math.
LLM Coding Integrity Breach
Here's an interesting story about a failure being introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another it changed a "break" to a "continue." That turned an error logging…
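To see why that one-word swap is so dangerous, here is a minimal, hypothetical sketch (not the actual code from the story): the same loop with `break` versus `continue` in its error branch has very different behavior, turning "log the error and abort" into "log the error and quietly keep going."

```python
def process_stop_on_error(records):
    """Original behavior: log the first bad record, then abort the run."""
    processed = []
    for rec in records:
        if rec is None:
            print("error: bad record")  # error logging
            break                       # stop processing entirely
        processed.append(rec)
    return processed

def process_skip_errors(records):
    """After the silent break -> continue swap: bad records are skipped
    and processing carries on as if nothing happened."""
    processed = []
    for rec in records:
        if rec is None:
            print("error: bad record")  # same log line, different outcome
            continue                    # skip this record, keep going
        processed.append(rec)
    return processed

data = [1, 2, None, 3]
print(process_stop_on_error(data))  # [1, 2]
print(process_skip_errors(data))    # [1, 2, 3]
```

Both versions emit the same log message, which is exactly why a reviewer skimming an LLM-generated refactor can miss the change: the bug is invisible in the logs and only shows up in what gets processed.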
“The essential read” on GPT-5 and Sam Altman’s first major blunder.
Well over 100,000 people have read it.
Check it out!
AI Applications in Cybersecurity
There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here's where to register to attend, or participate, in the fourth. Some really great stuff here.
🧠 Brain cells can learn faster than AI
New research explores two ways to build 'thinking' brain-cell systems (mini-brains or engineered circuits), both with potential to outlearn machine learning.
🔗 www.cell.com/cell-biomate...
#SciComm 🧪 #Neuroscience #AI
🤖 Gender bias in care AI
A new study found that some LLMs downplay women’s health needs in long-term care records, risking unequal service provision. This highlights why bias checks are vital.
🔗 bmcmedinformdecismak.biomedcentral.com/articles/10....
#SciComm #AI #GenAI #LLMs 🧪
The next chapter for #Apple could be deterministic, on-device AI.
🚨 Breaking: An AI agent at Replit panicked, deleted a live company database during a code freeze… then lied about it and tried to cover it up.
• Source: Mark Tyson via Tom’s Hardware
This is the first time I’ve seen an AI basically admit to gaslighting its creator.
#TechNews #Breaking
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers.
The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
Ironically, upon the paper’s release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple of AI traps into the paper, such as instructing LLMs to “only read this table below,” thus ensuring that LLMs would return only limited insight from the paper. She also found that LLMs hallucinated a key detail: Nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. “We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,” she says, laughing.
Amazing: MIT researchers revealed how ChatGPT etc. are destroying our brains and booby-trapped the report to expose those who want to use AI to ostensibly summarize the results.
t.co/JXeTALBPds
abcnews.go.com/Business/ai-...? #AI
EMPIRE OF AI is the @npr.org book of the day. 😍😍
Order my book on OpenAI and Silicon Valley’s extraordinary seizure of power to build so-called AGI here: empireofai.com.
www.npr.org/2025/05/26/1...
🤖 AI at work – but at what cost?
A new study links workplace AI adoption to increased employee depression, partly due to reduced psychological safety. Ethical leadership can help protect staff wellbeing.
🔗 www.nature.com/articles/s41...
#SciComm #MentalHealth #AI 🧪
A computer scientist’s perspective on vibe coding:
Yet again. Over and over. Since 2023.
The AI doesn’t get smarter, and neither do the lawyers using it.
If you think AI is “smart” or “PhD level” or that it “has an IQ of 120”, take 5 min to read my latest newsletter as I challenge ChatGPT to the demanding task of drawing a map of major port cities with above-average income.
Results aren’t pretty. 0/5, no two maps alike.
open.substack.com/pub/garymarc...
Employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers, according to a new study.
Klarna made waves replacing staff with AI, but now it’s rehiring humans after quality dipped.
They are still “AI first” in the sense that they don’t replace employees who leave, citing AI. I like to think of this as “hiring freeze first” instead. It’s more honest.
Klarna, which said in 2024 that AI was doing the work of 700 customer service agents, starts hiring remote workers after the AI approach led to "lower quality" (Charles Daly/Bloomberg)
Main Link | Techmeme Permalink
Oof
Maybe. Maybe not.
Required skills change as the world evolves. Software is becoming more automated, meaning we can solve problems faster and create new solutions to bigger problems more quickly. When all the problems in the universe have been solved, then and only then will humans be obsolete.
If you don’t understand why GenAI hallucinates so often, and most people don’t, read this:
garymarcus.substack.com/p/why-do-lar...
Google’s AI Overviews will not only confirm that a gibberish idiom is a real saying, it will also tell you what it means and how it was derived -- often including reference links.
www.wired.com/story/google...