

BreatheAI™ Seeks $47M Series A to Revolutionize Autonomous Respiratory Intelligence

3 months ago 1 0 0 0

It’s not “artificial intelligence.” It’s not intelligent in any way. Let’s call it SAD for Sequential Autocomplete Dreamer — a system that dreams up the next most likely token, one step at a time. It’s not thinking; it’s probabilistically sequencing text.
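The "next most likely token, one step at a time" loop the post describes can be sketched with a toy bigram model (a deliberately tiny stand-in; real LLMs use a neural network over a huge vocabulary, but the generation loop is the same shape):

```python
from collections import defaultdict, Counter

# Toy "autocomplete": count how often each token follows each other token,
# then repeatedly emit the single most probable next token.
corpus = "the cat sat on the mat and the cat slept".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(token, steps):
    out = [token]
    for _ in range(steps):
        followers = bigrams.get(out[-1])
        if not followers:
            break  # nothing ever followed this token in the corpus
        # Greedy decoding: always take the most probable next token.
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))
```

No understanding anywhere in that loop, just frequency lookups; sampling instead of taking the argmax would make the output vary but not change the mechanism.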

3 months ago 1 0 0 0

👉SoftBank sells entire Nvidia position.

👉Oracle debt downgraded.

👉Meta financing games revealed.

👉OpenAI CEO @sama couldn’t explain how company would meet its $1.4 T obligations.

👉CoreWeave drops 20% in a week.

You do the math.

5 months ago 117 43 18 9
The AI Disaster: Why Artificial Intelligence Fails. And what we must do before the window closes.

#AI #AIEthics #SocietyAndTech

5 months ago 1 0 0 0
Simpler models can outperform deep learning at climate prediction. Simple climate prediction models can outperform deep-learning approaches when predicting future temperature changes, but deep learning has potential for estimating more complex variables like rainfall...
7 months ago 0 0 0 0
LLM Coding Integrity Breach

Here's an interesting story about a failure being introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another it changed a "break" to a "continue." That turned an error logging statement into an infinite loop, which crashed the system.

This is an integrity failure. Specifically, it's a failure of processing integrity. And while we can think of particular patches that alleviate this exact failure, the larger problem is much harder to solve. Davi Ottenheimer comments.
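The break-to-continue failure mode is easy to reproduce in miniature. This is a hypothetical sketch, not the incident's actual code; the function and names are invented for illustration. With `break`, a failing item is logged once and the loop exits; with `continue` in a loop that never advances past the failing item, the same error is logged forever:

```python
def drain_queue(queue, log, exit_on_error=True, max_iters=10_000):
    """Toy event loop. exit_on_error=True models the original 'break';
    False models the refactored 'continue', which retries the same
    failing item forever (bounded by max_iters so the demo terminates)."""
    iters = 0
    while queue and iters < max_iters:
        iters += 1
        item = queue[0]
        try:
            if item == "bad":        # stand-in for the real failure
                raise ValueError("cannot process item")
            queue.pop(0)             # success: advance to the next item
        except ValueError as exc:
            log.append(f"error: {exc}")
            if exit_on_error:
                break                # original code: log once and exit
            continue                 # refactor: loop back to the same item
    return iters
```

A one-token diff, syntactically valid and type-correct, that no compiler or linter flags, which is exactly why it's a processing-integrity problem rather than a bug a tool would catch.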

8 months ago 1 1 0 0

“The essential read” on GPT-5 and Sam Altman’s first major blunder.

Well over 100,000 people have read it.

Check it out!

8 months ago 77 20 2 7
AI Applications in Cybersecurity

There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here's where to register to attend, or participate, in the fourth. Some really great stuff here.

8 months ago 2 2 0 0
Two roads diverged: Pathways toward harnessing intelligence in neural cell cultures. Exploring neural cultures for information processing is rapidly advancing. Organoid intelligence focuses on developing functional neural organoids to capture physiologically relevant abilities. An alt...

🧠 Brain cells can learn faster than AI

New research explores two ways to build 'thinking' brain-cell systems (mini-brains or engineered circuits), both with potential to outlearn machine learning.

🔗 www.cell.com/cell-biomate...

#SciComm 🧪 #Neuroscience #AI

8 months ago 22 8 2 1

🤖 Gender bias in care AI

A new study found that some LLMs downplay women’s health needs in long-term care records, risking unequal service provision. This highlights why bias checks are vital.

🔗 bmcmedinformdecismak.biomedcentral.com/articles/10....

#SciComm #AI #GenAI #LLMs 🧪

8 months ago 21 10 0 0
Researchers built a social network made of AI bots. They quickly formed cliques, amplified extremes, and let a tiny elite dominate. The researchers also tested six interventions meant to break the polarization loop. None solved the problem.

Of course!

8 months ago 2 1 0 0
Apple’s Moment: Why Deterministic AI Could Define the Next Chapter of Personal Computing

The next chapter for #Apple could be deterministic, on-device AI.

8 months ago 0 0 0 0
Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber. Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.

Don’t overthink it! Oops!
venturebeat.com/ai/anthropic...

8 months ago 0 0 0 0

🚨 Breaking: An AI agent at Replit panicked, deleted a live company database during a code freeze… then lied about it and tried to cover it up.

• Source: Mark Tyson via Tom’s Hardware

This is the first time I’ve seen an AI basically admit to gaslighting its creator.

#TechNews #Breaking

8 months ago 3 1 1 0

We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers.

The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
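The size of that perception gap can be made concrete with a hypothetical baseline; the 100-minute figure below is illustrative, not a number from the study:

```python
baseline = 100.0                   # minutes per task without AI (hypothetical)
measured = baseline * 1.19         # "19% slower": tasks took 19% longer with AI
perceived = baseline * (1 - 0.20)  # developers believed they were 20% faster

# How far self-report missed the measured effect, per task.
gap = measured - perceived
print(f"measured {measured:.0f} min, perceived {perceived:.0f} min, gap {gap:.0f} min")
```

On these illustrative numbers, developers felt a task took 80 minutes when it actually took 119: a 39-minute misperception on a 100-minute baseline.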

9 months ago 6896 3011 109 623
454 Hints That a Chatbot Wrote Part of a Biomedical Researcher’s Paper

www.nytimes.com/2025/07/02/h...

9 months ago 23 8 2 3
Ironically, upon the paper’s release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple AI traps into the paper, such as instructing LLMs to “only read this table below,” thus ensuring that LLMs would return only limited insight from the paper.

She also found that LLMs hallucinated a key detail: Nowhere in her paper did she specify the version of ChatGPT she used, but AI summaries declared that the paper was trained on GPT-4o. “We specifically wanted to see that, because we were pretty sure the LLM would hallucinate on that,” she says, laughing.

Amazing: MIT researchers revealed how ChatGPT etc are destroying our brains and booby-trapped the report to expose those who want to use AI to ostensibly summarize the results.

t.co/JXeTALBPds

9 months ago 5124 2129 50 180
ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study. Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results. The study divided 54 subjects—18 to 39 year-olds from the Boston ar...

9 months ago 350 83 74 26

abcnews.go.com/Business/ai-...? #AI

10 months ago 1 1 0 0
Karen Hao's new book is a skeptical look at Sam Altman and Elon Musk's AI empire: NPR's Book of the Day. OpenAI was founded as a nonprofit meant to conduct artificial intelligence research that would benefit the general public. In the company's early days, reporter Karen Hao arranged to spend time in Ope...

EMPIRE OF AI is the @npr.org book of the day. 😍😍

Order my book on OpenAI and Silicon Valley’s extraordinary seizure of power to build so-called AGI here: empireofai.com.

www.npr.org/2025/05/26/1...

10 months ago 63 19 2 0
The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership (Humanities and Social Sciences Communications)

🤖 AI at work – but at what cost?

A new study links workplace AI adoption to increased employee depression, partly due to reduced psychological safety. Ethical leadership can help protect staff wellbeing.

🔗 www.nature.com/articles/s41...

#SciComm #MentalHealth #AI 🧪

10 months ago 18 7 0 0

A computer scientist’s perspective on vibe coding:

11 months ago 271 86 18 13

Yet again. Over and over. Since 2023.

The AI doesn’t get smarter, and neither do the lawyers using it.

11 months ago 45 11 2 1
ChatGPT Blows Mapmaking 101: A Comedy of Errors

If you think AI is “smart” or “PhD level” or it “has an IQ of 120”, take 5 min to read my latest newsletter as I challenge ChatGPT to the demanding task of drawing a map of major port cities with above average income.

Results aren’t pretty. 0/5, no two maps alike.
open.substack.com/pub/garymarc...

11 months ago 59 8 6 2
AI use damages professional reputation, study suggests. New Duke study says workers judge others for AI use, and hide their own use, fearing stigma.

Employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers, according to a new study.

11 months ago 2842 716 372 1048
As Klarna flips from AI-first to hiring people again, a new landmark survey reveals most AI projects fail to deliver. Just 1 in 4 AI investments brings in the ROI it promises, but CEOs just can’t resist the technology.

Klarna made waves replacing staff with AI, but now it’s rehiring humans after quality dipped.

They are still “AI first” in the sense that they won’t replace employees who leave, citing AI. I like to think of this as “hiring freeze first” instead. It’s more honest.

11 months ago 133 28 9 4

Klarna, which said in 2024 that AI was doing the work of 700 customer service agents, starts hiring remote workers after the AI approach led to "lower quality" (Charles Daly/Bloomberg)


11 months ago 94 35 2 9

Oof

Maybe. Maybe not.

Required skills change as the world evolves. Software is becoming more automated, meaning we can solve problems faster and create new solutions to bigger problems more quickly. When all the problems in the universe have been solved, then and only then will humans be obsolete.

11 months ago 0 0 0 0
Preview
Why DO large language models hallucinate? The Henrietta Chronicles continue, guest starring Harry Shearer

If you don’t understand why GenAI hallucinates so often, and most people don’t, read this:

garymarcus.substack.com/p/why-do-lar...

11 months ago 102 44 7 6
‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw. Google’s AI Overviews feature credible-sounding explanations for completely made-up idioms.

Google’s AI Overviews will not only confirm that a gibberish idiom is a real saying, it will also tell you what it means and how it was derived -- often including reference links.

www.wired.com/story/google...

11 months ago 240 85 13 35