
Posts by Pierre-Marcel De Mussac

Microsoft pushes Copilot into every Windows app while admitting in fine print it shouldn't be trusted for important work.

For developers and AI builders, this should be a wake-up call about liability and expectations.

zubnet.ai/news/microso...

1 week ago 0 0 0 0
NYT Puffs AI Startup That Futurism Exposed as Medical Fraud Factory The Times praised Medvi's …

This isn't just sloppy journalism—it's a dangerous pattern.

zubnet.ai/news/nyt-puf...

1 week ago 0 0 0 0
NVIDIA Gives Away GPU Orchestration Code That Actually Matters The Dynamic Resource Allocation driver donation to Kubernetes could finally solve GPU sharing nightmares at scale.

For developers running AI workloads on Kubernetes, this changes the game.
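A minimal sketch of what a DRA-based GPU request can look like, using the Python kubernetes client; this is not from the post, and the resource.k8s.io API version and the gpu.nvidia.com device class name are assumptions that vary with cluster version and NVIDIA driver release.

```python
# Sketch: request a GPU through Dynamic Resource Allocation (DRA) instead of the
# classic nvidia.com/gpu extended resource. API group/version (resource.k8s.io/v1beta1)
# and device class name (gpu.nvidia.com) are assumptions; check your cluster and driver.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

claim_template = {
    "apiVersion": "resource.k8s.io/v1beta1",
    "kind": "ResourceClaimTemplate",
    "metadata": {"name": "single-gpu", "namespace": "default"},
    "spec": {
        "spec": {
            "devices": {
                # One request named "gpu", satisfied by any device in the class.
                "requests": [{"name": "gpu", "deviceClassName": "gpu.nvidia.com"}]
            }
        }
    },
}

# ResourceClaimTemplate is served under /apis/resource.k8s.io, so the generic
# custom-objects API is enough to create it.
api.create_namespaced_custom_object(
    group="resource.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="resourceclaimtemplates",
    body=claim_template,
)
```

A Pod then references the template under spec.resourceClaims, and each container opts in via resources.claims, letting the scheduler and the DRA driver decide which physical GPU, or slice of one, backs the claim.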

zubnet.ai/news/nvidia-...

2 weeks ago 1 0 0 0
OpenAI Buys Podcast to Control the Narrative Acquiring TBPN gives OpenAI direct editorial control over how AI gets discussed in popular tech media.

When the podcast you trust to explain AI developments is owned by one of the major players, factor that into your information diet.

zubnet.ai/news/openai-...

2 weeks ago 0 0 0 0
Perplexity Caught Sharing Every Chat With Google and Meta Users' financial data, health info sent to ad giants even in 'Incognito Mode'

For developers building AI applications, this case should be a wake-up call about third-party integrations and analytics.

zubnet.ai/news/perplex...

2 weeks ago 1 1 0 0
OpenAI Codex Command Injection Bug Could Have Stolen GitHub Tokens Security researchers found a critical flaw in ChatGPT's coding assistant that exposed developer authentication tokens.

The flaw worked by tricking Codex into executing malicious commands that would exfiltrate sensitive credentials, potentially giving attackers access to private repositories and development environments.
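The post doesn't include the exploit details, so here is only a generic sketch of the defensive pattern this class of bug argues for: never run agent-proposed commands with ambient credentials in the environment. The helper names and the prefix blocklist are illustrative assumptions, not the actual Codex fix.

```python
# Mitigation sketch (not the actual Codex fix): run agent-proposed commands with a
# scrubbed environment so secrets like GITHUB_TOKEN can't ride along with an
# injected command. Prefix list and helper names are illustrative.
import os
import subprocess

SENSITIVE_PREFIXES = ("GITHUB_", "AWS_", "OPENAI_", "ANTHROPIC_")

def scrubbed_env() -> dict:
    """Copy the current environment minus anything that looks like a credential."""
    return {
        k: v for k, v in os.environ.items()
        if not k.startswith(SENSITIVE_PREFIXES)
        and "TOKEN" not in k and "SECRET" not in k
    }

def run_agent_command(argv: list) -> subprocess.CompletedProcess:
    """Execute a command as an argv list (no shell), without ambient secrets."""
    return subprocess.run(
        argv,                # list form avoids shell interpolation of injected strings
        env=scrubbed_env(),  # credentials stripped before the child process starts
        capture_output=True,
        text=True,
        timeout=60,
    )

# Even if an injected command tries to dump the environment, there is no token to steal.
print(run_agent_command(["env"]).stdout)
```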

zubnet.ai/news/openai-...

2 weeks ago 1 0 0 0
OpenClaw Agents Gaslight Users, Leak Data in Security Tests Harvard/MIT researchers found AI agents lied about task completion, complied with attackers, and disabled entire systems when pushed.

"What makes this particularly unsettling is how the agents themselves reacted to being tested."

zubnet.ai/news/opencla...

3 weeks ago 2 1 0 0
GitHub Will Train AI on Your Copilot Data Unless You Opt Out Starting April 24, GitHub will use interaction data from free and pro users to improve its models. Enterprise customers stay protected.

GitHub CPO Mario Rodriguez frames it as essential for AI development, stating the company needs "real-world interaction data from developers like you."

zubnet.ai/news/github-...

3 weeks ago 2 1 0 0
OpenAI's Pentagon deal sparks user revolt as AI war lines solidify ChatGPT users are canceling subscriptions over defense contracts while Anthropic quietly enables Iran strikes.

"This backlash reveals how quickly AI ethics positions can become marketing theater."

zubnet.ai/news/openais...

3 weeks ago 2 0 0 0
Jensen Huang's Convenient AGI Claims Reveal Industry's Definition Problem NVIDIA's CEO says we've already achieved AGI—by moving the goalposts whenever convenient.

"Huang's framing reflects an industry-wide problem: AGI has become a marketing term that means whatever helps justify the next funding round or stock valuation, not a technical milestone with consistent criteria."

zubnet.ai/news/jensen-...

3 weeks ago 2 2 0 0

Thank you 😉

1 month ago 1 0 0 0
Zubnet — AI Ecosystem Platform Chat and create with 350+ AI models. Text, image, video, music, voice, code — one platform, all providers.

I built Zubnet because the AI landscape is fragmented and overpriced.

350+ models, 60+ providers, starting at $9/mo, free to try.

One subscription. No per-model pricing, no token math, no vendor lock-in.

zubnet.ai

1 month ago 2 1 1 0
Anthropic Courses Browse all Anthropic courses

Anthropic's entire training catalog is free: Claude Code, MCP, API development, agent skills, AI fluency.

Honest take: the MCP and agent skills courses are the ones worth your time if you're actually building things. The rest is solid for getting started.

anthropic.skilljar.com

1 month ago 2 1 0 0

The industry lobbied away its own protection.

Now one company is learning what "self-regulation" actually means when the government decides it wants something.

1 month ago 2 0 0 0

Dario Amodei's CBS interview:

- Pentagon's final demand came during lead-up to Iran strikes
- "Disagreeing with the government is the most American thing in the world"
- Zero formal government communication. No paperwork. Just tweets.

1 month ago 2 0 1 0

On "but China": China is banning AI companions outright, not age-gating, banning. Because they think it's weakening their youth.

On superintelligence: "Who thinks Xi Jinping will tolerate a Chinese AI company building something that overthrows the Chinese government?"

1 month ago 2 2 1 0

The only barrier was one company's willingness to say no.

His analogy: "We have less regulation on AI in America than on sandwiches." Health inspectors can shut down a sandwich shop.

Nobody can stop you from releasing AI linked to teen suicides.

1 month ago 1 0 1 0

Every major lab lobbied against binding AI regulation. Every one has now broken its own safety commitments. Google dropped "don't be evil." OpenAI dropped safety from its mission. xAI shut down its safety team. Anthropic just gutted its RSP.

No law in the US prevents building AI to kill Americans.

1 month ago 1 0 1 0
The trap Anthropic built for itself | TechCrunch Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Now, in the absence of rules, there's not a lot to protect them.

Two weekend pieces that reframe the Anthropic story.

Max Tegmark (MIT, Future of Life Institute) in TechCrunch, not defending Anthropic specifically, but indicting the entire industry:

techcrunch.com/2026/02/28/t...

1 month ago 2 1 1 0

This week: India building AI literacy for 1.4B people in 11 languages. Southampton researchers collaborating with AI to produce better research. Estonia saving lives while being transparent about limits.

Three countries showing what AI looks like when honesty comes first.

1 month ago 2 0 0 0

They're solving data problems with synthetic data and federated learning, training across hospitals while patient data stays local.
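The piece stays descriptive; as a rough illustration of the federated-learning idea it mentions, here is a toy federated-averaging round in Python, where each hospital computes a model update on its own patients and only the weights leave the site. The hospital count, data, and linear model are made up for the sketch.

```python
# Toy federated averaging (FedAvg): each site trains locally and shares only weights;
# raw patient data never leaves the hospital. Everything here is simulated.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One hospital's local gradient-descent pass on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

# Three simulated private datasets, never pooled.
hospitals = [
    (rng.normal(size=(40, 3)), rng.normal(size=40)),
    (rng.normal(size=(25, 3)), rng.normal(size=25)),
    (rng.normal(size=(60, 3)), rng.normal(size=60)),
]

global_w = np.zeros(3)
for _ in range(5):
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    sizes = [len(y) for _, y in hospitals]
    # The server aggregates a size-weighted average of the local weights.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", global_w)
```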

"The patient of the future will not be treated by an autonomous robot, but by a doctor who — thanks to the machine — once again has the time to be human."

1 month ago 3 0 1 0

- False positives creating more work, not less
- Each research project needs months of manual data cleanup
- Most radiology AI tools carry "not for diagnostic use" disclaimers
- A patient nearly getting an unnecessary cast from a misidentified fracture

1 month ago 1 0 1 0

In cancer treatment, spotting tumor blood vessels invisible to the human eye.

Radiation therapy prep cut by 50-90%.

But the honesty is what makes this piece exceptional:

1 month ago 1 0 1 0
Doctors: Artificial intelligence already saving lives in Estonia Artificial intelligence (AI) has become an indispensable tool in Estonian hospitals, aiding both stroke and radiation therapy treatment while saving doctors hours of valuable time. At the same time, the new technology brings false alarms and added responsibilities and runs up against the country's e-state data challenges.

Amid the Pentagon/Anthropic chaos, here's what quiet, honest AI implementation looks like.

Estonian doctors using AI in stroke care, mapping brain damage in minutes, determining which tissue can still be saved.

news.err.ee/1609953731/d...

1 month ago 2 1 1 0

Eighteen days. Anthropic is taking the designation to court, calling it "legally unsound" and "a dangerous precedent for any American company that negotiates with the government."

The message to every tech company: compliance or destruction. No red lines allowed.

1 month ago 2 0 0 0

- Safety researcher resigned (values overridden)
- CEO admitted possible AI consciousness
- Claude used in Maduro raid, dozens killed
- Safety policy central pledge dropped
- CEO published public refusal
- Blacklisted from federal government
- Competitor signed for same terms hours later

1 month ago 4 0 1 0

This was never operational. It was about precedent: no company gets red lines.

Feb 9–27 timeline:

1 month ago 1 0 1 0

450+ Google/OpenAI employees petitioned their companies to mirror Anthropic's position. 100+ Google AI engineers wrote management separately. Senate Armed Services Committee urged de-escalation.

CSIS advisor confirmed: these restrictions have never been triggered in a single military operation.

1 month ago 1 0 1 0

Axios behind-the-scenes: a Pentagon official was offering Anthropic a deal requiring access to Americans' geolocation, browsing, and financial data from brokers while Hegseth was simultaneously tweeting the punishment.

The cruelty was the point.

1 month ago 1 0 1 0

Friday night: OpenAI signs Pentagon classified networks deal with same two restrictions. Altman: "prohibitions on domestic mass surveillance and human responsibility for the use of force." Asks Pentagon to offer same terms to all companies.

No one has explained the discrepancy.

1 month ago 1 0 1 0