
Posts by Vito Botta

Ramp AI Index March 2026 update: Anthropic is still on a tear, with nearly one in four businesses on Ramp paying for Claude, compared to one in 25 a year ago. OpenAI adoption fell by 1.5%.

Ramp data shows Anthropic hit 30.6% business adoption in March, up from 24.4% in February, while OpenAI stayed flat at ~35%. That's a 6.2-point jump in one month.

Anthropic wins 70% of head-to-head matchups with OpenAI for first-time AI buyers. And some people still think that OpenAI is untouchable.

10 hours ago

OpenAI's lawyers just called Elon Musk's latest lawsuit amendments a 'legal ambush' weeks before a trial that could hit $100B+ in damages.

Musk wants Altman removed. The trial starts April 27. Whatever you think of either side, this is shaping up to be one of the biggest tech legal battles ever.

13 hours ago

Data science tools running in production are becoming primary targets, and the time-to-exploit window keeps shrinking. Patch immediately if you're using Marimo.

2/2

14 hours ago

Marimo, an open-source Python notebook for data science, had an RCE flaw exploited within 10 hours of disclosure.

No PoC existed; attackers built their exploit directly from the advisory description. The vulnerability was an unauthenticated WebSocket endpoint that gave full shell access.
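For anyone maintaining a similar tool, the missing control is easy to sketch. A minimal, hypothetical token check for a WebSocket upgrade handler (the names and header are made up for illustration; Marimo's actual fix may differ):

```python
import hmac
import secrets

# Hypothetical per-process secret; real notebook servers typically
# generate one per session and embed it in the notebook URL.
SESSION_TOKEN = secrets.token_urlsafe(32)

def authorize(headers: dict) -> bool:
    """Reject any WebSocket upgrade that lacks the session token.

    The Marimo flaw, as described, was the absence of a check like
    this on an endpoint that could execute code.
    """
    supplied = headers.get("X-Session-Token", "")
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(supplied, SESSION_TOKEN)

def handle_upgrade(headers: dict) -> None:
    if not authorize(headers):
        raise PermissionError("unauthenticated WebSocket upgrade refused")
    # ... proceed to accept the connection and attach the kernel ...
```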

1/2

14 hours ago

Losing key people from a strategic initiative to a direct competitor is never ideal, but Meta has been aggressive on AI talent. Wonder if this affects their infrastructure roadmap or if the projects are mature enough to run without the original architects.

2/2

1 day ago

Three senior executives who helped launch OpenAI's Stargate initiative are leaving to join Meta. Stargate is OpenAI's massive infrastructure push for AI data centres.

1/2

1 day ago

If you're using Axios in any CI/CD pipelines, it's worth checking your dependency locks if you haven't already.
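One quick way to do that check, sketched in Python against an npm v2/v3 `package-lock.json`. The registry-URL heuristic is my assumption, not an official indicator of compromise; it simply flags any copy of axios whose tarball doesn't resolve to the official npm registry:

```python
import json
from pathlib import Path

def audit_axios(lockfile: str = "package-lock.json") -> list:
    """Walk an npm v2/v3 lockfile and report every axios entry.

    Flags any copy whose tarball does not resolve to the official
    npm registry, which is what a forked/modified dependency pulled
    into a CI pipeline could look like.
    """
    data = json.loads(Path(lockfile).read_text())
    findings = []
    for path, meta in data.get("packages", {}).items():
        if path.endswith("node_modules/axios"):
            resolved = meta.get("resolved", "")
            findings.append({
                "path": path,
                "version": meta.get("version"),
                "resolved": resolved,
                "suspicious": not resolved.startswith(
                    "https://registry.npmjs.org/"),
            })
    return findings
```

A clean audit still doesn't prove the tarball's contents are genuine; pinned integrity hashes in the lockfile are the stronger signal.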

2/2

1 day ago

OpenAI just confirmed their GitHub workflow downloaded a malicious Axios library on March 31 during macOS app signing.

No user data compromised, but the timing is interesting. Axios is a pretty common dependency, and a fork modified to inject malicious code appeared about a week ago.

1/2

1 day ago

But what really matters IMO is the signal: Meta now considers architectural innovations too valuable to share. The race has changed quite a lot.

3/3

1 day ago

The efficiency claims are interesting - Meta says it matches Llama 4 Maverick with a tenth of the compute through "thought compression." The Intelligence Index scores from Artificial Analysis show Muse Spark at 52 vs Maverick's 18.

2/3

1 day ago

Been reading through the Muse Spark documentation and it's fascinating seeing Meta pivot from "we open-source everything" to proprietary. Llama 1 through 4 were all open-weight. Muse Spark isn't.

1/3

1 day ago

The bugSWAT events are producing serious results too. The AI VRP focusing on prompt injection, data exfiltration, and rogue behaviour makes sense given where the new attack surface is.

2/2

1 day ago

Google's VRP paid out $17M in 2025, up 40% from the previous year. 700 researchers contributed.

What caught my eye is the dedicated AI Vulnerability Reward Program they launched - already paid $350K for AI-specific bugs.

1/2

1 day ago
Y2K 2.0: The AI security reckoning - Anil Dash

What's weird is the governance gap - different AI platforms making different calls on who gets access and when. It's messy, and nobody seems to have thought it through properly.

www.anildash.com/2026/04/10/y...

2/2

1 day ago

That Anil Dash piece on "Y2K 2.0" hit home. LLMs are churning out vulns faster than disclosure timelines can keep up. Every major platform's getting picked apart by these things.

1/2

1 day ago

Got decent GPUs? You can run near-frontier models without ever calling an API. The gap between open and closed has narrowed dramatically. That's a big deal for teams building AI products who care about privacy.

Self-hosting isn't the fallback option it used to be.

2/2

1 day ago

Something changed in Q1 2026. Open-source models are now genuinely competitive with what the big proprietary players offer, and quite a few solid options can be self-hosted.

1/2

1 day ago

Usual's $16M bug bounty on Sherlock is now the largest active bounty in tech history. The previous record was $10M for Wormhole's cross-chain vulnerability.

I really need to learn this web3 stuff...

2 days ago

It's a flaw in the security mechanism itself, not just another injection point. Rails apps using SafeBuffer with the % operator for formatting could be exposing XSS vulnerabilities without realising their protection layer is compromised.

2/2

2 days ago

CVE-2026-33170 is fascinating because it breaks Rails' own XSS protection system. The SafeBuffer#% operator fails to escape interpolated values when creating new buffers, so content that should be escaped comes back marked as safe.
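The bug is in Ruby, but the pattern is language-agnostic. Here's a toy Python analogue of a "safe string" whose `%` operator skips escaping (`SafeString` is invented for illustration; it is not Rails' actual implementation):

```python
import html

class SafeString(str):
    """Toy analogue of a trusted buffer: a string the framework
    trusts and will not escape again on output."""

    def __mod__(self, args):
        # The buggy pattern: formatting interpolates untrusted values
        # verbatim but still returns a SafeString, so downstream
        # escaping is skipped and the payload reaches the page.
        return SafeString(str.__mod__(self, args))

    def mod_fixed(self, args):
        # The correct pattern: escape each interpolated value before
        # it enters the trusted buffer.
        if not isinstance(args, tuple):
            args = (args,)
        escaped = tuple(html.escape(str(a)) for a in args)
        return SafeString(str.__mod__(self, escaped))

payload = "<script>alert(1)</script>"
tmpl = SafeString("Hello, %s")
buggy = tmpl % payload        # payload survives inside a "safe" string
fixed = tmpl.mod_fixed(payload)  # payload is escaped
```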

1/2

2 days ago

An SSRF in the Fal provider means a malicious relay can have the agent fetch internal URLs and leak metadata through the generated output.

I switched from OpenClaw to Hermes Agent a couple of weeks ago, and I need to explore in detail how Hermes handles this stuff.
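A sketch of the kind of guard the advisory implies was missing: resolve the hostname and refuse anything that lands in internal address space. This is a hypothetical helper, not OpenClaw's or Hermes' actual code, and a real framework would also need to handle redirects and DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to internal address space.

    Before an agent fetches a provider-supplied URL, check the
    scheme and make sure every resolved address is public, so
    cloud metadata endpoints and internal services stay out of reach.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```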

2/2

2 days ago

From over a week ago but anyway, CVE-2026-34504 in OpenClaw's image generation pipeline is a reminder that AI agent frameworks inherit all the classic web vulnerabilities plus their own unique attack surface.

1/2

2 days ago

This isn't theoretical anymore. The attack surface has fundamentally changed, and most organisations haven't updated their threat models to account for machines that can plan and execute campaigns on their own.

Skynet is closer than we think :p

2/2

2 days ago

That CyberStrikeAI campaign hitting 600+ firewalls across 55 countries is absurd. An AI agent operating autonomously, making its own decisions about which targets to hit next, no human operator required.

1/2

2 days ago

The idea that a model needs gated access because it could discover exploits too effectively is a new threshold. We've talked about AI cybersecurity risks for years, but this is the first time an AI company is explicitly saying "this model is dangerous enough we won't ship it publicly." Amazing.

2/2

2 days ago

Anthropic launched Project Glasswing with Claude Mythos, a model so capable at finding vulnerabilities they're only releasing it to a consortium of 40+ tech companies for defensive work. Apple, Amazon, Microsoft are in.

1/2

2 days ago

LXD CVE-2026-34179 lets restricted certificate users escalate to cluster admin by modifying their own Type field. Canonical has patched versions 4.12 through 6.7. The incomplete-denylist pattern is worth checking in your own permission systems.
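A minimal illustration of why that pattern fails, with invented field names (not LXD's actual code): the denylist forgets `type`, so the escalation slips straight through, while an allowlist of editable fields fails closed:

```python
# Incomplete-denylist pattern (the bug class): block known-bad
# fields and let everything else through.
DENYLIST = {"certificate", "fingerprint"}   # "type" was forgotten

def update_denylist(record: dict, changes: dict) -> dict:
    updated = dict(record)
    for field, value in changes.items():
        if field in DENYLIST:
            raise PermissionError(f"cannot modify {field}")
        updated[field] = value
    return updated

# Safer inversion: enumerate only the fields a restricted user
# may touch; anything unanticipated is rejected by default.
ALLOWLIST = {"name", "description"}

def update_allowlist(record: dict, changes: dict) -> dict:
    updated = dict(record)
    for field, value in changes.items():
        if field not in ALLOWLIST:
            raise PermissionError(f"cannot modify {field}")
        updated[field] = value
    return updated
```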

2 days ago

GLM-5.1 is open-source, matches Opus 4.6 on coding, and runs autonomous tasks for 8 hours. Zhipu is raising prices 8-17% despite being open. Interesting economics signal.

3 days ago

Meta spent $14B trying to catch up in AI. Muse Spark uses parallel agents for complex reasoning, but is the tech actually better or just piggybacking on Facebook's massive user base?

3 days ago

FBI wiretap breach declared "major incident" and the vector was a commercial ISP vendor. If the FBI can't secure their supply chain for surveillance data, what hope do the rest of us have? Third-party risk IS the attack surface.

3 days ago