I wasn't too shocked when an anon reply guy started pestering me + other orgs/journalists on X.
Normally I’d ignore it, but I looked closer when they began running paid ads. I did not expect to trace the account back to OpenAI’s political machine!
My latest for @themidasproject.bsky.social
Posts by The Midas Project
An anonymous account spent months attacking OpenAI’s critics with misleading claims and paid ads.
We found links to Targeted Victory, the firm at the center of OpenAI's $125 million political operation.
www.modelrepublic.org/articles/is-...
Read our full article on why voluntary frameworks are failing and how independent standards and auditing could improve the situation: www.modelrepublic.org/articles/ai-...
A regulatory regime built around self-authored rules and inadequate compliance monitoring will naturally converge on weak, vague, and ineffective standards. We need something stronger.
We are seeing this play out across the industry. Findings from @safetychanges show that OpenAI, Google, xAI, and Anthropic have all recently evaded, ignored, or weakened their safety rules to keep pace.
Between California’s SB 53, New York’s RAISE Act, and the EU AI Act, multiple regulatory regimes have emerged that are built around companies writing their own rules.
But as commercial pressures increase, this system is failing.
When California passed a law requiring AI companies to follow their safety frameworks, Anthropic had a choice to make.
Instead of making their strict, voluntary rules legally binding (as they had indicated they wanted), the company wrote a second, weaker rulebook.
So long as this race continues, we should expect continued weakening of standards + companies taking every opportunity to minimize liability.
We'll keep tracking how their commitments evolve. Follow @safetychanges.bsky.social for document-level analysis.
When updating the RSP to its largely weaker v3.0 last month, Jared Kaplan told TIME, “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”
If the most safety-conscious lab treats binding legislation as an exercise in minimizing liability rather than an opportunity to codify best practices, the “race to the top” they were hoping for collapses.
This sets a grim precedent. So far, no other company has created a separate (weaker) policy for compliance purposes, but with Anthropic’s example before them, others may soon follow.
The result is a dual-track system: a voluntary RSP with detailed commitments but no legal liability, alongside a vague, discretionary policy that absorbs all the liability.
The new FCF also strips out key governance mechanisms found in the RSP.
It removes the detailed Capability and Safeguard Reports and eliminates the Long-Term Benefit Trust’s formal oversight role over model deployment decisions.
This if-then structure was the core strength of the original RSP, and now it is gone (not only from the FCF but from the RSP itself, insofar as they have revoked their commitment to always follow the voluntary RSP). time.com/7380854/excl...
The FCF is like the RSP if you tore out much of its substance.
It drops the RSP's if-then structure by attaching no specific, binding mitigations to its risk tiers. The text simply states that mitigations "may be determined when the relevant risk tier is reached."
They released a second (and largely weaker + vaguer) document to substitute for the RSP for compliance purposes: the Frontier Compliance Framework (FCF).
By doing so, they ensured that all the promises within the RSP weren’t rendered enforceable by SB 53.
You’d think they’d be excited to have succeeded in codifying the RSP model.
But just before SB 53 took effect, Anthropic significantly watered down its obligations with a last-minute change.
Anthropic endorsed the bill, suggesting it would hold all companies to the standard of Anthropic’s RSP. In the past, they have discussed the RSP as a prototype for regulation (even as the substance of their policy has changed).
Over time, other companies adopted similar policies, and by 2025, this standard was put into law: California’s newly passed SB 53 *required* frontier AI developers to publish and follow safety frameworks like RSPs.
In 2023, Anthropic worked with METR to adopt the first Responsible Scaling Policy (RSP).
The RSP was a set of “if-then” commitments (if a model hits capability X, implement safeguard Y, or pause). It was the most rigorous commitment any frontier AI lab had made at the time.
Over at @safetychanges.bsky.social, we just released an analysis of Anthropic’s recent, quiet change to their Frontier Compliance Framework.
But a more interesting story (which many missed) is that this policy exists at all, and how it minimizes liability for the company. 🧵
Musk was one of the loudest voices warning about AI and the risks of autonomous warfare. Now he’s set the precedent every other AI company is being pressured to follow.
Anthropic has refused, blocking surveillance and autonomous weapons use. Secretary of Defense Hegseth threatened to invoke the Defense Production Act to force them to comply and gave them until 5:01pm Friday to back down.
Musk didn’t just go back on what he promised; he also opened the door for the Pentagon to pressure every other AI company to accept the same terms. Google and OpenAI are reportedly in talks with the Pentagon to expand access to their models for unrestricted DoD use.
Since then, Musk has aided the government surveillance he once challenged: X continues sharing data with law enforcement through Dataminr.
It’s not just weapons. In 2023, X petitioned SCOTUS to be able to disclose how often the government demanded its users' data, arguing that "surveillance of electronic communications is both a fertile ground for government abuse and a lightning-rod political topic…"
But in 2026, SpaceX and xAI entered the Pentagon's competition to build autonomous drone swarms — exactly what he pledged not to do. A Pentagon official confirmed that the drones will be used for offensive purposes.
In 2018, Musk signed a pledge alongside DeepMind's founders, 3,800+ individuals and 274 organizations:
"We will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons."
This week, xAI agreed to let the Pentagon use Grok in classified systems under the “all lawful use” standard, enabling Grok to be used for autonomous weapons and domestic mass surveillance (something other companies have objected to).
Elon Musk spent a decade warning the world about killer robots and AI surveillance, including in a 2017 open letter: "We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
Now he’s the one opening it. 🧵