
Posts by Sam Winter-Levy

Opinion | For all but two nations, the AI race is already over. Middle powers such as Europe and Canada need to get smart about artificial intelligence.

wapo.st/4uuds1L

1 month ago

For the @washingtonpost.com, Anton Leicht and I wrote about how middle powers need to get smart about AI competition.

1 month ago
The AI Divide: How U.S.-Chinese competition could leave most countries behind.

In @foreignaffairs.com, Anton Leicht and I wrote about how middle powers should navigate an AI revolution that threatens to leave them behind: www.foreignaffairs.com/guest-pass/r...

2 months ago
The AI Divide: How U.S.-Chinese competition could leave most countries behind.

Middle powers can either find strategic niches backed by real leverage or bear AI’s costs while capturing few benefits. The latter outcome: two great powers barreling toward a technological revolution with most of the world’s computing power and talent, leaving most of the world’s citizens behind:

2 months ago

The US, meanwhile, should make bandwagoning as attractive as possible—promoting exports that deliver meaningful capabilities and implementing security standards that allow even sensitive frontier systems to be shared with allies.

2 months ago

Beyond access, middle powers need economic leverage: control irreplaceable inputs (e.g., ASML) or downstream deployment bottlenecks (e.g., robotics, manufacturing). Don’t sell strategic assets for short-term gains.

2 months ago

Three broad strategies to ensure that access: bandwagoning with the US or China for guaranteed access (risky if the patron turns); hedging between both powers (fails if the world splits into blocs); sovereignty through domestic capability (expensive, and often leaves you stranded in the second tier).

2 months ago

To avoid that outcome, middle powers need frontier access. Firms equipped with inferior AI risk being outcompeted, and national defense will require systems as good as your adversaries’.

2 months ago

Opting out isn’t an option either. The risks of AI—cybercrime, military deployment by adversaries, labor displacement—arrive whether or not the benefits do. For middle powers, suffering the costs of AI while missing the gains is the central danger.

2 months ago

Some middle powers are building local data centers or domestic model champions (eg France's Mistral). Neither solves dependency. Data center buildouts are expensive and need continuous updates from providers; no one outside the US and China is closing the frontier model gap.

2 months ago

Current access to frontier AI is fragile. Unlike stockpiled goods, AI requires real-time access to infrastructure controlled by a few Silicon Valley firms—ultimately subject to US export controls. China is building alternatives but remains behind for now.

2 months ago

Middle powers face three problems: (1) access to frontier AI depends on Washington’s and Beijing’s whims; (2) they’re exposed to AI’s harms regardless of whether they share in its benefits; (3) they lack leverage to shape AI’s development or manage its consequences.

2 months ago

Today's Lawfare Daily is a Scaling Laws episode, with @utexaslaw.bsky.social, where @alanrozenshtein.com spoke to @samwl.bsky.social, Janet Egan, and @petereharrell.bsky.social about the Trump administration’s decision to allow Nvidia and AMD to export AI semiconductors to China in exchange for a 15% payment to the U.S. government.

8 months ago
The End of Mutual Assured Destruction? What AI will mean for nuclear deterrence.

Read @samwl.bsky.social and Nikita Lalwani on how AI advances could undermine nuclear deterrence—and “encourage mistrust and dangerous actions among nuclear-armed states”:

8 months ago
The End of Mutual Assured Destruction? What AI will mean for nuclear deterrence.

Appreciated this balanced look at the impact of quote-unquote AI on nuclear deterrence - more of this, please.

Does AI present new nuclear risks? Yes.
Are there hard limits to AI's capabilities? Also yes. www.foreignaffairs.com/united-state...

8 months ago
The End of Mutual Assured Destruction? What AI will mean for nuclear deterrence.

"So long as systems of nuclear deterrence remain in place, the economic and military advantages produced by AI will not allow states to fully impose their political preferences on one another."
@foreignaffairs.com
www.foreignaffairs.com/united-state...

8 months ago
The End of Mutual Assured Destruction? What AI will mean for nuclear deterrence.

Fascinating read. So many things combined in this article: deterrence theory, nuclear doctrines, AI development. Not cheerful but insightful.

www.foreignaffairs.com/united-state...

8 months ago
The End of Mutual Assured Destruction? What AI will mean for nuclear deterrence.

Full piece here, via @carnegieendowment.org: www.foreignaffairs.com/united-state...

8 months ago

And there's no room for complacency. Rapid AI takeoffs could cross unforeseen thresholds. States should stress-test nuclear systems for AI-related vulnerabilities, build AI/nuclear expertise, and calibrate messaging about the stakes of the AGI race.

8 months ago

None of this is to say AI will pose no risks to nuclear stability. The moves states make to shore up their second-strike capabilities—building more weapons, reducing decision timelines, delegating authority—may be destabilizing and dangerous.

8 months ago

Nuclear deterrence will likely hold, and the coercive leverage that advanced AI affords states (against rivals with well-postured nuclear forces) will thus face major limits.

8 months ago

Even with highly capable AI systems, states will struggle to be confident of simultaneous success against multiple legs of a nuclear triad, given limited data, limited options for testing, and no room for error.

8 months ago

Tracking launchers at scale is very challenging, the physics of missile defense are brutal, and states will do everything they can to protect their command-and-control systems.

8 months ago

In all three domains, as we document, AI can likely help. But in all three domains, AI will also face serious constraints.

8 months ago

So could AI erode nuclear deterrence? Theoretically, yes, through three mechanisms: (1) increased ability to track nuclear platforms (subs and road-mobile launchers); (2) increased ability to tamper with command-and-control systems; (3) improved missile defense.

8 months ago

The US economy is 15x Russia's and 1000x North Korea's, yet the US's influence over them is limited, to put it mildly.

8 months ago

Obviously AI will matter a lot. But unless it erodes nuclear deterrence, no matter how many economic and military advantages it may bring, states will face major constraints in dealing with nuclear-armed adversaries.

8 months ago

A growing number of analysts claim AGI will entirely transform international politics, giving a decisive strategic advantage to the state that possesses it—an advantage akin to complete military and political dominance.

8 months ago
The End of Mutual Assured Destruction? What AI will mean for nuclear deterrence.

In @foreignaffairs.com, Nikita Lalwani and I write about the idea that winning the AI race will give one state unchallenged global dominance. To do so, we argue, it would have to undercut nuclear deterrence—no small feat. www.foreignaffairs.com/united-state...

8 months ago