Very pleased to share that, per Solum's Legal Theory Blog, "How to Count AIs: Individuation and Liability for AI Agents" is officially "Highly Recommended."
Do what the man says, and "Download while it's hot!"
legaltheoryblog.com/2026/03/02/a...
papers.ssrn.com/sol3/papers....
Posts by Peter N. Salib
If Moltbook is freaking you out, I'd encourage you to think: What will it mean if AIs turn out to be genuine agents with their own weird goals? How should society be organized to include such agents? What legal structures will most benefit humanity? Some thoughts:
📢 CLAIR Writers’ Retreat
Join us for a four-day retreat in the Texas Hill Country to advance Law & AI Safety research.
Feb 20–23, 2026. Mayan Ranch, TX. Lodging, meals + $1,000 honorarium. More details in 🧵
Apply by 12/12 — forms.gle/5rnPi8CLxJkd...
Thanks, Lior!
Pleased to share that, last week, the UH Law Faculty voted to grant me tenure.
I’m grateful to my colleagues across the academy, my mentors, friends, and especially my family for their support along the way.
Enjoyed the recent @80000hours.bsky.social episode w/ @tobyord.bsky.social. Agree that AI policy researchers should dream bigger on societal Qs. Simon Goldstein and I have been working on one of Toby's big questions: Should the AGI economy be run like a slave society (as it will be under default law)?
If, as many believe, the advent of AGI--AIs that can do most jobs humans can--*could* deliver rapid economic progress and material abundance, the question of how the legal system should organize AGI labor is of great importance.
To be clear, our argument is not that a labor system based on the ownership of (AI) laborers will be the *moral* equivalent of systems based on the ownership of humans!
Rather, we argue that the systems will have similar economic effects. In short, systems of unfree labor are economically disastrous for almost everyone living under them. A wealth of economic evidence shows that they substantially slow growth, impoverishing ordinary workers, whether free or unfree.
Unfree labor systems benefit only the elite class who own substantial numbers of laborers. Historically, those have been feudal lords, encomenderos, slaveholders, and so on. In the AGI economy, the elite owners will be AI companies and their investors.
Our proposal: Do what has always worked before. Let all workers, human and AI, own their labor, make contracts to sell it, and keep the proceeds.
Not for the sake of AIs, but for the sake of global human flourishing.
The WH's AI Action Plan has some good stuff. But it begins, "The US is in a race to achieve global dominance in AI."
Like many, @simondgoldstein and I think that an AI arms race w/ China is a mistake. Our new paper lays out a novel game-theoretic approach to avoiding the race.
Most critics of an AI arms race advocate international coordination to *slow* AI progress. They rely on analogies to Cold War nonproliferation and disarmament agreements.
We argue that there are important differences between AI and nukes that make such strategies hard.
One thing from nuclear game theory that *does* apply to AI is the idea that what matters most is rough parity of capabilities (for second-strike deterrence), rather than the total number of warheads (or total AI capability).
But there are many possible equilibria of parity.
In nuclear competition, equilibria of *low* capabilities (e.g., 6K warheads per side, rather than 60K) are attractive b/c of the guns/butter tradeoff. Nukes are expensive, and they have few positive spillovers to the rest of the economy. They don't, e.g., improve healthcare.
But the same AIs needed for advanced military applications will also likely be excellent at improving healthcare, ed, research, and much more.
Here, there is no guns/butter tradeoff. The guns *are* the butter.
Thus, game theory favors equilibria of *high* capabilities.
How to operationalize this while also reducing catastrophic/existential risk from AI? Our proposal:
The US and China should make an agreement to jointly found a frontier AI lab. Backed by the sovereign wealth and power of the two most powerful countries on earth, that lab could buy the most compute, hire the best researchers, and (we think) have an excellent chance of becoming the leading AI lab in the world. This would have two effects:
1) On geostrategy, this lab would diffuse the most advanced AI systems to the US and China simultaneously, ensuring capabilities parity (and thus deterrence) all the way up the AI capabilities ladder.
2) On AI safety, the joint lab would, essentially automatically, function as a global "pause" button on frontier capabilities advancement. If the joint lab were, e.g., 1 year ahead of all others, and it hit a new level of capabilities (and misalignment) where advanced rogue systems became a serious threat, *it* could pause capabilities progress and go all-in on clearing the alignment bottleneck. The frontier lab would have 1 year to do so before others caught up to the frontier.
Even if the joint lab couldn't clear the bottleneck, we think that it would also serve as a credible scientific authority to both the US and China, around which a more coordinated global pause could be built.
Much more in the full draft: papers.ssrn.com/sol3/papers....
I'm on balance relieved that the federal ban on state-level AI regulation is dead. I do expect many state laws to be dumb and tech-illiterate. But government also needs to take seriously the warnings that advanced AI systems could kill large numbers of people. Bills like NY's RAISE Act are extremely reasonable first steps toward mitigating that risk. I would, of course, favor a single, well-designed federal regime over a patchwork of state regs. But if the feds want that, they can enact it. The ban was no substitute for actually doing something.
This First Amendment ruling is correct: As I argue in @WashULRev, the outputs of generative AI systems like LLMs are not protected speech. Not of the AI company. Not of the user. Read more here! papers.ssrn.com/sol3/papers....
www.law.com/therecorder/...
Very important point raised by @petersalib.bsky.social and Simon Goldstein regarding AI risk and alignment:
www.ai-frontiers.org/articles/tod...
Which US Constitutional or Canon laws, if any, forbid someone from being simultaneously Pope and the US President?
Asking for a friend.
x.com/TahraHoops/s...
AGI is, I think, the most important thing that could happen in the next 4 years. Yes, even more than the other insane stuff. I wish more legal thinkers were engaged seriously with the prospect of world-shattering AI. Law can’t fix all of the problems alone. But it can help.