
Posts by Peter N. Salib


Very pleased to share that, per Solum's Legal Theory Blog, "How to Count AIs: Individuation and Liability for AI Agents," is officially "Highly Recommended."

Do what the man says, and "Download while it's hot!"

legaltheoryblog.com/2026/03/02/a...

papers.ssrn.com/sol3/papers....

1 month ago
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5353214

t.co/A2zWYnbPok

2 months ago
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167

t.co/WfbmRGODfA

2 months ago

If Moltbook is freaking you out, I'd encourage you to think: What will it mean if AIs turn out to be genuine agents with their own weird goals? How should society be organized to include such agents? What legal structures will most benefit humanity? Some thoughts:

2 months ago

📢 CLAIR Writers’ Retreat

Join us for a four-day retreat in the Texas Hill Country to advance Law & AI Safety research.

Feb 20–23, 2026. Mayan Ranch, TX. Lodging, meals + $1,000 honorarium. More details in 🧵

Apply by 12/12 — forms.gle/5rnPi8CLxJkd...

4 months ago

Thanks, Lior!

5 months ago

Pleased to share that, last week, the UH Law Faculty voted to grant me tenure.

I’m grateful to my colleagues across the academy, my mentors, friends, and especially my family for their support along the way.

5 months ago
AI Rights for Human Flourishing
AI companies are racing to create Artificial General Intelligence (AGI): AI systems that outperform humans at most economically valuable work.

papers.ssrn.com/sol3/papers....

8 months ago
AI Rights for Human Flourishing
AI companies are racing to create Artificial General Intelligence (AGI): AI systems that outperform humans at most economically valuable work.

the legal system should organize AGI labor is of great importance.

Our proposal: Do what has always worked before. Let all workers, human and AI, own their labor, make contracts to sell it, and keep the proceeds.

Not for the sake of AIs, but for the sake of global human flourishing.

8 months ago

been feudal lords, encomenderos, slaveholders, and so on. In the AGI economy, the elite owners will be AI companies and their investors.

If, as many believe, the advent of AGI--AIs that can do most jobs humans can--*could* deliver rapid economic progress and material abundance, the question of how

8 months ago

disastrous for almost everyone living under them. A wealth of economic evidence shows that they substantially slow growth, impoverishing ordinary workers, whether free or unfree.

Unfree labor systems benefit only the elite class who own substantial numbers of laborers. Historically, those have

8 months ago

To be clear, our argument is not that a labor system based on the ownership of (AI) laborers will be the *moral* equivalent of systems based on the ownership of humans!

Rather, we argue that the systems will have similar economic effects. In short, systems of unfree labor are economically

8 months ago

Enjoyed the recent @80000hours.bsky.social w/ @tobyord.bsky.social. Agree that AI policy researchers should dream bigger on societal Qs. Simon Goldstein and I have been working on one of Toby's big questions: Should the AGI economy be run like a slave society (as it will under default law)?

8 months ago
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5369439

caught up to the frontier.

If the joint lab couldn't clear the bottleneck, we think that it would also serve as a credible scientific authority to both the US and China around which a more coordinated global pause could be built.

Much more in the full draft: papers.ssrn.com/sol3/papers....

8 months ago

others, and it hit a new level of capabilities (and misalignment) where advanced rogue systems became a serious threat, *it* could pause capabilities progress and go all-in on clearing the alignment bottleneck. The frontier lab would have 1 year to do so before others

8 months ago

capabilities parity (and thus deterrence) all the way up the AI capabilities ladder.

2) for AI safety, the joint lab would, essentially automatically, function as a global "pause" button on frontier capabilities advancement. If the joint lab was, e.g., 1 year ahead of all

8 months ago

the most compute, hire the best researchers, and (we think) have an excellent chance of becoming the leading AI lab in the world. This would have two effects:

1) On geostrategy, this lab would diffuse the most advanced AI systems to the US and China simultaneously, ensuring

8 months ago

How to operationalize this while also reducing catastrophic/existential risk from AI? Our proposal:

The US and China should make an agreement to jointly found a frontier AI lab. Backed by the sovereign wealth and power of the two most powerful countries on earth, that lab could buy

8 months ago

But the same AIs needed for advanced military applications will also likely be excellent at improving healthcare, education, research, and much more.

Here, there is no guns/butter tradeoff. The guns *are* the butter.

Thus, game theory favors equilibria of *high* capabilities.

8 months ago

In nuclear competition, equilibria of *low* capabilities (e.g., 6K warheads per side, rather than 60K) are attractive b/c of the guns/butter tradeoff. Nukes are expensive, and they have few positive spillovers to the rest of the economy. They don't, e.g., improve healthcare.

8 months ago

One thing from nuclear game theory that *does* apply to AI is the idea that what matters most is rough parity of capabilities (for second-strike deterrence), rather than the total number of warheads (or total AI capability).

But there are many possible equilibria of parity.

8 months ago

Most critics of an AI arms race advocate international coordination to *slow* AI progress. They rely on analogies to Cold War nonproliferation and disarmament agreements.

We argue that there are important differences between AI and nukes that make such strategies hard.

8 months ago

The WH's AI Action Plan has some good stuff. But it begins, "The US is in a race to achieve global dominance in AI."

Like many, @simondgoldstein
and I think that an AI arms race w/ China is a mistake.

Our new paper lays out a novel game-theoretic approach to avoiding the race.

8 months ago

RAISE Act are extremely reasonable first steps towards mitigating that risk. I would, of course, favor a single, well-designed federal regime over a patchwork of state regs. But if the feds want to do that, they can. The ban was no substitute for actually doing something.

9 months ago

I'm on balance relieved that the federal ban on state-level AI regulation is dead. I do expect many state laws to be dumb and tech-illiterate. But government also needs to take seriously the warnings that advanced AI systems could kill large numbers of people. Bills like NY's...

9 months ago
In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights The judge's order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market," said attorney Meetali Jain of the Tech Justice Law Project.

This First Amendment ruling is correct: As I argue in @WashULRev, the outputs of generative AI systems like LLMs are not protected speech. Not of the AI company. Not of the user. Read more here! papers.ssrn.com/sol3/papers....

www.law.com/therecorder/...

10 months ago

Very important point raised by @petersalib.bsky.social and Simon Goldstein regarding AI risk and alignment:

www.ai-frontiers.org/articles/tod...

11 months ago
Tahra Hoops on X: "New Pope is abundance-pilled" https://t.co/jnSFxcmNR3

Which US Constitutional or Canon laws, if any, forbid someone from being simultaneously Pope and the US President?

Asking for a friend.

x.com/TahraHoops/s...

11 months ago

AGI is, I think, the most important thing that could happen in the next 4 years. Yes, even more than the other insane stuff. I wish more legal thinkers were engaged seriously with the prospect of world-shattering AI. Law can’t fix all of the problems alone. But it can help.

1 year ago
AI Rights for Human Safety
AI companies are racing to create artificial general intelligence, or "AGI." If they succeed, the result will be human-level AI systems that can independ...

papers.ssrn.com/sol3/papers....

1 year ago