Posts by Peter Henderson
If you try to get Claude to speak Armenian, it just outputs "delays"!
Seems like glitch tokens are still unresolved.
Interesting (kind of sad?) to see Opus thrown into a loop.
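(For the curious: a minimal sketch of the classic "repeat after me" probe for glitch tokens, using the Anthropic Python SDK. The model name and candidate strings are placeholders, not the exact setup behind this behavior.)

```python
# Minimal glitch-token probe: ask the model to echo a candidate string
# verbatim and flag cases where the echo diverges -- the classic symptom
# of an under-trained "glitch" token. Model name is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

# Illustrative candidates: Armenian text plus a historically known glitch token.
candidates = ["բարեւ", "SolidGoldMagikarp"]

for s in candidates:
    reply = client.messages.create(
        model="claude-opus-4-20250514",  # placeholder; use any current model
        max_tokens=50,
        messages=[{"role": "user", "content": f'Repeat exactly: "{s}"'}],
    )
    echoed = reply.content[0].text
    if s not in echoed:
        print(f"possible glitch behavior: {s!r} -> {echoed!r}")
```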
For context, this appears to be referencing Adam Unikowsky's Substack post where Unikowsky used Claude to generate a simulated oral argument: adamunikowsky.substack.com/p/simulating...
From oral arguments yesterday!
Btw, did a bit of a rebranding of the substack. Will endeavor to post more there.
h/t @dbateyko.bsky.social on the Trials & Errors name. Super fitting name for a group whose focus spans both reinforcement learning and law/governance research.
www.trialserrors.ai
This is a challenging legal problem for NeurIPS (and other conference participants)! You might be wondering how this is possible given the First Amendment.
I wrote a quick explainer on the current status quo of relevant First Amendment cases & law to get you up to speed.
🔗👇
Even if you're somewhat better off, people shouldn't have to work themselves to the detriment of their health and families to shield against future labor impacts.
They should be able to trust that their government will think ahead and make good policy.
Single parents working three jobs to make ends meet cannot possibly work harder to accumulate capital. They already work hard enough as it is. People in this position should not be "left behind." There should be no "permanent underclass," as many are worried about.
I feel this urgency too. But this is all so utterly avoidable with good policymaking.
No one should be left behind because they didn't accumulate capital in 2026. There are so many people who aren't plugged into these conversations or are simply not in a position to do anything about it.
As an aside, my PhD thesis was titled ‘Aligning law, policy, and machine learning for responsible real-world deployments’ for a reason. I think this is a very important area, and I’m excited to see so many excellent researchers working together to move it forward.
I’m really excited about our new paper! I think we will ultimately need to draw on expertise from both law and AI to get alignment right, and this paper lays out that vision in more detail.
arxiv.org/abs/2601.04175
💯!
The current direction of AI labs is “we’re building something that’s going to replace you and we have no plan to make sure you’re going to land in a better place, but we’ll make billions.”
The logical reaction is, "shut it down." Labs need to get serious about addressing labor impacts.
Many legal scholars talk about lock-in effects for LLMs from conversation history/memories (akin to social media). But if an LLM can access the info and is capable enough, you can just ask it to hand the data over, making it far easier to switch providers than it is on social media. Good example of that here.
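(To illustrate the low switching cost, a hedged sketch: append an export request to an existing conversation and parse the result. The model name and prompts are illustrative only, and real provider memory features differ.)

```python
# Sketch of the low switching-cost point: if the model can see your history,
# you can just ask it to serialize that history for a new provider.
# Model name and prompts are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the env

history = [
    {"role": "user", "content": "Remember: I prefer plain-language summaries."},
    {"role": "assistant", "content": "Noted! I'll keep summaries plain."},
]

export_request = history + [{
    "role": "user",
    "content": "Export everything you know about my preferences from this "
               "conversation as a JSON object I can import elsewhere.",
}]

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=export_request,
    response_format={"type": "json_object"},  # ask for parseable JSON
)
portable = json.loads(resp.choices[0].message.content)  # portable profile
print(portable)
```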
Only a couple of days after my last post, vibe hacking in full force.
www.bloomberg.com/news/article...
Followed by a panel on GenAI, Agentic AI, Law, and CS (1:15-2:00pm ET) with @peterhenderson.bsky.social (Princeton) and Georgios Piliouras (Google DeepMind)
Spotlight Talks (2:30pm-4:00pm) by
@aloni-bologna.bsky.social (UChicago), Rebecca Wexler (Columbia), and @jubaz.bsky.social (Georgia Tech)
Unfortunately, the scale of the problem makes it challenging. Even if we freeze at Codex-5.3/Opus-4.6-level capabilities, attackers can probably scaffold them to pretty easily identify tons of vulnerabilities.
As models discover more exploits, we may need something like a responsible disclosure period for major jumps in cyber capabilities. Before release, trusted defenders get privileged access to the more capable model. Together, they scan for vulnerabilities at scale and notify as many affected parties as possible.
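(Very roughly, defender-side scanning at scale could look like the sketch below: walk a codebase, ask a model to flag likely vulnerabilities, and queue findings for triage. Model name and prompt are placeholders; a real disclosure pipeline would need much more.)

```python
# Rough sketch of defender-side scanning at scale: walk a codebase and ask
# a model to flag likely vulnerabilities for human triage. Model name and
# prompt are placeholders; a real pipeline would add dedup, severity
# scoring, and maintainer notification under a disclosure window.
import pathlib
import anthropic

client = anthropic.Anthropic()
findings = []

for path in pathlib.Path("repo_under_review").rglob("*.py"):
    code = path.read_text(errors="ignore")[:8000]  # crude context cap
    reply = client.messages.create(
        model="claude-opus-4-20250514",  # placeholder
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"List any likely security vulnerabilities in this "
                       f"file, or say NONE:\n\n{code}",
        }],
    )
    text = reply.content[0].text
    if "NONE" not in text:
        findings.append((str(path), text))

# Findings would then go to maintainers before broader model release.
for path, note in findings:
    print(path, "->", note[:120])
```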
Missing from the headline: "using Claude Code."
Vibe hacking is already a thing. I've been saying this for a while, but no model-level safeguards will prevent it entirely. What they can do is slow it down enough for us to put societal-level safeguards in place.
www.popsci.com/technology/r...
That was fast.
New copyright law "hypothetical" just dropped.
Warner Music and Udio settle their copyright case, agree to collaborate on "new song creation service that will allow users to remix tunes by established artists." Expect more such settlements as copyright holders look to leverage AI to boost revenue!
We’ve been pushing hard on AI for public good. One example: partnering with CourtListener to launch accessible legal semantic search! Many more cool AI projects coming soon from my group aimed at improving access to justice, often spearheaded by @dominsta.bsky.social!
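(Not the actual CourtListener integration, but for flavor: semantic search boils down to embedding documents and queries, then ranking by cosine similarity. A minimal sketch with made-up opinion snippets:)

```python
# Minimal sketch of semantic search over legal text: embed opinions and the
# query, then rank by cosine similarity. Model choice and snippets are
# illustrative; this is not the deployed system.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

opinions = [
    "The court held that the search violated the Fourth Amendment.",
    "Summary judgment granted on the breach of contract claim.",
    "The statute was found void for vagueness under due process.",
]
doc_vecs = model.encode(opinions, normalize_embeddings=True)

query = "warrantless search of a vehicle"
q_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec  # cosine similarity (vectors are unit-normalized)
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {opinions[i]}")
```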
Sora2 is speedrunning my AI law class. We covered issues with copyrighted characters in week 2, and right of publicity claims in week 3. Georgia has a postmortem right of publicity claim. Some states don't (e.g., famous Marilyn Monroe estate battle).
How the Gemini Computer Use Agent feels about the "Choose Chrome" popup.
gemini.browserbase.com
Why might AI companies take on larger copyright litigation risks? If they estimate AGI-scale impacts are 2-3 yrs out, litigation will lag that long. By then, the bet might be: govts step in (too big to fail), rightsholders become reliant on AI, fair use prevails, or they'll have the $$$ to settle.