This is tangential, but I wonder how the idea / expression distinction will hold up over time with generative AI. Expressing an idea used to be costly, typically taking effort and often some creativity. What happens when ideas serve as prompts and expression is cheap or free, and easily iterated?
Posts by Danny Wilf-Townsend
This morning, I became the first New York City Mayor to visit Housing Court. And what I saw will stay with me for a long time. Families on the brink of losing their homes. Tenants navigating unsafe conditions, harassment, and uncertainty, searching for justice in an overwhelming system. Small property owners trying to keep up with their mortgage payments. I met with Chief Administrative Judge Joseph Zayas, New York City Administrative Judge Shahabuddeen Ally, and other members of the bench, and I walked through the Resolution, HP, and NYCHA Parts. I spoke with people in intake, with legal service providers, and with the advocates who show up every day to stand beside New Yorkers who need support. Housing Court is where the promises we make about dignity, stability, and public excellence are tested in real time. In the months ahead, my team will work closely with the Chief Judge and the Chief Administrative Judge to confront the concerns we heard directly from judges, tenants, landlords, legal service providers, and advocates.
disagree with, I'd be curious. Part of the background here is recurring conversations with accomplished, ethical lawyers who have integrated AI into their work with strong quality controls and found big productivity boosts. So it seems clearly possible, but hard to get a sense of the proportions.
... likely inconsistent with the hypothesis that most use is irresponsible (again, in the colloquial sense of responsibility, as in inattentive to downsides, no quality control, etc.). Of course, it's an argument about low visibility, so it's hard to be certain! But if there's something specific you
Thanks for reading. The argument is that what we do know is (1) tens of thousands of lawyers use AI multiple times a day; and (2) a majority of clients are asking for it to be used. That, to me, is consistent with lots of use that people find productive and not causing problems. And it seems ...
In case you're interested, I wrote up some relevant thoughts about the low visibility we have into lawyers' responsible AI use here: www.wilftownsend.net/p/the-low-vi...
... we have strong reasons to think there are plenty of lawyers using AI in ways that are not obviously irresponsible. That low visibility is a problem, because we need a more robust shared sense of costs and benefits to get the regulatory balance right www.wilftownsend.net/p/the-low-vi...
New post up about what I think is an under-discussed problem: the low visibility that we collectively have into responsible AI use by legal professionals. We have lots of data points about hallucinations, but many fewer public discussions of responsible uses, even though ...
Thomson Reuters link: thomsonreuters.com/content/dam/...
Bloomberg link: aboutblaw.com/bjbL
Of course, that doesn't tell you what proportion of lawyers are "current users." Last spring, Bloomberg found that a majority of lawyers "have used" generative AI for work, which is at least something, but isn't quite the same as "currently use."
A periodic update about the frequency and intensity of AI use in legal practice: Thomson Reuters reports that 55% of generative AI users at law firms use it at least daily, with 30% multiple times a day:
That all makes sense; thanks for responding! My sense is that AI use in the legal profession is often in the shadows, except for hallucinations and sanctions in courts. Some recent-ish data suggests about 20% of private sector lawyers are using it daily news.bloomberglaw.com/legal-ops-an...
Do you have a sense, from informal conversations or otherwise, of how many lawyers are using the technology in briefs before the court that do not result in hallucinations or other defects?
Happy to see a cameo here from one of my favorite tests in all of the law: whether a procedural rule is really a procedural rule depends on whether it "really regulates procedure."
Imagine, there are still people who think the United States should switch to the metric system and abandon common sense units of measurement like this
some good climate/energy news:
* 96% of new US power capacity was carbon-free in 2024 (56 gigawatts!)
* 2025 included the first-ever month in which a majority (51%) of power on the U.S. grid was carbon-free
* The global trend is overwhelming: the world is now investing more $ in clean energy than in fossil fuels
…points to analysis but give some for each of those other things too. And it gets tweaked for different issues, eg for some issues spotting it is the real challenge, and so more points go there than with other issues.
No idea if this is the best way, but I just have a detailed rubric where I have specific points for each of those things - eg, a point for spotting the issue, a point for articulating a rule, up to three points for analysis, a point for supporting with relevant authority, etc. I aim to give most…
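The rubric described in the post above could be sketched as a simple per-issue tally. This is just an illustrative sketch: the criterion names and the example answer below are hypothetical, with point caps taken from the post (one point for spotting the issue, one for the rule, up to three for analysis, one for authority).

```python
# Sketch of a per-issue grading rubric: each criterion has a maximum score,
# and an answer earns points per criterion up to that cap.
# Criterion names and weights here are illustrative, not the actual exam rubric.

RUBRIC = {
    "spot_issue": 1,   # spotting the issue
    "rule": 1,         # articulating a rule
    "analysis": 3,     # up to three points for analysis
    "authority": 1,    # supporting with relevant authority
}

def score_issue(earned: dict) -> int:
    """Sum earned points, capping each criterion at its rubric maximum."""
    return sum(min(earned.get(c, 0), cap) for c, cap in RUBRIC.items())

# Hypothetical answer: spots the issue and states the rule, partial analysis,
# no supporting authority.
answer = {"spot_issue": 1, "rule": 1, "analysis": 2, "authority": 0}
print(score_issue(answer))  # → 4 out of a possible 6
```

Capping each criterion with `min` mirrors the "up to three points" idea: extra analysis beyond the cap does not raise the score, which keeps most of the weight on analysis without letting it dominate entirely.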
There were only seven of them!
Look, it was certainly a *memorable* book.
I feel like this article is Michael Orthofer erasure
I have a new post out in @lawfaremedia.org today about continual learning, the goal of many AI developers to build tools that can learn from their users. That technology could have many uses, but also will challenge existing ways we are trying to regulate AI. www.lawfaremedia.org/article/when...
For folks who are new to the idea of continual learning, I would recommend this post by @dwarkesh.skystack.xyz dwarkesh.com/p/timelines-... and this post by @binarybits.bsky.social understandingai.org/p/context-ro..., which highlight limitations of current AI models stemming from their inability to learn.
For people who are pretty current with conversations about regulating AI, these two paragraphs are the crux of the post. This is responding in part to writing by @deanwb.bsky.social, Ketan Ramakrishnan, and @milesbrundage.bsky.social on entity-based paradigms for AI regulation.
Regulating static AI models is already difficult and if AI tools will become ones that can learn, regulations will need to adapt quickly. @dannywt.bsky.social explores what new regulatory approaches could look like in a future where change is common and comes fast.
A thoughtful thread on the Netflix / Warner Bros merger. I think the points about consumer preferences are particularly important — it’s sometimes hard, but often important, to tease apart when law and policy arguments are inflected by different preferences about product features
An update for Sonnet 4.5, released last week: it scored 60.2% on my final exam (with extended thinking on; 54.4% without it). That's a big step up (~20 percentage points) from Opus 4.1's scores, and puts Sonnet 4.5 close to, if slightly behind, other leading models. On a human curve, that's ~ an A-/B+