
Posts by Danny Wilf-Townsend

This is tangential, but I wonder how the idea/expression distinction will hold up over time with generative AI. Expressing an idea used to be costly, typically taking effort and often some creativity. What happens when ideas serve as prompts and expression is cheap or free, and easily iterated?

16 hours ago 0 0 0 0
This morning, I became the first New York City Mayor to visit Housing Court. And what I saw will stay with me for a long time. Families on the brink of losing their homes. Tenants navigating unsafe conditions, harassment, and uncertainty, searching for justice in an overwhelming system. Small property owners trying to keep up with their mortgage payments.
I met with Chief Administrative Judge Joseph Zayas, New York City Administrative Judge Shahabuddeen Ally, and other members of the bench, and I walked through Resolution, HP, and NYCHA Parts. I spoke with people in intake, with legal service providers, and with the advocates who show up every day to stand beside New Yorkers who need support.
Housing court is where the promises we make about dignity, stability, and public excellence are tested in real time.
In the months ahead, my team will work closely with the Chief Judge and the Chief Administrative Judge to confront the concerns we heard directly from judges, tenants, landlords, legal service providers, and advocates.


1 week ago 2127 286 20 36

disagree with, I'd be curious. Part of the background here is recurring conversations with accomplished, ethical lawyers who have integrated AI into their work with strong quality controls and found big productivity boosts. So it seems clearly possible, but hard to get a sense of the proportions.

2 months ago 0 0 0 0

... likely inconsistent with the hypothesis that most use is irresponsible (again, in the colloquial sense of responsibility, as in inattentive to downsides, no quality control, etc.). Of course, it's an argument about low visibility, so it's hard to be certain! But if there's something specific you

2 months ago 0 0 1 0

Thanks for reading. The argument is that what we do know is (1) tens of thousands of lawyers use AI multiple times a day; and (2) a majority of clients are asking for it to be used. That, to me, is consistent with lots of use that people find productive and not causing problems. And it seems ...

2 months ago 0 0 1 0
Preview
The low visibility of lawyers' responsible AI use There are incentives to keep responsible use quiet, while irresponsible use makes headlines. That's a problem for figuring out the right regulatory balance.

In case you're interested, I wrote up some relevant thoughts about the low visibility we have into lawyers' responsible AI use here: www.wilftownsend.net/p/the-low-vi...

2 months ago 1 0 1 0
Preview
The low visibility of lawyers' responsible AI use There are incentives to keep responsible use quiet, while irresponsible use makes headlines. That's a problem for figuring out the right regulatory balance.

... we have strong reasons to think there are plenty of lawyers using AI in ways that are not obviously irresponsible. That low visibility is a problem, because we need a more robust shared sense of costs and benefits to get the regulatory balance right www.wilftownsend.net/p/the-low-vi...

2 months ago 0 0 0 0

New post up about what I think is an under-discussed problem: the low visibility that we collectively have into responsible AI use by legal professionals. We have lots of data points about hallucinations, but many fewer public discussions of responsible uses, even though ...

2 months ago 0 0 1 0

Thomson Reuters link: thomsonreuters.com/content/dam/...
Bloomberg link: aboutblaw.com/bjbL

2 months ago 1 0 0 0
Post image

Of course, that doesn't tell you what proportion of lawyers are "current users." Last spring, Bloomberg found that a majority of lawyers "have used" generative AI for work, which is at least something, but isn't quite the same as "currently use."

2 months ago 2 0 1 0
Post image

A periodic update about the frequency and intensity of AI use in legal practice: Thomson Reuters reports that 55% of generative AI users at law firms use it at least daily, with 30% multiple times a day:

2 months ago 1 0 1 0
Preview
Legal AI Revolution Moves Ahead at Measured Pace, Survey Says Adoption of artificial intelligence isn’t transforming the day-to-day practices of many lawyers—at least not yet, according to Bloomberg Law’s new State of Practice survey.

That all makes sense; thanks for responding! My sense is that AI use in the legal profession is often in the shadows, except for hallucinations and sanctions in courts. Some recent-ish data suggests about 20% of private sector lawyers are using it daily news.bloomberglaw.com/legal-ops-an...

2 months ago 1 0 1 0

Do you have a sense, from informal conversations or otherwise, of how many lawyers are using the technology in briefs before the court that do not result in hallucinations or other defects?

2 months ago 1 0 1 0

Happy to see a cameo here from one of my favorite tests in all of the law: whether a procedural rule is really a procedural rule depends on whether it "really regulates procedure."

3 months ago 3 0 0 0

Imagine, there are still people who think the United States should switch to the metric system and abandon common sense units of measurement like this

3 months ago 21 5 2 0

some good climate/energy news:

* 96% of new U.S. power capacity was carbon-free in 2024 (56 gigawatts!)

* 2025 included the first month ever in which 51% of power on the U.S. grid was carbon-free

* The global trend is overwhelming: the world is now investing more money in clean energy than in fossil fuels

3 months ago 1151 424 13 14

…points to analysis but give some for each of those other things too. And it gets tweaked for different issues; e.g., for some issues, spotting the issue is the real challenge, and so more points go there than with other issues.

3 months ago 1 0 1 0

No idea if this is the best way, but I just have a detailed rubric with specific points for each of those things: e.g., a point for spotting the issue, a point for articulating a rule, up to three points for analysis, a point for supporting with relevant authority, etc. I aim to give most…
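The rubric described above could be sketched as a small scoring table; this is a hypothetical illustration only (the criterion names and point values are assumptions drawn from the post, not the actual exam key):

```python
# Hypothetical per-issue rubric: a point for spotting the issue,
# a point for articulating the rule, up to three points for analysis,
# a point for citing relevant authority. Values are illustrative.
RUBRIC = {
    "spotting": 1,
    "rule": 1,
    "analysis": 3,
    "authority": 1,
}

def score_issue(earned: dict) -> int:
    """Sum a student's earned points, capped at each criterion's maximum."""
    return sum(
        min(earned.get(criterion, 0), maximum)
        for criterion, maximum in RUBRIC.items()
    )

# Example: a student who spots the issue, states the rule, and gives
# strong analysis but cites no authority.
print(score_issue({"spotting": 1, "rule": 1, "analysis": 3}))  # prints 5
```

As the follow-up post notes, the weights can be tweaked per issue (e.g., more points on spotting when spotting is the real challenge) by swapping in a different rubric for that issue.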

3 months ago 1 0 1 0

There were only seven of them!

3 months ago 1 0 0 0

Look, it was certainly a *memorable* book.

3 months ago 1 0 0 0

I feel like this article is Michael Orthofer erasure

3 months ago 1 0 2 0
Preview
When AI Models Can Continually Learn, Will Our Regulations Be Able to Keep Up? Regulation has already been hard enough for static AI models.

I have a new post out in @lawfaremedia.org today about continual learning: the goal, shared by many AI developers, of building tools that can learn from their users. That technology could have many uses, but it will also challenge existing ways we are trying to regulate AI. www.lawfaremedia.org/article/when...

4 months ago 4 1 1 0

For folks who are new to the idea of continual learning, I would recommend this post by @dwarkesh.skystack.xyz dwarkesh.com/p/timelines-... and this post by @binarybits.bsky.social understandingai.org/p/context-ro..., both of which highlight limitations of current AI models stemming from their inability to learn.

4 months ago 1 0 0 0
Post image

For people who are pretty current with conversations about regulating AI, these two paragraphs are the crux of the post. This is responding in part to writing by @deanwb.bsky.social, Ketan Ramakrishnan, and @milesbrundage.bsky.social on entity-based paradigms for AI regulation.

4 months ago 2 0 1 0
Preview
When AI Models Can Continually Learn, Will Our Regulations Be Able to Keep Up? Regulation has already been hard enough for static AI models.

Regulating static AI models is already difficult, and if AI tools become ones that can learn, regulations will need to adapt quickly. @dannywt.bsky.social explores what new regulatory approaches could look like in a future where change is common and comes fast.

4 months ago 12 6 4 3

A thoughtful thread on the Netflix / Warner Bros merger. I think the points about consumer preferences are particularly important — it’s sometimes hard, but often important, to tease apart when law and policy arguments are inflected by different preferences about product features

4 months ago 1 0 0 0

An update for Sonnet 4.5, released last week: it scored 60.2% on my final exam (with extended thinking on, 54.4% without it). That's a big step up (~20 percentage points) from Opus 4.1's scores, and puts Sonnet 4.5 close to, if slightly behind, other leading models. On a human curve, that's roughly an A-/B+

6 months ago 3 0 0 0