
Posts by Suffolk LIT Lab

Grammarly Lawsuit Shows Existing Laws Can Combat Deepfakes. Calls for new deepfake laws overlook the strength—and breadth—of existing legal protections.

The lawsuit challenging Grammarly's product, which allows users to edit text “in the style” of identifiable journalists and scholars, illustrates not the need for new laws but the underappreciated breadth and applicability of current law to generative AI, writes @profrothman.bsky.social.

15 hours ago 54 18 0 0

I need to read this more deeply, but there are some interesting insights here on how AI use affects the ol' noggin and how to plan ways to incorporate AI more safely, for lack of a better description.

17 hours ago 6 3 0 0

This is how open source won—innovation without mandatory permission created space to *build* something better, which changed minds.

People envisioning a better future around AI are going to have to Do That Future.

4 days ago 9 3 2 0
Text Shot: For the past two years, enterprises evaluating open-weight models have faced an awkward trade-off. Google's Gemma line consistently delivered strong performance, but its custom license — with usage restrictions and terms Google could update at will — pushed many teams toward Mistral or Alibaba's Qwen instead. Legal review added friction. Compliance teams flagged edge cases. And capable as Gemma 3 was, "open" with asterisks isn't the same as open.

Gemma 4 eliminates that friction entirely. Google DeepMind's newest open model family ships under a standard Apache 2.0 license — the same permissive terms used by Qwen, Mistral, Arcee, and most of the open-weight ecosystem.


Google-releases-gemma-4-under-apache-2-0-and-that-license-change-may-matter venturebeat.com/technology/goo… #AI #OpenSource

6 days ago 5 2 0 0

As someone who helps overwhelmed and overloaded lawyers get their workflows and processes dialed in, this whole bad-AI-use epidemic just kills me because it is so totally avoidable.

A few thoughts...

🧵

1 week ago 0 2 1 1
Announcing ALImmersion for Docassemble: complete court forms in 3D virtual space! — Suffolk LIT Lab. Experience court forms like never before: navigate court forms in virtual space, with help from the LIT Lab's new AI assistant, Vergil. Learn more!

We at the @suffolklitlab.org are pleased to announce the world's first fully immersive court forms experience. Screen shot of actual VR experience pictured in preview image below. Be sure to click the Learn More button to get the full experience (no goggles required).

1 week ago 5 3 2 1
Post image

BREAKING: Anthropic wins an injunction barring federal agencies from carrying out Trump’s directive. storage.courtlistener.com/recap/gov.us...

2 weeks ago 937 226 13 16

CourtListener is run by a small and scrappy nonprofit that can’t do everything right away. But it’s open source, so if you want something badly enough, you can just build it! If it’s useful for others, you can submit your code, and if it passes muster, it’ll get added. How cool is that?

2 weeks ago 9 2 1 0
Wikipedia Bans AI-Generated Content. “In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed.”

Wikipedia enacts what is essentially a full ban on AI content: www.404media.co/wikipedia-ba...

2 weeks ago 4072 1039 27 104

💯

2 weeks ago 53 9 0 1
A man making a surprised face while sitting at a table in a restaurant. Alt: Nathan Fillion GIF

Me, reading this lede:

2 weeks ago 4 1 1 0

"Plenty of people are sharing it. And it might be one of the most unintentionally revealing demonstrations of AI’s actual problems that a politician has ever produced — just not in the way Sanders thinks." TL;DR: beware broad strokes + facts matter.

2 weeks ago 3 3 0 0
Claude Meets Westlaw and Lexis. Something remarkable has happened in the last few months, and most of the legal academy has not noticed. Anthropic's Claude—the AI assistant many of us have experimented with for drafting, brainstormi...

Connecting Claude to WEXIS. legaled.ai/claude-meets...

2 weeks ago 5 2 1 1
The Finite Scroll. An open source client-side algorithmically-driven RSS reader, living with your data on your device.

Agreed. See, e.g., thefinitescroll.org, an open-source, client-side, algorithmically-driven RSS reader that lives w/ your data on your device, built by @suffolklitlab.org.

2 weeks ago 3 2 0 0
Post image

Happy Gideon Day! Thank you to all of the public defenders in the trenches who fight every day to bend the arc of the law a little more toward justice.

3 weeks ago 29 10 0 1

Guys, this Subnautica case is incredible. I’m reading now and will clip here.

I know nothing about the issues - all I know about the game is from the opinion.

Ok, bigger company buys video game studio and promises the founders a bonus for their new game.
/1

courts.delaware.gov/Opinions/Dow...

3 weeks ago 236 75 13 22
Post image

Fantastic news in the UK today - the government has apparently ditched its plan to force creatives to 'opt out' if they don't want AI companies training on their life's work.

www.thetimes.com/uk/technolog...

🧵 1/4

3 weeks ago 3042 1045 23 126
CEO Ignores Lawyers, Asks ChatGPT How to Void $250 Million Contract, Loses Terribly in Court. The CEO of Krafton used ChatGPT to push out the head of the studio developing Subnautica 2 against the advice of his own legal team and failed miserably.

The CEO of Krafton used ChatGPT to push out the head of the studio developing Subnautica 2 against the advice of his own legal team and failed miserably.

3 weeks ago 600 231 17 25
What comes next with open models. Markets, capabilities, cope, and bewilderment in the industrialization of language models.

What comes next with open models.

The futility of chasing the frontier, the shift from weights to systems, the reasons no good business model exists, and how to move from a few weights to a dynamic ecosystem.

A long read on my vision for the future.
www.interconnects.ai/p/the-next-p...

3 weeks ago 61 9 4 6
Automating Bluebook Citations in Legal Scholarship: A User's Guide for Bluebook in Zotero. This paper documents an implementation of the Bluebook Law Review citation format using the Zotero citation manager software. It introduces revisions to an exis...

Legal writers: I've posted to SSRN a short user's guide on using the Zotero citation manager and its Word plugin to automatically generate Bluebook citations that are 95%* compliant. The paper explains which fields to use for each type of source and shows examples of generated citations.

3 weeks ago 176 73 17 10
Defining Technologies of Our Time - Aspen Digital. This handbook offers an accessible, easy-to-use entry point for grappling with the question of how to define AI in a legal context.

🤖 In policy? Thinking about the definition of "AI"?
Led by @aspendigital.bsky.social, a set of us at the intersection of AI/ethics/law/policy put together this resource on the lineage of policy "AI" definitions, what they're getting right, and what might be improved.
www.aspendigital.org/report/defin...

3 weeks ago 23 5 4 1

Some days I think that more attorneys should start their own firms, and then I realize that there are litigators who have never made their own table of authorities.

(This is not shade! Just ruminating on how clinic practice is good preparation for small firm life.)

3 weeks ago 36 4 3 0
A bottle of Caesar salad dressing stabbed with a knife, captioned “Et tu, Brute?”—a visual pun referencing Julius Caesar’s betrayal.


🥗🔪 Beware the Ides of March! Early bird pricing ends for #LITCon2026 on the 15th. Act now to join us in Boston or virtually for @suffolklitlab.org’s annual legal innovation & tech conference on April 13. suffolklitlab.org/events/lit-c...

4 weeks ago 8 4 1 0

“corporations have been maximizing paperclips since before they invaded India” is going right up next to “we have killer robots already, but they’re cars which makes the ‘killer’ part invisible to most Americans”

4 weeks ago 19 2 0 0

Interesting to hear the USAO say this when the DOJ AI inventory lists both LexisNexis AI and Westlaw AI as being deployed for "address[ing] manual process of conducting legal research."
bsky.app/profile/rand...

4 weeks ago 16 3 1 0

We don't need the Bernstein cases to conclude that models and outputs are protected speech. The Hurley / Tornillo / Moody line of cases says that curating and disseminating expression (even via algorithms) is a protected editorial activity.

That's precisely what model devs (like Anthropic) do.

4 weeks ago 8 3 2 0
Anthropic sues Trump admin over supply-chain risk label. The AI startup has filed federal lawsuits challenging the unprecedented designation, which prevents the company from working with the government and threatens its wider business.

And we finally have a lawsuit for this Fall's AI & the Law class. I chose this link because it actually linked to the filing.¹ I'll do one better and link to the docket.²

¹ Complaint: storage.courtlistener.com/recap/gov.us...

² Docket: www.courtlistener.com/docket/72379...

1 month ago 4 4 1 0

"crisis machines" by @jmiers230.bsky.social at #ilwip

chatbot suicides are making headlines. while tragic, the coverage is sensationalized & misses important context

the regulatory impulse: we must do something! not about the suicide but about the technology "causing" it

1 month ago 10 2 2 0

This is a long and interesting comment thread on the OpenAI UPL case. I’ve linked way down a thread to pull one interesting line. All of it is worth a read.

1 month ago 0 0 0 0

I knew working on suicide research was going to be heavy.

But what's actually starting to get to me is reading all of these chatbot suicide laws and legislative proposals that call for measures that have been shown to exacerbate crisis.

www.governor.ny.gov/sites/defaul...

1 month ago 75 21 4 3