The lawsuit challenging Grammarly's product, which allows users to edit text “in the style” of identifiable journalists and scholars, illustrates not the need for new laws but the underappreciated breadth and applicability of current law to generative AI, writes @profrothman.bsky.social.
Posts by Suffolk LIT Lab
I need to read this more deeply but some interesting insights on how AI use affects the ol' noggin and how to plan ways to incorporate AI more safely, for lack of a better description.
This is how open source won—innovation without mandatory permission created space to *build* something better, which changed minds.
People envisioning a better future around AI are going to have to Do That Future.
Text Shot: For the past two years, enterprises evaluating open-weight models have faced an awkward trade-off. Google's Gemma line consistently delivered strong performance, but its custom license — with usage restrictions and terms Google could update at will — pushed many teams toward Mistral or Alibaba's Qwen instead. Legal review added friction. Compliance teams flagged edge cases. And as capable as Gemma 3 was, "open" with asterisks isn't the same as open. Gemma 4 eliminates that friction entirely. Google DeepMind's newest open model family ships under a standard Apache 2.0 license — the same permissive terms used by Qwen, Mistral, Arcee, and most of the open-weight ecosystem.
"Google releases Gemma 4 under Apache 2.0, and that license change may matter" venturebeat.com/technology/goo… #AI #OpenSource
As someone who helps overwhelmed and overloaded lawyers get their workflows and processes dialed in, this whole bad-AI-use epidemic just kills me because it is so totally avoidable.
A few thoughts...
🧵
We at the @suffolklitlab.org are pleased to announce the world's first fully immersive court forms experience. Screen shot of actual VR experience pictured in preview image below. Be sure to click the Learn More button to get the full experience (no goggles required).
BREAKING: Anthropic wins an injunction barring federal agencies from carrying out Trump’s directive. storage.courtlistener.com/recap/gov.us...
CourtListener is run by a small and scrappy nonprofit that can’t do everything right away. But it’s open source, so if you want something badly enough, you can just build it! If it’s useful for others, you can submit your code, and if it passes muster, it’ll get added. How cool is that?
💯
"Plenty of people are sharing it. And it might be one of the most unintentionally revealing demonstrations of AI’s actual problems that a politician has ever produced — just not in the way Sanders thinks." TL;DR: beware broad strokes + facts matter.
Agreed. See, e.g., thefinitescroll.org, an open-source, client-side, algorithmically driven RSS reader that lives w/ your data on your device, built by @suffolklitlab.org
Happy Gideon Day! Thank you to all of the public defenders in the trenches who fight every day to bend the arc of the law a little more toward justice.
Guys, this Subnautica case is incredible. I’m reading now and will clip here.
I know nothing about the issues - all I know about the game is from the opinion.
Ok, bigger company buys video game studio, and promises founders a bonus for their new game.
/1
courts.delaware.gov/Opinions/Dow...
Fantastic news in the UK today - the government has apparently ditched its plan to force creatives to 'opt out' if they don't want AI companies training on their life's work.
www.thetimes.com/uk/technolog...
🧵 1/4
The CEO of Krafton used ChatGPT to push out the head of the studio developing Subnautica 2 against the advice of his own legal team and failed miserably.
What comes next with open models.
The futility of chasing the frontier, the change from weights to systems, the reasons no good business model exists, and how to change from a few weights to a dynamic ecosystem.
A long read on my vision for the future.
www.interconnects.ai/p/the-next-p...
Legal writers: I've posted to SSRN a short user's guide about how to use the Zotero citation manager and its Word plugin to automatically generate Bluebook citations that are 95%* compliant. The paper explains which fields to use for which type of source, and examples of generated citations.
🤖 In policy? Thinking about the definition of "AI"?
Led by @aspendigital.bsky.social, a set of us at the intersection of AI/ethics/law/policy put together this resource on the lineage of policy "AI" definitions, what they're getting right, and what might be improved.
www.aspendigital.org/report/defin...
Some days I think that more attorneys should start their own firms, and then I realize that there are litigators who have never made their own table of authorities.
(This is not shade! Just ruminating on how clinic practice is good preparation for small firm life.)
A bottle of Caesar salad dressing stabbed with a knife, captioned “Et tu, Brute?”—a visual pun referencing Julius Caesar’s betrayal.
🥗🔪 Beware the Ides of March! Early bird pricing ends for #LITCon2026 on the 15th. Act now to join us in Boston or virtually for @suffolklitlab.org’s annual legal innovation & tech conference on April 13. suffolklitlab.org/events/lit-c...
“corporations have been maximizing paperclips since before they invaded India” is going right up next to “we have killer robots already, but they’re cars which makes the ‘killer’ part invisible to most Americans”
Interesting to hear the USAO say this when the DOJ AI inventory lists both LexisNexis AI and Westlaw AI as being deployed for "address[ing] manual process of conducting legal research."
bsky.app/profile/rand...
We don't need the Bernstein cases to conclude that models and outputs are protected speech. The Hurley / Tornillo / Moody line of cases says that curating and disseminating expression (even via algorithms) is a protected editorial activity.
That's precisely what model devs (like Anthropic) do.
And we finally have a lawsuit for this Fall's AI & the Law class. I chose this link because it actually linked to the filing.¹ I'll do one better and link to the docket.²
¹ Complaint: storage.courtlistener.com/recap/gov.us...
² Docket: www.courtlistener.com/docket/72379...
"crisis machines" by @jmiers230.bsky.social at #ilwip
chatbot suicides are making headlines. while tragic, coverage is sensationalized & misses important context
the regulatory impulse: we must do something! not about the suicide but about the technology "causing" it
This is a long and interesting comment thread on the OpenAI UPL case. I’ve linked way down a thread to pull one interesting line. All of it is worth a read.
I knew working on suicide research was going to be heavy.
But what's actually starting to get to me is reading all of these chatbot suicide laws and legislative proposals that call for measures that have been shown to exacerbate crisis.
www.governor.ny.gov/sites/defaul...