
Posts by Sam Adler

It was a pleasure to discuss Immigration Enforcement Intermediaries w/ @justinhendrix.bsky.social. If you want to keep abreast of these issues, make sure to follow @sambiddle.bsky.social, @josephcox.bsky.social, @jmbooyah.bsky.social, @georgetownprivacy.bsky.social & @techpolicypress.bsky.social.

2 days ago 11 2 1 0
S.T.O.P. - The Surveillance Technology Oversight Project

BREAKING: S.T.O.P. Condemns $12M ICE AI Surveillance Contract

"ICE’s so-called Project SAFE HAVEN is a danger to everyone." - S.T.O.P. Executive Director Michelle Dahl #ICE #surveillance
www.stopspying.org/content-inpu...

6 days ago 5 2 1 1

“Immigration Enforcement Intermediaries” is forthcoming in the BYU Law Review! @chinmayisharma.bsky.social and I describe how the rise of a vendor-mediated enforcement apparatus disrupts immigration federalism and circumvents democratic accountability. papers.ssrn.com/sol3/papers....

2 weeks ago 9 3 1 1

So excited to see my article with Brenda Dvoskin — “Safe Sex in the Age of Big Tech Feminism” — published this week in the Harvard Journal of Law & Technology! papers.ssrn.com/sol3/papers....

1 month ago 4 1 1 0
Hacked data shines light on homeland security’s AI surveillance ambitions Records show DHS tech incubator spending large sums on partnerships that would expand surveillance capabilities

New from me today - DHS hack shows funding for AI surveillance including automated surveillance in airports; adapters allowing agents to use phones for biometric scanning; and an AI platform that ingests all 911 call data nationally www.theguardian.com/us-news/2026...

1 month ago 158 119 3 11
ICE agents reveal daily arrest quotas and surveillance app in rare court testimony Under oath, officers said they were told to make eight arrests a day and given special tech to help choose ‘targets’

When a federal judge (finally) put ICE officers under oath, they admitted that they are given daily detention quotas and rely heavily on a Palantir-supplied AI tool to select targets, without warrants and without enough evidence to obtain one. They simply go into neighborhoods and round people up.

1 month ago 10418 6211 217 415
The Military’s Use of AI, Explained The risks to soldiers and civilians are mounting as the Pentagon races to adopt the latest commercial advances in artificial intelligence.

Congress should ensure that the Pentagon explains not just how it uses AI but also how much it’s spending on the technology, as well as known risks and failures of the systems it acquires. bit.ly/40rNSg4

1 month ago 77 28 2 0
The Business of Military AI The Pentagon has been spending tens of billions of dollars to adopt new technologies at breakneck speed. Without oversight and safeguards, military applications of artificial intelligence could jeopar...

As @amostoh.bsky.social and I explain in our new report, the military has been ramping up its adoption of AI, while oversight and safeguards have failed to keep up.

But the Pentagon’s dispute with Anthropic has brought a grave threat into focus: using AI to pry into Americans’ private lives 🧵 1/

1 month ago 36 23 1 1
Inside the dangerous and shady business of data brokers Even if you don't know data brokers, they almost certainly know you. With no nationwide U.S. privacy laws, experts warn there are often minimal safeguards against motivated people exploiting them for ...

Excellent & empathetic reporting by @iododds.bsky.social in @the-independent.com about data brokers & interpersonal abuse, with smart quotes from @samadler.bsky.social & mentions of our forthcoming @califlrev.bsky.social article w/ @chinmayisharma.bsky.social. www.the-independent.com/news/world/a...

1 month ago 4 3 1 0
As deepfake technology becomes increasingly sophisticated and accessible, American lawmakers are responding with a flurry of urgent legislative action to address its potential harms. Our 50-state survey of proposed and enacted deepfake legislation reveals a complex regulatory landscape in which jurisdictions are adopting a range of legal approaches, including criminal punishments, civil remedies, or a combination of methods. We also find that legislators are frequently turning to tort-law frameworks to address the harms of deepfakes. This article explores the current landscape of tort-based regulations of deepfakes. In addition to providing an overview of the most recent legislative developments, we unpack and compare the various tort-law methods arising at the state and federal level. We further consider how lawmakers are modifying existing tort laws to address the unique concerns raised by deepfakes.
While individualistic tort remedies allow victims of deepfakes to seek direct recourse through familiar private rights of action, our analysis also identifies practical and conceptual limitations with this approach. Traditional tort frameworks struggle to address key challenges posed by deepfakes, including anonymous creation, viral distribution at technological scale, and harms affecting both individuals and society broadly.
In light of these limitations, legislators are innovatively adapting traditional tort concepts—such as standing, mental states, causation, immunities, and remedies—to address deepfakes’ unique characteristics. Yet the very need for these adaptations reveals some of tort law’s shortcomings and suggests a space for complementary regulatory approaches. We consider some potential approaches that could provide this more complete framework, like tort liability for entities that enable deepfake creation and circulation, and civil enforcement mechanisms that empower state actors to vindicate both individual and societal interests. Ultimately, our finding…


My new piece with @sonjawest.bsky.social is live in the Journal of Tort Law!

Our original 50-state survey of 466 deepfake laws reveals a complex landscape in which lawmakers are experimenting with novel criminal, civil & administrative tools to address deepfakes. papers.ssrn.com/sol3/papers....

5 months ago 19 6 0 0
Google Calls ICE Agents a Vulnerable Group, Removes ICE-Spotting App ‘Red Dot’ The move comes as Apple removed ICEBlock after direct pressure from U.S. Department of Justice officials and signals a broader crackdown on ICE-spotting apps.

New: Google removed an ICE-spotting app after calling ICE agents a vulnerable group. An immigration support group on the ground in Chicago, the current focus of ICE, said it was using the app, called Red Dot, to source tips. Apple removed that app too
www.404media.co/google-calls...

6 months ago 286 155 30 32

👀

And don't miss the closing line of their announcement: "If you would like to bring your memory details over from a different AI tool or export your memory from Claude for backup or migration, you can follow these instructions." (link goes to: support.anthropic.com/en/articles/... )

7 months ago 3 1 0 0
AI Procurement As Regulatory Reconnaissance Artificial Intelligence ("AI") is a black box technology in a black box industry. Some view AI as a lifechanging technology capable of advancing socie

Excited to share a draft of my Note—AI Procurement as Regulatory Reconnaissance—forthcoming in the Fordham Law Review. Inspired by @cary-coglianese.bsky.social, I contend that federal procurement offers a compelling information-forcing tool to inform AI regulation.

papers.ssrn.com/sol3/papers....

7 months ago 3 2 0 0
Unbundling AI Openness The debate over AI openness—whether to make components of an artificial intelligence system available for public inspection and modification—forces polic

Thrilled to share that Unbundling AI Openness, my article with @alanrozenshtein.com and Parth Nobel, is forthcoming in the Wisconsin Law Review! It introduces a framework of "differential openness" to correct the oversimplification of AI as either "open" or "closed."

papers.ssrn.com/sol3/papers....

7 months ago 5 3 0 1

If the future of AI is personal, it should also be portable. Important piece from @mchrisriley.com for @techpolicypress.bsky.social

8 months ago 5 1 0 0
A DOGE AI Tool Called SweetREX Is Coming to Slash US Government Regulation Named for its developer, an undergrad who took leave from UChicago to become a DOGE affiliate, a new AI tool automates the review of federal regulations and flags rules it thinks can be eliminated.

NEW: DOGE affiliate Chris Sweet has developed an AI tool that is being used to rapidly slash government regulations, according to details of a meeting reviewed by @wired.com. Scoop by me: www.wired.com/story/sweetr...

8 months ago 197 107 14 16
'This Was Trauma by Simulation': ChatGPT Users File Disturbing Mental Health Complaints Gizmodo obtained consumer complaints to FTC through a FOIA request.

I filed a FOIA request with the FTC to get user complaints about ChatGPT.

In one case from Utah, a mother reports her son was experiencing a delusional breakdown and ChatGPT told him to stop taking his medication. The AI bot also told him that his parents were dangerous.

8 months ago 811 356 24 43
Brokered Violence: Safety for Sale in the Free Marketplace of Data In a world where data brokers enable violence by selling our information, safety requires a data-deletion right that people can reliably enforce.

Fordham Law Professor Chinmayi Sharma (@chinmayisharma.bsky.social) and student Sam Adler '26 (@samadler.bsky.social) argue that data brokers enable violence by selling people's information, and suggest that a data-deletion right should be enacted and enforced. via Lawfare (@lawfaremedia.org)

8 months ago 1 1 0 0

Excited to share a @lawfaremedia.org piece with @thomaskadri.bsky.social and @samadler.bsky.social that builds off our article Brokering Safety, forthcoming in @califlrev.bsky.social, that calls for an overdue conversation about how much we privilege data broker profits over human safety.

8 months ago 39 13 1 0

Appreciate the opportunity to write with @thomaskadri.bsky.social and @chinmayisharma.bsky.social for @lawfaremedia.org to call for a more expansive right to obscurity, building off our article—Brokering Safety—forthcoming in @califlrev.bsky.social

8 months ago 5 1 0 0

A new provocation from me, @samadler.bsky.social & @chinmayisharma.bsky.social to extend the proposal in our forthcoming Calif. L. Rev. (@califlrev.bsky.social) piece by letting *anyone* force data brokers to obscure info through a centralized process. It's time to call the 1st Amendment question!

8 months ago 7 2 1 0
Brenda Dvoskin and Thomas E. Kadri Receive 2025–2026 Haub Law Emerging Scholar Award in Women, Gender & Law Professors Brenda Dvoskin of Washington University School of Law and Thomas E. Kadri of the University of Georgia School of Law have been selected as the recipients of the 2025–2026 Haub Law Emerging ...

Well, this was a lovely surprise! www.pace.edu/news/brenda-...

8 months ago 31 3 0 1

The Agentic Executive has arrived . . .

www.governor.virginia.gov/newsroom/new...

9 months ago 1 0 0 0
Bailing Out Biometrics In 2023, hackers breached 23andMe and extracted the biometric and genealogical data of nearly seven million people. By 2025, that data—originally offered up in

23andMe didn’t own your DNA—it was bailed to them. In Bailing Out Biometrics (forthcoming, J. Tort Law), Elijah Gordon & I argue that biometric data deserves bailment protection. Allowing its breach and then selling it in bankruptcy isn’t just wrong—it’s illegal.

papers.ssrn.com/sol3/papers....

9 months ago 1 1 0 0

data brokers should not exist and it should be embarrassing every day a lawmaker doesn't try to control or destroy them

10 months ago 3 1 0 0
For Survivors Using Chatbots, ‘Delete’ Doesn’t Always Mean Deleted | TechPolicy.Press Many survivors may assume that AI platforms and chatbots offer common privacy protections, but these are not guaranteed, Belle Torek writes.

After a recent court order, OpenAI is now required to retain the very data many of its users believed to be most private. This introduces serious privacy risks, especially for vulnerable users like victims and survivors of domestic violence, Belle Torek writes.

10 months ago 56 36 2 8

Today's Lawfare Daily is a Fordham Law panel where @qjurecic.bsky.social, @josephcox.bsky.social, @orlylobel.bsky.social, Aziz Huq, and @jtlg.bsky.social discussed the role technology has played in supporting or undermining democracy.

10 months ago 17 7 1 1
The US Is Storing Migrant Children’s DNA in a Criminal Database Customs and Border Protection has swabbed the DNA of migrant children as young as 4, whose genetic data is uploaded to an FBI-run database that can track them if they commit crimes in the future.

Customs and Border Protection has swabbed the DNA of migrant children as young as 4, whose genetic data is uploaded to an FBI-run database that can track them if they commit crimes in the future.

By @wired.com:

10 months ago 10 11 0 2
The Latest Research on Cybersecurity & Data Privacy This list includes a selection of the latest research on cybersecurity & data privacy posted to SSRN in 2025. Understanding the Cyber Risks of Artificial Intelligence: An Ongoing, Comprehensive…

Check out the latest from the SSRN #blog which includes a selection of recent #research on #cybersecurity & data privacy.

Read more: http://spkl.io/63320ffily

#Academicsky #AcademicChatter #dataprivacy

10 months ago 5 3 1 1