It was a pleasure to discuss Immigration Enforcement Intermediaries w/ @justinhendrix.bsky.social. If you want to keep abreast with these issues, make sure to follow @sambiddle.bsky.social, @josephcox.bsky.social, @jmbooyah.bsky.social, @georgetownprivacy.bsky.social & @techpolicypress.bsky.social.
Posts by Sam Adler
BREAKING: S.T.O.P. Condemns $12M ICE AI Surveillance Contract
"ICE’s so-called Project SAFE HAVEN is a danger to everyone." - S.T.O.P. Executive Director Michelle Dahl #ICE #surveillance
www.stopspying.org/content-inpu...
“Immigration Enforcement Intermediaries” is forthcoming in the BYU Law Review! @chinmayisharma.bsky.social and I describe how the rise of a vendor-mediated enforcement apparatus disrupts immigration federalism and circumvents democratic accountability. papers.ssrn.com/sol3/papers....
So excited to see my article with Brenda Dvoskin — “Safe Sex in the Age of Big Tech Feminism” — published this week in the Harvard Journal of Law & Technology! papers.ssrn.com/sol3/papers....
New from me today - DHS hack shows funding for AI surveillance including automated surveillance in airports; adapters allowing agents to use phones for biometric scanning; and an AI platform that ingests all 911 call data nationally www.theguardian.com/us-news/2026...
When a federal judge (finally) put ICE officers under oath, they admitted that they are given daily detention quotas and rely heavily on a Palantir-supplied AI tool to select targets, without warrants and without enough evidence to obtain one. They simply go into neighborhoods and round people up.
Congress should ensure that the Pentagon explains not just how it uses AI but also how much it’s spending on the technology, as well as known risks and failures of the systems it acquires. bit.ly/40rNSg4
As @amostoh.bsky.social and I explain in our new report, the military has been ramping up its adoption of AI, while oversight and safeguards have failed to keep up.
But the Pentagon’s dispute with Anthropic has brought a grave threat into focus: using AI to pry into Americans’ private lives 🧵 1/
Excellent & empathetic reporting by @iododds.bsky.social in @the-independent.com about data brokers & interpersonal abuse, with smart quotes from @samadler.bsky.social & mentions of our forthcoming @califlrev.bsky.social article w/ @chinmayisharma.bsky.social. www.the-independent.com/news/world/a...
As deepfake technology becomes increasingly sophisticated and accessible, American lawmakers are responding with a flurry of urgent legislative action to address its potential harms. Our 50-state survey of proposed and enacted deepfake legislation reveals a complex regulatory landscape in which jurisdictions are adopting a range of legal approaches, including criminal punishments, civil remedies, and combinations of both. We also find that legislators are frequently turning to tort-law frameworks to address the harms of deepfakes.

This article explores the current landscape of tort-based regulations of deepfakes. In addition to providing an overview of the most recent legislative developments, we unpack and compare the various tort-law methods arising at the state and federal level. We further consider how lawmakers are modifying existing tort laws to address the unique concerns raised by deepfakes.

While individualistic tort remedies allow victims of deepfakes to seek direct recourse through familiar private rights of action, our analysis also identifies practical and conceptual limitations of this approach. Traditional tort frameworks struggle to address key challenges posed by deepfakes, including anonymous creation, viral distribution at technological scale, and harms affecting both individuals and society broadly. In light of these limitations, legislators are innovatively adapting traditional tort concepts—such as standing, mental states, causation, immunities, and remedies—to address deepfakes’ unique characteristics. Yet the very need for these adaptations reveals some of tort law’s shortcomings and suggests a space for complementary regulatory approaches.

We consider some potential approaches that could provide this more complete framework, such as tort liability for entities that enable deepfake creation and circulation, and civil enforcement mechanisms that empower state actors to vindicate both individual and societal interests.
Ultimately, our finding…
My new piece with @sonjawest.bsky.social is live in the Journal of Tort Law!
Our original 50-state survey of 466 deepfake laws reveals a complex landscape in which lawmakers are experimenting with novel criminal, civil & administrative tools to address deepfakes. papers.ssrn.com/sol3/papers....
New: Google removed an ICE-spotting app after calling ICE agents a vulnerable group. An immigration support group on the ground in Chicago, the current focus of ICE, said they were using the app, called Red Dot, to source tips. Apple removed that app too
www.404media.co/google-calls...
👀
And don't miss the closing line of their announcement: "If you would like to bring your memory details over from a different AI tool or export your memory from Claude for backup or migration, you can follow these instructions." (link goes to: support.anthropic.com/en/articles/... )
Excited to share a draft of my Note—AI Procurement as Regulatory Reconnaissance—forthcoming in the Fordham Law Review. Inspired by @cary-coglianese.bsky.social, I contend that federal procurement offers a compelling information-forcing tool to inform AI regulation.
papers.ssrn.com/sol3/papers....
Thrilled to share that Unbundling AI Openness, my article with @alanrozenshtein.com and Parth Nobel is forthcoming in Wisconsin Law Review! It introduces a framework of "differential openness" to correct the oversimplification of AI as either "open vs. closed."
papers.ssrn.com/sol3/papers....
If the future of AI is personal, it should also be portable. Important piece from @mchrisriley.com for @techpolicypress.bsky.social
NEW: DOGE affiliate Chris Sweet has developed an AI tool that is being used to rapidly slash government regulations, according to details of a meeting reviewed by @wired.com. Scoop by me: www.wired.com/story/sweetr...
I filed a FOIA request with the FTC to get user complaints about ChatGPT.
In one case from Utah, a mother reports her son was experiencing a delusional breakdown and ChatGPT told him to stop taking his medication. The AI bot also told him that his parents were dangerous.
Fordham Law Professor Chinmayi Sharma (@chinmayisharma.bsky.social) and student Sam Adler '26 (@samadler.bsky.social) argue data brokers enable violence by selling people's information, and suggest that a data-deletion right should be enabled and enforced. via Lawfare (@lawfaremedia.org)
Excited to share a @lawfaremedia.org piece with @thomaskadri.bsky.social and @samadler.bsky.social that builds off our article Brokering Safety, forthcoming in @califlrev.bsky.social, that calls for an overdue conversation about how much we privilege data broker profits over human safety.
Appreciate the opportunity to write with @thomaskadri.bsky.social and @chinmayisharma.bsky.social for @lawfaremedia.org to call for a more expansive right to obscurity, building off our article—Brokering Safety—forthcoming in @califlrev.bsky.social
A new provocation from me, @samadler.bsky.social & @chinmayisharma.bsky.social to extend the proposal in our forthcoming Calif. L. Rev. (@califlrev.bsky.social) piece by letting *anyone* force data brokers to obscure info through a centralized process. It's time to call the 1st Amendment question!
The Agentic Executive has arrived . . .
www.governor.virginia.gov/newsroom/new...
23andMe didn’t own your DNA—it was bailed to them. In Bailing Out Biometrics (forthcoming, J. Tort Law), Elijah Gordon & I argue that biometric data deserves bailment protection. Allowing its breach and then selling it in bankruptcy isn’t just wrong—it’s illegal.
papers.ssrn.com/sol3/papers....
data brokers should not exist and it should be embarrassing every day a lawmaker doesn't try to control or destroy them
After a recent court order, OpenAI is now required to retain the very data many of its users believed to be most private. This introduces serious privacy risks, especially for vulnerable users like victims and survivors of domestic violence, Belle Torek writes.
Today's Lawfare Daily is a Fordham Law panel where @qjurecic.bsky.social, @josephcox.bsky.social, @orlylobel.bsky.social, Aziz Huq, and @jtlg.bsky.social discussed the role technology has played in supporting or undermining democracy.
Customs and Border Protection has swabbed the DNA of migrant children as young as 4, whose genetic data is uploaded to an FBI-run database that can be used to track them if they commit crimes in the future.
By @wired.com:
Check out the latest from the SSRN #blog which includes a selection of recent #research on #cybersecurity & data privacy.
Read more: http://spkl.io/63320ffily
#Academicsky #AcademicChatter #dataprivacy