And because disclosure doesn't have to be all-or-nothing (it can be staged, partial, or mediated through auditors), the domain of genuinely justifiable opacity turns out to be much narrower than secrecy claims suggest.
Posts by Ignacio Cofone
They serve a second function: they reveal whether the decision-maker aligns the algorithm with the public interest or uses opacity to shield self-serving design choices. Where those patterns suggest misalignment, the paper proposes rebuttable presumptions favoring disclosure.
The paper develops a test for when disclosure obligations should apply even when gaming or trade secrecy concerns exist. Courts and regulators already look at error patterns and outcome distributions across groups, but typically as evidence of bias.
We started this project ~6 years ago, after "Strategic Games & Algorithmic Secrecy." Gaming & trade secrecy are standard justifications for AI opacity, but they mask self-serving behavior by decision-makers who benefit from opacity for reasons that have nothing to do with either.
Very glad my paper with Katherine Strandburg, "Algorithmic Opacity as a Principal-Agent Problem," is now forthcoming in the NYU Journal of Intellectual Property: papers.ssrn.com/sol3/papers....
Two new articles by ISP fellow @ignaciocofone.bsky.social :
The Privacy Paradox Is a Misnomer: Data Under Structural Uncertainty: papers.ssrn.com/sol3/papers....
-and-
Consent, Design, and Deceit: A Bottom-up Proposal for Regulating Dark Patterns:
Link: ssrn.com/abstract=569...
Really glad to see this piece published. Stav does something the control debate in privacy law needed: she disaggregates the critiques, showing they operate at different levels & that the relevant arguments differ. Highly recommend it to anyone with views for or against control/consent in privacy.
The focus is on how Privacy Law actually works (and why it’s a field): how consent operates across contexts, how regimes like PIPEDA relate to public and private law, and how similar problems arise across domains.
emond.ca/Store/Books/...
Thrilled that Canadian Privacy Law: Cases and Comparative Materials is now published by Emond. It comes out of six years of teaching Privacy Law at McGill and it's the first casebook to cover Canadian Privacy Law as a whole, integrating torts, statutes, criminal law, and constitutional law
Thanks for sharing them!
At this week’s Ideas Lunch, we were delighted to host Prof. @ignaciocofone.bsky.social and Prof. Katherine Strandburg for a fascinating talk on “Algorithmic Opacity as a Principal-Agent Problem.” Thank you both for such a thoughtful discussion.
Looking forward to joining you tomorrow!
The UK and France’s response to the Grok deepfake case suggests that effective AI regulation may not come from comprehensive AI-specific frameworks, but from the proper application of existing harm-based approaches to new capabilities, writes @ignaciocofone.bsky.social:
Thank you!
Can’t wait to read this!
For our part, the @lco-cdo.bsky.social 2024 Consumer Protection Project recommended Ontario regulate consumer notice to include "market contexts": plain-language descriptions of systems & real risks, i.e. "structural uncertainties." See pp. 33-36: www.lco-cdo.org/wp-content/u...
TLDR: people agree to data practices while valuing privacy because risk is indeterminate at the time of agreement
Regulators: treat consent as contingent on uncertainty reduction, with notices that focus on risks rather than technical details. Shift focus to redesigning the decision environment: away from attention to default settings and toward whether that environment makes harms legible.
This shifts the problem from self-control to information conditions, which operate as a market failure. Because structural uncertainty drives agreement contrary to preferences, good laws reduce uncertainty and keep choices flexible.
Happy to share “The Privacy Paradox is a Misnomer: Data Under Structural Uncertainty” (GTLJ 2026) which empirically shows uncertainty about downstream data uses and consequences, rather than unstable or contradictory preferences, drives the so-called privacy paradox papers.ssrn.com/sol3/papers....
ISP Fellow @ignaciocofone.bsky.social publishes in Iowa Law Review about "Taxonomizing Synthetic Data for Law"
papers.ssrn.com/sol3/papers....
Norway’s Court of Appeal just upheld the historic fine against Grindr for unlawfully sharing its users’ data with third parties. It’s an important step in treating inferences as personal data (app-level identifiers as processing that reveals sexual orientation). www.datatilsynet.no/contentasset...
Some implications: privacy risks include both leakage and group-based inferences; data quality depends on valid assumptions; competition effects vary by type. Regulators should check the ground-truth claims that synthetic data encodes when differentiating among types
Ground-truth taxonomy based on G&L: (1) transformed data modifies collected data for an end use; (2) augmented data adds to collected data from modeled structure, often to improve fidelity; (3) simulated data is generated from background models rather than from records.
Happy to share our new piece with Katherine Strandburg & Nicholas Tilmes, “Taxonomizing Synthetic Data for Law.” It engages Gal & Lynskey’s excellent article & centers the role of ground-truth assumptions. The key q is how creation methods encode claims about the world ssrn.com/abstract=555...
As many know, @bjard.bsky.social and I have been drafting a Technology Law coursebook for a few years. We've used it to teach classes at three institutions, including Yale Law School, and others have used chapters in their techlaw classes.
We're excited to share the current version more broadly!
Always read @ignaciocofone.bsky.social, including accidental legal history.
Glad to see this chapter published. I always found the history of law quite interesting, and I never thought I would accidentally do it by writing, in 2023-2024, a chapter focused on two now-dead pieces of legislation! academic.oup.com/edited-volum...
What a nice surprise to find this review of The Privacy Fallacy in the Society for Technical Communication by Donald Riccomini. Thankful to the reviewer for engaging the book and the claim that we need a new type of accountability www.jstor.org/stable/27373...
My article "Protecting Consumers in a Post-Consent World," about how we can broaden antitrust and consumer protection to deal with the fact that we have abandoned notice and consent in contract law, is now published in the Stanford Law Review Online.
www.stanfordlawreview.org/online/prote...
Like other recent news of deaths by suicide, this shows that CSR in AI requires building products that avoid fostering addiction and are less parasocial. Reducing sycophancy and downplaying the illusion of personality mitigate the risks of unhealthy AI reliance.
www.theglobeandmail.com/business/com...