
Posts by Ignacio Cofone

And because disclosure doesn't have to be all-or-nothing (it can be staged, partial, or mediated through auditors), the domain of genuinely justifiable opacity turns out to be much narrower than secrecy claims suggest.

1 week ago

They serve a second function: they reveal whether the decision-maker aligns the algorithm with the public interest or uses opacity to shield self-serving design choices. Where those patterns suggest misalignment, the paper proposes rebuttable presumptions favoring disclosure.

1 week ago

The paper develops a test for when disclosure obligations should apply even when gaming or trade secrecy concerns exist. Courts and regulators already look at error patterns and outcome distributions across groups, but typically as evidence of bias.

1 week ago

We started this project ~6 years ago after "Strategic Games & Algorithmic Secrecy". Gaming & trade secrecy are standard justifications for AI opacity, but they mask self-serving behavior by decision-makers who benefit from opacity for reasons that have nothing to do with either.

1 week ago
Algorithmic Opacity as a Principal-Agent Problem Principal-agent problems are central to debates over algorithmic transparency, but have been underexplored. Concerns about gaming and trade secrecy, fr

Very glad my paper with Katherine Strandburg, "Algorithmic Opacity as a Principal-Agent Problem," is now forthcoming in the NYU Journal of Intellectual Property: papers.ssrn.com/sol3/papers....

1 week ago
The Privacy Paradox Is A Misnomer: Data Under Structural Uncertainty The infamous privacy paradox refers to the apparent inconsistency between people's stated concern for privacy and their readiness to disclose personal informati

Two new articles by ISP fellow @ignaciocofone.bsky.social:

The Privacy Paradox Is a Misnomer: Data Under Structural Uncertainty: papers.ssrn.com/sol3/papers....

-and-

Consent, Design, and Deceit: A Bottom-up Proposal for Regulating Dark Patterns:
Link: ssrn.com/abstract=569...

3 weeks ago

Really glad to see this piece published. Stav does something the control debate in privacy law needed: she disaggregates the critiques, showing they operate at different levels and that the relevant arguments differ. Highly recommend it to anyone with views for or against control/consent in privacy.

2 weeks ago
Canadian Privacy Law: Cases and Comparative Materials | Emond Publishing Browse Emond's full collection of books for Canadian law school, college, and university programs, as well as legal practice.

The focus is on how Privacy Law actually works (and why it’s a field): how consent operates across contexts, how regimes like PIPEDA relate to public and private law, and how similar problems arise across domains.
emond.ca/Store/Books/...

2 weeks ago

Thrilled that Canadian Privacy Law: Cases and Comparative Materials is now published by Emond. It comes out of six years of teaching Privacy Law at McGill, and it's the first casebook to cover Canadian privacy law as a whole, integrating torts, statutes, criminal law, and constitutional law.

2 weeks ago

Thanks for sharing them!

3 weeks ago

At this week’s Ideas Lunch, we were delighted to host Prof. @ignaciocofone.bsky.social and Prof. Katherine Strandburg for a fascinating talk on “Algorithmic Opacity as a Principal-Agent Problem.” Thank you both for such a thoughtful discussion.

1 month ago

Looking forward to joining you tomorrow!

1 month ago
Grok, Deepfakes, and the Collapse of the Content/Capability Distinction The Grok case suggests that effective AI regulation may come not from comprehensive AI-specific frameworks, but from applying existing harm-based laws to new capabilities.

The UK and France’s response to the Grok deepfake case suggests that effective AI regulation may not come from comprehensive AI-specific frameworks, but from the proper application of existing harm-based approaches to new capabilities, writes @ignaciocofone.bsky.social:

2 months ago

Thank you!

3 months ago

Can’t wait to read this!

For our part, the @lco-cdo.bsky.social 2024 Consumer Protection Project recommended Ontario regulate consumer notice to include "market contexts": plain language descriptions of systems & real risks, i.e., "structural uncertainties." See pp. 33-36: www.lco-cdo.org/wp-content/u...

3 months ago

TLDR: people agree to data practices while valuing privacy because risk is indeterminate at the time of agreement.

3 months ago

Regulators: treat consent as contingent on uncertainty reduction, with notices that focus on risks rather than technical details. Shift focus to redesigning the decision environment: away from default settings and towards whether the decision environment makes harms legible.

3 months ago

This shifts the problem from self-control to information conditions, which operate as a market failure. Because structural uncertainty drives agreement contrary to preferences, good laws reduce uncertainty and keep choices flexible.

3 months ago
The Privacy Paradox Is A Misnomer: Data Under Structural Uncertainty The infamous privacy paradox refers to the apparent inconsistency between people's stated concern for privacy and their readiness to disclose personal informati

Happy to share "The Privacy Paradox Is a Misnomer: Data Under Structural Uncertainty" (GTLJ 2026), which empirically shows that uncertainty about downstream data uses and consequences, rather than unstable or contradictory preferences, drives the so-called privacy paradox: papers.ssrn.com/sol3/papers....

3 months ago
Taxonomizing Synthetic Data for Law Synthetic data is increasingly important in data usage and AI design, creating novel legal and policy dilemmas. All too often, discussions of synthetic data tre

ISP Fellow @ignaciocofone.bsky.social publishes in Iowa Law Review about "Taxonomizing Synthetic Data for Law"

papers.ssrn.com/sol3/papers....

5 months ago

Norway’s Court of Appeal just upheld the historic fine against Grindr for unlawfully sharing its users’ data with third parties. It’s an important step in treating inferences as personal data (app-level identifiers as processing that reveals sexual orientation). www.datatilsynet.no/contentasset...

5 months ago

Some implications: privacy risks include both leakage and group-based inferences; data quality depends on valid assumptions; competition effects vary by type. Regulators should check the ground-truth claims that synthetic data encodes when differentiating among types.

6 months ago

Ground-truth taxonomy based on G&L: (1) transformed data modifies collected data for an end use; (2) augmented data adds to collected data from modeled structure, often to improve fidelity; (3) simulated data is generated from background models rather than records.

6 months ago
Taxonomizing Synthetic Data for Law Synthetic data is increasingly important in data usage and AI design, creating novel legal and policy dilemmas. All too often, discussions of synthetic data tre

Happy to share our new piece with Katherine Strandburg & Nicholas Tilmes, “Taxonomizing Synthetic Data for Law.” It engages Gal & Lynskey’s excellent article and centers the role of ground-truth assumptions. The key question is how creation methods encode claims about the world: ssrn.com/abstract=555...

6 months ago

As many know, @bjard.bsky.social and I have been drafting a Technology Law coursebook for a few years. We've used it to teach classes at three institutions, including Yale Law School, and others have used chapters in their techlaw classes.

We're excited to share the current version more broadly!

7 months ago

Always read @ignaciocofone.bsky.social, including accidental legal history.

7 months ago
Generative AI Regulation in the US and Canada Abstract: The US and Canada regulate generative AI in different ways for the public and private sectors. They both have federal frameworks that set AI-specif

Glad to see this chapter published. I always found legal history quite interesting, and I never thought I would accidentally do it by writing, in 2023-2024, a chapter focused on two now-dead pieces of legislation! academic.oup.com/edited-volum...

7 months ago

What a nice surprise to find this review of The Privacy Fallacy in the Society for Technical Communication by Donald Riccomini. Thankful to the reviewer for engaging with the book and with the claim that we need a new type of accountability. www.jstor.org/stable/27373...

7 months ago
Protecting Consumers in a Post-Consent World | Stanford Law Review In Charting a New Course on Digital Consumer Protection at the Federal Trade Commission, former FTC Chair Lina Khan and her co-authors Samuel Levine a

My article "Protecting Consumers in a Post-Consent World," about how we can broaden antitrust and consumer protection to deal with the fact that we have abandoned notice and consent in contract law, is now published in the Stanford Law Review Online.

www.stanfordlawreview.org/online/prote...

7 months ago
Opinion: Don’t hate ChatGPT-5. Your chatbot is not your friend The reaction to OpenAI’s new GPT-5 system shows the risks of making a chatbot too humanlike

Like other recent news of deaths by suicide, this shows that CSR in AI requires building products that avoid fostering addiction and are less parasocial. Reducing sycophancy and downplaying the illusion of personality mitigates the risks of unhealthy AI reliance.
www.theglobeandmail.com/business/com...

7 months ago