Compare with @dacemoglumit.bsky.social et al. (2026), who propose garbling (degrading AI accuracy). Our decomposition points to different instruments: encouraging public sharing of AI solutions (logging) offsets the flow margin while preserving AI capability; offsetting the resolution margin requires re-engaging answerers.
Posts by Keh-Kuan Sun
The model yields a discriminating prediction: a volume decline alone, with stable or rising resolution, indicates private diversion; if posted volume and resolution conditional on posting both decline, the answerer pool is thinning. The two patterns call for different fixes.
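That discriminating prediction can be sketched as a simple classification rule. This is my own illustration, not code from the paper; the function name and thresholds are hypothetical.

```python
# Hypothetical sketch of the discriminating prediction: classify which margin
# is hit from trends in posted volume and in resolution conditional on posting.
# Function name and threshold are illustrative, not from the paper.

def classify_shock(d_volume: float, d_resolution: float, eps: float = 0.0) -> str:
    """d_volume / d_resolution: changes in posted-question volume and in the
    resolution rate conditional on posting (e.g. year-over-year deltas)."""
    if d_volume < -eps and d_resolution >= -eps:
        # Volume falls but posted questions still get answered:
        # questions are being diverted privately to AI (flow margin).
        return "private diversion (flow margin)"
    if d_volume < -eps and d_resolution < -eps:
        # Both fall: contributors are exiting and the answerer pool thins
        # (resolution margin).
        return "answerer-pool thinning (resolution margin)"
    return "no decline detected"

print(classify_shock(-0.2, +0.05))  # flow-margin pattern
print(classify_shock(-0.2, -0.15))  # resolution-margin pattern
```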
This started from @joshgans.bsky.social's insight that when AI diverts easy questions, the remaining pool becomes more valuable. That holds statically. But AI also raises answerers' outside option: they leave, and the resulting congestion can overwhelm the composition improvement.
The two margins reflect different shocks. The flow margin is a demand shock: AI resolves problems privately before they reach the platform. The resolution margin is a supply shock: AI raises the return to working alone, contributors exit, and posted questions go unanswered.
New paper: AI helps users solve problems faster, but the answers stay private. Fewer questions reach the public platform, and the archives future users depend on shrink. I decompose this into two margins: the flow of questions posted vs. resolution conditional on posting.
arxiv.org/abs/2604.00468
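The accounting behind the decomposition can be illustrated with a toy identity (my illustration, not the paper's model): public archive additions = questions posted × resolution rate conditional on posting, so the same archive shrinkage can come from either margin.

```python
# Toy identity: archive additions = posted volume x conditional resolution
# rate. Numbers are made up for illustration.

def archive_additions(posted: float, resolution_rate: float) -> float:
    return posted * resolution_rate

baseline = archive_additions(1000, 0.5)          # 500 new public answers
flow_shock = archive_additions(700, 0.5)         # AI diverts questions privately
resolution_shock = archive_additions(1000, 0.35) # answerers exit the platform

print(baseline, flow_shock, resolution_shock)
# Both shocks produce the same archive decline, which is why the conditional
# resolution rate (not volume alone) is needed to tell them apart.
assert abs(flow_shock - resolution_shock) < 1e-9
```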
We axiomatize a representation that nests EU, implies context-dependent independence, and is unique up to positive affine transformation. After all, independence alone was not a sufficient norm.
In the model's language, shifting the worst outcome changes the context: how likely disappointment is. That changes the utility function, which flips the ranking. Within any fixed context, independence still holds; violations arise only across contexts.
A simple example: ($0, 0.9; $100, 0.05; $200, 0.05) vs. ($0, 0.9; $150, 0.1). Now swap the $0 for $90 in both. People reverse their choices. It looks like Allais, but the probabilities and the ranks are unchanged, so no rank-dependent model can generate this.
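A quick check of why no EU (or rank-dependent) model can generate the reversal: within each pair, the lotteries share the worst outcome with probability 0.9, so that term cancels from the expected-utility difference. The utility function below is an arbitrary increasing one, chosen only for illustration.

```python
# The common worst outcome cancels from the EU difference, so swapping
# $0 for $90 cannot flip the EU ranking. Any increasing u works.
import math

def eu(lottery, u):
    """Expected utility of a lottery given as [(outcome, probability), ...]."""
    return sum(p * u(x) for x, p in lottery)

u = lambda x: math.sqrt(x + 1)  # arbitrary increasing utility

for low in (0, 90):  # common worst outcome, before and after the swap
    A = [(low, 0.9), (100, 0.05), (200, 0.05)]
    B = [(low, 0.9), (150, 0.1)]
    print(low, round(eu(A, u) - eu(B, u), 6))
# The printed EU difference is identical for low=0 and low=90: the 0.9*u(low)
# term appears in both lotteries and cancels, so EU preferences cannot flip.
# Rank-dependent weights are also unchanged (same probabilities, same ranks).
```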
The independence axiom is what makes expected utility work: normatively clean, mathematically elegant. Problem? People violate it systematically, and we've known this since 1953. Many non-EU models relax it through probability weighting. We take a different route. arxiv.org/abs/2411.13823