
Posts by Keh-Kuan Sun

Compare with @dacemoglumit.bsky.social et al. (2026), who propose garbling (degrading AI accuracy). Our decomposition points to different instruments: encouraging public sharing of AI solutions (logging) offsets the flow margin while preserving AI capability; offsetting the resolution margin requires user engagement.

2 weeks ago 0 0 0 0

The model yields a discriminating prediction: a decline in posted volume alone, with stable or rising resolution, indicates private diversion; if posted volume and resolution conditional on posting both decline, the answerer pool is thinning. The two call for different fixes.

2 weeks ago 0 0 1 0
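The diagnostic above can be sketched in code. A minimal illustration only; the function name and the use of signed changes (e.g. log differences) as inputs are my assumptions, not the paper's:

```python
# Hedged sketch of the post's diagnostic: classify which margin is operating
# from two observable trends -- the change in posted question volume and the
# change in resolution conditional on posting.
def diagnose(d_volume, d_resolution):
    """d_volume, d_resolution: signed changes (e.g. log differences)."""
    if d_volume < 0 and d_resolution >= 0:
        return "flow margin: private diversion of easy questions"
    if d_volume < 0 and d_resolution < 0:
        return "resolution margin: answerer pool thinning"
    return "no decline on the flow margin"

print(diagnose(-0.2, 0.05))   # volume down, resolution stable/rising
print(diagnose(-0.2, -0.10))  # both down
```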

This started from @joshgans.bsky.social's insight that when AI diverts easy questions, the remaining pool becomes more valuable. That holds statically. But as AI raises answerers' outside option, they leave, and the resulting congestion can overwhelm the composition improvement.

2 weeks ago 0 0 1 0

The two margins represent different shocks. The flow margin is a demand-side shock: AI resolves problems privately before they ever reach the platform. The resolution margin is a supply-side shock: AI raises the return to working alone, contributors exit, and posted questions go unanswered.

2 weeks ago 0 0 1 0
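A toy illustration of why both margins matter for the archive (my own sketch, not the paper's model): the archive grows only with questions that are both posted and resolved, so posted flow (demand side) and the resolution rate (supply side) enter multiplicatively.

```python
# Minimal toy sketch: cumulative public knowledge is posted flow f times
# resolution rate r, per period. All parameter values are illustrative.
def archive_growth(f, r, periods=10):
    """Cumulative posted-and-resolved answers after `periods` steps."""
    return sum(f * r for _ in range(periods))

baseline = archive_growth(f=100, r=0.8)           # no AI shock
flow_shock = archive_growth(f=60, r=0.8)          # AI diverts questions privately
resolution_shock = archive_growth(f=100, r=0.48)  # answerers exit the platform
both = archive_growth(f=60, r=0.48)               # both margins operate

# Either shock alone can produce the same archive shortfall, which is why the
# observable pair (volume, conditional resolution) is needed to tell them apart.
print(baseline, flow_shock, resolution_shock, both)
```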
When AI Improves Answers but Slows Knowledge Creation: Matching and Dynamic Knowledge Creation in Digital Public Goods
Generative AI helps users solve problems more efficiently, but without leaving a public trace. Fewer discussions and solutions reach public platforms, and the archives that future problem-solvers depe...

New paper: AI helps users solve problems faster, but the answers stay private. Fewer questions reach public platforms, and the archives future users depend on shrink. I decompose this into two margins: the flow of questions posted vs. resolution conditional on posting.
arxiv.org/abs/2604.00468

2 weeks ago 2 0 1 0

We axiomatize a representation that nests EU, implies context-dependent independence, and is unique up to affine transformation. Independence, it turns out, was not a sufficient norm on its own.

2 weeks ago 1 0 0 0

In our model's language, shifting the worst outcome changes the context, i.e., how likely disappointment is; that changes the utility function, which flips the ranking. Within any fixed context, independence still holds; violations arise only across contexts.

2 weeks ago 1 0 1 0
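A hedged numerical sketch of the mechanism. The context summary (the lottery's worst outcome), the curvature values, and the power-utility form are all my hypothetical choices, not the paper's representation: within each context the evaluation is plain expected utility, yet the ranking can flip when a common-outcome change moves both lotteries into a new context.

```python
# Toy sketch (NOT the paper's representation): utility curvature depends on a
# "context" summary of the lottery -- here, its worst outcome.
def context(lottery):
    return min(x for x, _ in lottery)  # worst outcome as the context summary

def u(x, c):
    # Hypothetical: with a $0 floor, disappointment looms and the agent is
    # risk-averse; with a $90 floor, the agent is mildly risk-tolerant.
    alpha = 0.5 if c < 90 else 1.1
    return x ** alpha

def ceu(lottery):
    """Expected utility under the context-dependent utility function."""
    c = context(lottery)
    return sum(p * u(x, c) for x, p in lottery)

A1 = [(0, 0.9), (100, 0.05), (200, 0.05)]
B1 = [(0, 0.9), (150, 0.1)]
A2 = [(90, 0.9), (100, 0.05), (200, 0.05)]  # $0 swapped for $90 in both
B2 = [(90, 0.9), (150, 0.1)]

# Within a fixed context the model is EU, so independence holds there; across
# contexts the ranking reverses.
print(ceu(A1) > ceu(B1), ceu(A2) > ceu(B2))
```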

A simple example: ($0, 0.9; $100, 0.05; $200, 0.05) vs. ($0, 0.9; $150, 0.1). Now swap $0 for $90 in both. People reverse their choices. It looks like Allais, but the probabilities and the outcome ranks are unchanged, so no rank-dependent model can generate this.

2 weeks ago 1 0 1 0
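To see why no fixed-utility EU model can generate the reversal: swapping the common $0 outcome for $90 shifts both lotteries' expected utility by the same constant, 0.9·(u(90) − u(0)), so the EU gap between them is unchanged. A quick check with an arbitrary concave u (the square root is my illustrative choice; the argument holds for any u):

```python
# Under expected utility with ANY fixed utility function u, a change to a
# common outcome adds the same constant to both lotteries' EU, so the
# ranking cannot reverse.
def eu(lottery, u):
    """Expected utility of [(outcome, probability), ...]."""
    return sum(p * u(x) for x, p in lottery)

A1 = [(0, 0.9), (100, 0.05), (200, 0.05)]
B1 = [(0, 0.9), (150, 0.1)]
A2 = [(90, 0.9), (100, 0.05), (200, 0.05)]  # $0 swapped for $90 in both
B2 = [(90, 0.9), (150, 0.1)]

u = lambda x: x ** 0.5  # any concave u; the cancellation is u-free
gap1 = eu(A1, u) - eu(B1, u)
gap2 = eu(A2, u) - eu(B2, u)
# Both gaps shift by 0.9 * (u(90) - u(0)), which cancels in the difference,
# so gap1 == gap2 and the sign (the ranking) is preserved.
print(gap1, gap2)
```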
Non-Allais Paradox and Context-Dependent Risk Attitudes
We provide and axiomatize a representation for preferences over lotteries that generalizes the expected utility model. Since the representation uses different utility functions to evaluate different l...

The independence axiom is what makes expected utility work: normatively clean, mathematically elegant. The problem? People violate it systematically, and we've known this since 1953. Many non-EU models relax it through probability weighting. We take a different route. arxiv.org/abs/2411.13823

2 weeks ago 3 0 1 0