Thanks for your interest in the work! One clarification: we don't use the Amazon star ratings as a measure of book quality. Rather, we use the _number of ratings_ a book has received as a proxy for usage/sales. The # of ratings and sales are pretty highly correlated, as we show in the paper.
Posts by Joel Waldfogel
Bottom line: the enormous growth in new books brings mostly junk, but also some works that attract usage, even if they aren't quite Tolstoy, Grisham, or E.L. James. So it's a positive for consumers. n/n
What matters for welfare is the stuff near the head of the distribution. And the enormous growth in the total number of new titles delivers growth in the absolute number of new books above modestly-high usage thresholds, although not at the very top. 5/n
We compare categories of books with large vs. small growth, on the assumption that LLM usage drives the growth. And average "quality," measured by the number of ratings books have received as of early 2026, falls for books born into larger genre/month birth cohorts. 4/n
More is not necessarily better if it’s all junk. So, how good/useful/purchased are the new books? Hard to say exactly, of course. But we can look at the number of ratings that each book eventually gets, which is strongly correlated with sales. 3/n
First, they sure have goosed the quantity of new books, with a tripling in the number of new books appearing at Amazon each month. 2/n
Can LLMs help humans to write books? TLDR: sort of.
@imkereimers.bsky.social and I have a new paper with a longer answer. 1/n www.nber.org/papers/w34777
I had a lot of fun discussing the growth of female authorship on the Today, Explained podcast. open.spotify.com/episode/2XGc...
Our bottom line: there is a lot of regret and missed opportunity in current differentiated product consumption (and not just with gifts).
...and the relationship between individuals’ tendency to own games and the games’ average playtimes supports this assumption. We do a bunch of other stuff to explore robustness.
Nerdy caveats: Our users choose among 100 games; we make the bundle choice problem tractable by specifying utility as a function of hours of playtime and money spent. This presumes that marginal utilities of different games are proportional to the playtime they deliver.
Second, using a model of game bundle choice, we measure the welfare gain from full information as the amount of money one would have to take from fully informed consumers to bring them back down to the utility of their status quo choices. Full information would raise CS by 30 percent of baseline expenditure.
We develop a two-part approach to measuring the welfare gain from full information. First, full information effectively expands the budget set by letting consumers buy the games yielding the most playtime first.
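A toy sketch of the "buy the most playtime first" idea (this is hypothetical illustration only, not the paper's data, model, or code; it assumes consumers rank games by playtime per dollar, a greedy rule not specified in the post):

```python
# Toy sketch (not the paper's code): under full information, a consumer who
# values playtime buys games in descending playtime-per-dollar order until
# the budget runs out, which expands what a given budget can deliver.

def greedy_full_info(games, budget):
    """games: list of (name, price, hours). Returns (chosen names, total hours, total spent)."""
    chosen, hours, spent = [], 0.0, 0.0
    # Rank by playtime per dollar, best value first.
    for name, price, h in sorted(games, key=lambda g: g[2] / g[1], reverse=True):
        if spent + price <= budget:
            chosen.append(name)
            spent += price
            hours += h
    return chosen, hours, spent

# Hypothetical catalog: (title, price in dollars, eventual playtime in hours)
catalog = [("A", 60, 120), ("B", 30, 90), ("C", 20, 5), ("D", 40, 10)]
picks, hrs, spent = greedy_full_info(catalog, budget=100)
# picks == ["B", "A"]: 210 hours for 90 dollars; an uninformed consumer who
# happened to buy C and D would get 15 hours for 60 dollars.
```

The contrast between the informed and uninformed baskets is the flavor of the budget-set expansion the thread describes.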
We have unusual data on post-purchase usage providing big hints about regret: Users could have achieved 90 percent of their status quo playtime with 60 percent less expenditure.
We don’t know much about this, for two reasons: 1) we rarely see post-purchase usage, and 2) welfare analysis proceeds from revealed preference: if you paid 10 dollars for something, it must have been worth at least 10 dollars to you.
With differentiated products and heterogeneous consumers, it may be hard for choices to deliver maximal welfare. We might regret choices we make, and we might miss out on products we would have enjoyed. @imkereimers.bsky.social, Christoph Riedl, and I explore this www.nber.org/papers/w33401
The EU's Digital Markets Act has outlawed self-preferencing by big platforms. Shortly after Amazon's designation as a "gatekeeper" in 9/23, the search rank advantage for Amazon-brand products fell a bunch. Working paper: www.nber.org/papers/w32299