1/ Why do inferior but popular things remain popular?
Excited to share our new paper with @pantelispa.bsky.social, @glemens.bsky.social & @arnoutvanderijt.bsky.social
"The marginal majority effect: When social influence produces lock-in"
www.science.org/doi/epdf/10....
Posts by Gael Le Mens
📢 Publication alert: Governments are investing intensively in digitalization, but we know little about who supports this policy. We investigate this question in this new article with Alex Kuo, @retobuergisser.bsky.social, and @siljahausermann.bsky.social, published in JEPP
doi.org/10.1080/1350... 🧵
Currently in FirstView: “Positioning Political Texts with Large Language Models by Asking and Averaging.” @glemens.bsky.social and @ainagallego.bsky.social use a variety of LLMs to read political texts and position political actors in policy and ideological space.
“Decisions Under Risk Are Decisions Under Complexity: Comment”
A new working paper with Daniel Banki, @urisohn.bsky.social, and Robert Walatka, just submitted to SSRN.
The paper is a comment on Ryan Oprea's recent AER paper.
The paper is still processing, but you, my friends, get early entry.
papers.ssrn.com/sol3/papers....
Can DeepSeek accurately scale political text? @glemens.bsky.social just replicated the analyses of our joint Political Analysis paper with DeepSeek and with the newest version of Llama (Llama 3.3 70B). Overall, DeepSeekV3 performs at a level similar to Llama 3.3, but definitely not better. 1/5
Thank you for the pointer! I was not aware of this interesting paper. It was probably developed concurrently with ours. Though the presentation of ideas is different, I was happy to see that the overall perspective the authors advance is consistent with how we think of LLMs as measurement tools.
Note also that we advocate the use of open models that can be run locally - this is crucial for ensuring replicability.
Bart, we do not promise unbiased results... We provide evidence of high accuracy with respect to a benchmark based on human coding. As pointed out in another response, Egami et al.'s DSL approach can improve on our approach. The field is moving fast (our paper was written a year ago).
But clearly, one can (and should) use the DSL technique on the position estimates produced by our approach, since the high accuracy we obtain does not guarantee the absence of systematic bias with respect to human coding.
Thanks Nicolai for the pointer! Indeed Egami et al.'s technique came out after we wrote the core of our piece.
📢 Our new paper in Political Analysis explains how to use LLMs like GPT-4o, Llama or Mistral to estimate the ideological and policy position of political texts. Our approach is fast, reliable, cost-effective and reproducible and works with texts written in different languages 1/7 cup.org/4axBEXo
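The paper's title describes the core loop: ask an LLM where a text sits on a scale, repeat, and average the answers. A minimal sketch of that "ask and average" idea, with a stub standing in for a real GPT-4o/Llama/Mistral call — the function names, the 0–10 prompt, and the number parsing below are illustrative assumptions, not the paper's actual implementation:

```python
import re
import statistics

def ask_position(text, make_prompt, ask_llm, n_calls=5):
    """Query the LLM n_calls times for a numeric position and average the replies.

    `ask_llm` is any callable taking a prompt string and returning the model's
    reply as a string; swap in a real API call in practice.
    """
    scores = []
    for _ in range(n_calls):
        reply = ask_llm(make_prompt(text))
        match = re.search(r"-?\d+(\.\d+)?", reply)  # pull the first number out of the reply
        if match:
            scores.append(float(match.group()))
    return statistics.mean(scores) if scores else None

def make_prompt(text):
    # Hypothetical prompt wording, for illustration only.
    return f"On a 0 (left) to 10 (right) scale, where does this text stand?\n{text}"

def fake_llm(prompt):
    # Stub LLM so the sketch runs offline; replace with a real model call.
    return "The text's left-right position is 7 on a 0-10 scale."

position = ask_position("We must cut taxes and regulation.", make_prompt, fake_llm)
```

Averaging over repeated calls smooths out the variability of individual sampled replies; the paper's actual prompts, scales, and models differ from this sketch.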