🚨New working paper w/ the legendary @cbwlezien.bsky.social!
Lots of talk about inflation & voters' political views. But how well do voters actually understand info about inflation?
We find that most citizens conflate changes in rates w/ changes in prices. This has important consequences...
Posts by Scott Clifford
CALL FOR EDITOR -
Journal of Experimental Political Science (@jepsjournal.bsky.social) seeks an editor or editorial team from January 2027.
Find out how to apply - https://cup.org/4ae3Tul
cc @experimentsapsa.bsky.social
Deadline for proposals - April 15, 2026.
Probably doesn't matter, but B. You care more about effect on DV than mechanism.
Now more than ever we encounter each other's emotions about politics, on social media and in the news. When do we take strangers' emotional expression seriously, and when do we dismiss it as inappropriate or even insincere? You can find out now in our Cambridge Element!
a glimmer of glad tidings. my short book with @matthewhitt.bsky.social — Supremely Polarizing — is available FOR FREE. it nicely summarizes and extends our work on how partisanship shapes support for the Supreme Court. thank you for your attention to this matter.
New evidence on American support for political violence — from @scottclifford.bsky.social, @llopez.bsky.social and Lucas Lothamer.
(@johnsides.bsky.social @goodauth.bsky.social)
More, via Opinion Today:
opiniontoday.substack.com/p/260329
New working paper: Rethinking Misinformation Interventions. The field has spent years searching for the one intervention that will solve misinformation. This search is the wrong approach — and our disappointment says more about our expectations than our tools. (1/5)
osf.io/preprints/so...
New paper w/ @yamilrvelez.bsky.social! A lot of great research on political microtargeting discounts personalization: tailored ads (using AI or not) rarely beat a single-best message. We define two types of microtargeting, clarify when tailoring matters, & showcase a novel audio-based design.
abstract: While attempts to change Americans’ partisanship via persuasive treatments largely fail, partisanship can and does change over time. In this paper, the authors first confirm, via survey and field experiments, that typical campaign messaging in the United States does not budge partisanship. The authors then present experiments in which participants encounter extraordinary hypothetical scenarios (e.g. one party causes economic collapse) before reporting what their partisanship would be under such circumstances. Twelve percent of partisans imagine switching parties in the pro-out-party hypothetical conditions, compared with 5% in the control hypotheticals in which the status quo persists, a seven-percentage-point difference (SE 1.5 points). These hypothetical shifts are on par with the largest changes in American macropartisanship ever recorded. While the act of ruminating on hypothetical scenarios is not followed by changes in partisanship measured post-treatment, the evidence suggests that extraordinary world events may be able to shift partisan affiliation.
New paper with Don Green and @ethanvporter.bsky.social in the QJPS. After much deliberation, we went with a title that just states the result. 📝
journal: www.emerald.com/qjps/article...
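The headline comparison in the abstract can be sketched with a quick two-proportion calculation. The group sizes below are hypothetical (the abstract does not report them); the point is only to show how the 12% vs. 5% switching rates yield a ~7 pp difference with a standard error in the reported ballpark.

```python
# Sketch: difference in switching rates and its standard error.
# Group sizes are ASSUMED for illustration; SE scales with 1/sqrt(n).
import math

p_treat, p_ctrl = 0.12, 0.05   # switching rates from the abstract
n_treat = n_ctrl = 900         # hypothetical group sizes

diff = p_treat - p_ctrl
se = math.sqrt(p_treat * (1 - p_treat) / n_treat
               + p_ctrl * (1 - p_ctrl) / n_ctrl)

print(f"difference: {diff:.2f}, SE: {se:.3f}")
```

With larger or smaller assumed samples the SE moves accordingly, which is why the abstract's SE of 1.5 points pins down roughly how big the experiment must have been.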
Francisco was a great scholar and a great guy who will always be missed. I am really happy that one of the fellowships is named after him.
A One-Page Primer on: Statistical Power from @carlislerainey.bsky.social www.carlislerainey.com/blog/2025-08...
New short paper w @jkalla.bsky.social !
Candidates gain from moderation, but less than many theories expect.
Many conclude voters must not care about issues.
This is wrong. Small *average* effects mask large effects on specific issues & are consistent with widespread issue-based voting 🧵
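The masking point in this thread can be illustrated with toy numbers (not from the paper): a large effect concentrated among voters who prioritize an issue averages out to a small overall effect once it is pooled with unaffected voters.

```python
# Sketch (illustrative numbers only): a small *average* effect of
# candidate moderation can coexist with a large effect among the
# subset of voters who care about a specific issue.
groups = {
    "issue voters (20% of sample)": (0.20, 0.15),  # (share, effect in pp)
    "everyone else (80%)":          (0.80, 0.00),
}

# The pooled average is the share-weighted sum of group effects.
avg = sum(share * effect for share, effect in groups.values())
print(f"average effect: {avg:.2f}")
```

Here the average effect is just 3 pp even though issue voters move by 15 pp, which is consistent with widespread issue-based voting despite modest pooled estimates.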
Strength In Numbers is looking for a smart part-time survey research assistant (or a few)
www.gelliottmorris.com/p/strength-i...
Ever wondered about whether to use transition statements in your surveys? Trent Ollerenshaw and I have written a blog post for the #YouGov Methodology Matters series! Read it here: yougov.com/en-us/articl...
Not sure offhand and don't have the data handy, but I would bet that trolls are mostly young men (assuming we trust them to report their demos). But I doubt that would explain away associations between violence and age and gender
Thanks! The vast majority are choosing outparty leaders, though it's more diffuse when it's unclear who the party leader is. Most other actors are politicians too, but some other elites get mentioned.
For those interested in measuring political violence, check out Lily and Nathan's new review paper below. See also my forthcoming paper at POQ (w/ @llopez.bsky.social and Lucas Lothamer) introducing our own measure scottaclifford.com/wp-content/u...
Thanks!
Looks interesting! Can you share a link to an ungated version?
After nearly a decade measuring American public support for political violence, @nathankalmoe.bsky.social and I have published a somewhat comprehensive guide to measuring these attitudes. This includes historical comparisons and responses to common critiques. doi.org/10.1093/poq/...
My take on the partisan expressive responding literature is now in print. Open access: doi.org/10.1017/S000...
My job market paper is now available as a preprint! 🚨
Using a conjoint survey experiment, I test how state-level immigrant integration policy features affect perceptions of fairness and support.
3 key points, the big takeaway, the link, and a bonus below⬇️🧵
The research and analytics team at @statesunited.org is searching for a researcher to support our survey research program. Come join our fully remote team! Great mission, great pay, and excellent benefits.
recruiting.paylocity.com/recruiting/j...
New w/@scottclifford.bsky.social.
Lots of work uses agree-disagree scales, and a lit review shows these are 1) frequently measured in only one direction (agree = higher trait) and 2) correlated with each other.
This raises potentially big issues for the conclusions drawn from them.
link.springer.com/article/10.1...
🚨 New paper out at @ajpseditor.bsky.social 🚨
Do the public hold meaningful attitudes? Using the case of abortion policy preferences, we provide strong evidence that policy preferences can be coherent, stable over time, and causally explain vote choice.
doi.org/10.1111/ajps...
Very excited to see this out at @bjpols.bsky.social! In this article, I show that contemporary political news coverage makes it challenging for readers to learn information that is helpful for democratic accountability, even for very politically engaged audiences.
A brief summary:
Nick Vivyan, Chris Hanretty (@chanret.bsky.social) and I have a new book out: “Idiosyncratic Issue Opinion and Political Choice”. The core of the book is making the argument that citizens’ views about political issues neither reduce to an ideological orientation nor to a lack of substance. (1/10)
🚨📄 New paper (conditionally accepted at @thejop.bsky.social):
We test whether social desirability bias actually distorts answers in online surveys.
Short version:
It mostly doesn’t.
w. @timallinger.bsky.social @kristianvsf.bsky.social @morganlcj.bsky.social
URL: osf.io/preprints/os...
[Image: a graph showing JEPS has less selection on significance than other journals]
When we look across journals, we see the same patterns repeated. The main exception is the Journal of Experimental Political Science, which has the highest rate of null-only reporting and lowest rate of rejection-only reporting. Kudos to them.
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
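The back-of-the-envelope logic behind "one to two orders of magnitude" can be sketched as an odds comparison. The pre-publication null share below is an assumption (the paper calibrates a range); the published shares come from the abstract.

```python
# Sketch: implied publication odds against null results.
# The share of *conducted* analyses yielding nulls is ASSUMED;
# the published shares are taken from the paper's abstract.
null_share_produced = 0.50   # assumption: half of analyses yield nulls
null_share_published = 0.02  # <2% of abstracts report null-only results
sig_share_published = 0.90   # >90% highlight significant results

# Odds of a null vs. a significant result, before and after the filter.
odds_before = null_share_produced / (1 - null_share_produced)
odds_after = null_share_published / sig_share_published

filter_ratio = odds_before / odds_after
print(f"significant results ~{filter_ratio:.0f}x more likely to be published")
```

Under this assumption the filter favors significant results by a factor of roughly 45; more pessimistic assumptions about how many conducted analyses are null push the ratio toward two orders of magnitude.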