SCORE, a collaboration of 865 researchers, is now released as three papers in Nature, six preprints, and a lot of data (cos.io/score/). SCORE examined repeatability of findings from the social-behavioral sciences and tested whether human and automated methods could predict replicability.
Posts by Patrik Michaelsen
Psychology has a whole cottage industry in which people come up with some construct that is essentially "attitudes/beliefs/expectations/feelings about X", and then the central claim is that this construct is a super important determinant of future X outcomes.
Screenshot of the "Does that use a lot of energy?" online app
Hannah Ritchie has built a fun little tool where you can compare energy usage of various products and activities.
This is super helpful imho, because it's so hard to develop intuitions even just about the scales involved here.
hannahritchie.substack.com/p/does-that-...
Today the APA journal that published that meta-analysis rejected our commentary - primarily because our findings were not interesting enough. What the actual fuck!
We recently submitted a commentary on a very influential meta-analysis. We found that: 1) 40% of the relevant literature had not been identified because of a lazy search, 2) a few large-N included studies did not meet the stated inclusion criteria, and 3) almost all sig. moderator findings were wrong.
In 1975-1977, the Swedish Government carried out an official investigation into the future of electronic music. Where is this kind of leadership today, I ask?
If you set out to test a hypothesis, you should preregister it. If you deviate from a preregistration, report a table with all deviations, and evaluate the consequences for the validity and severity of the test. As a reviewer, ask for such a table!
online.ucpress.edu/collabra/art...
It's ironic to see a discipline care **so much** about unbiasedness (causal inference!) at the level of a single test but then have a research production system and culture that is basically a ferocious bias generation machine. This is not good.
It must be very hard to publish null results

Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
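The "one to two orders of magnitude" claim can be sanity-checked with a back-of-envelope odds-ratio calculation. This is my own hedged illustration, not the paper's calibrated model: given the significant share among published abstracts (94%) and an assumed significant share among all conducted studies, the implied publication odds ratio follows directly.

```python
def implied_selection_ratio(share_sig_published, share_sig_produced):
    """Odds ratio comparing how likely a significant result is to be
    published versus a null result, given the significant share among
    published papers and an assumed share among all conducted studies."""
    pub_odds = share_sig_published / (1.0 - share_sig_published)
    produced_odds = share_sig_produced / (1.0 - share_sig_produced)
    return pub_odds / produced_odds

# If half of all conducted tests were significant but 94% of published
# abstracts report rejections, significant results are roughly 16x more
# likely to enter the published record:
ratio = implied_selection_ratio(0.94, 0.50)
```

Assuming lower rates of true rejections among conducted studies pushes the ratio toward the upper end of the "one to two orders of magnitude" range; the function name and the simple odds-ratio framing are my own illustration.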
The Iowa Gambling Task is an extreme example of Jingle Fallacy and schmeasurement.
In 100 articles we found 244 different ways of scoring it; 177 were never reused. Correlations between them range from -.99 to .99.
At the same time, we show meta-analyses combine these results as if they’re equivalent.
Across 8 countries, large majorities back the #30x30 goal.
Support grows when all nations share protection duties, richer nations pay more, more countries join in, and “buying protection abroad” is barred. At home, people prefer nature-first siting and polluter-pays funding. https://bit.ly/4jALfRy
After a provocation from Todd Kashdan I wrote up some frustrations I have with "wellbeing science", specifically the idea that it was born with life satisfaction scales.
I think this belief is untrue and unhelpful.
profmarkfabian.substack.com/p/airing-my-...
We have reached a situation where (1) the time/resources spent by people applying for grant X often outweighs (2) the time/resources awarded.
For these grants, society loses net time/resources.
www.nature.com/articles/d41...
An email from Martin Peterson to university administrators.
Martin Peterson's creative response to being banned from teaching Plato (shared with his permission).
Related: “The Chrysalis Effect: How Ugly Initial Results Metamorphosize Into Beautiful Articles”
doi.org/10.1177/0149...
Environmental social science postdoc position in Gothenburg with highly recommended colleagues
Title, authors’ names, abstract, and keywords from a paper about public support for the global 30-by-30 biodiversity conservation targets based on a survey in eight countries
Achieving the global 30by30 #biodiversity conservation targets requires political compromises & navigating conflicts. @michaelsen.bsky.social et al. found strong public support for the targets in 8 countries, which suggests expansion of protected areas is politically feasible doi.org/10.1073/pnas...
Possibly of interest to:
@worldwildlife.org
@greenpeace.org
@greenpeace.eu
@aspca.org
@theclimatereality.bsky.social
@oxconservationsoc.bsky.social
@climatecentral.org
@wclnews.bsky.social
@lgspace.bsky.social
@naturebasedsols.bsky.social
@sierraclub.org
Possibly of interest to:
@ipbes.net
@unep.org
@unbiodiversity.bsky.social
@thegef.bsky.social
@society4conbio.bsky.social
@scbeurope.bsky.social
@biodivoxford.bsky.social
@nature.org
@science.nature.org
@globallf.bsky.social
@conservationorg.bsky.social
@protectparks.bsky.social
@wcs.org
A visual representation of a discrete choice experiment with results separated by country
A second experiment on domestic-level policy regimes shows similar, but somewhat more diverse, results across countries.
Results include a widespread preference for protected areas that prioritize nature values (even over social or economic ones), and a general dislike of funding PAs through general taxes
A visual representation of a discrete choice experiment with results separated by country
Experimentally, we find highly consistent policy preferences for international-level expansion regimes.
Results include widespread preferences for rich countries bearing higher costs, and for each country protecting 30% (rather than, e.g., allocating protection according to conservation benefits)
Nine histograms displaying support levels from individual countries
We find 30x30 support levels in the range of 80-90% in the Argentina, Brazil, India, Indonesia, South Africa, and Spain samples.
Swedish (66% in favor) and USA (71%) respondents show strong majority support, albeit at comparatively lower levels.
New: Strong global support for the 30x30 conservation target
*Data from 5 continents (N=12k) show 82% in support of 30x30
*2 experiments find highly consistent expansion policy preferences, incl. prioritization of nature and rich countries bearing higher costs
Out now OA in @pnas.org. Viz. below.
My own branch has replicated several times
^this is true
Hey! If social psychology could read they’d be very upset
Figure 1 of the paper
🚨New paper!🚨
Meta-analysis on 4M p-values across 240k psych articles: How has psychology changed since the replication crisis began? How is replicability linked to citations, impact factor, and university prestige? 🧵
Paper: journals.sagepub.com/doi/10.1177/...
Interactive: pbogdan.com/meganal
Haven't read this yet, but this seems very important for #ExpEcon folks. 22–27% of participants failed comprehension checks in the Dictator Game (DG) and Ultimatum Game (UG); in the Trust Game and Public Goods Game, that number hit 70% and 52%. doi.org/10.1016/j.je... (Note: saw this posted on another site, but the author doesn't seem to be here, so making a new post)
🚨 New paper out in @pnas.org 🚨
Together with Armin Granulo and Christoph Fuchs, we explore how people respond to system-level policies (like bans or mandates) *before* vs. *after* they are implemented.
Paper 🔗 doi.org/10.1073/pnas...
Preprint 🔗 osf.io/preprints/ps...
Open materials 🔗 osf.io/6qajn/
A new blog post by Lauren Yehle, @michaelsen.bsky.social, Niklas Harring, and Sverker C. Jagers - t.ly/cw_CR!
The article “Conservation for nature and wildlife’s sake: the effects of (non-)anthropocentric ethical justifications on policy acceptability” is available here: t.ly/HWHfQ