
Posts by Joris Frese

Null by Design: Statistical Dilution in Immigration-Crime Research | British Journal of Political Science | Cambridge Core

Interesting paper on minimum detectable effects and underpowered designs
www.cambridge.org/core/journal...
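The linked paper is about minimum detectable effects (MDEs). As a minimal sketch of the standard textbook formula (not necessarily the paper's exact method): the smallest true effect a two-sided test can detect with a given power is (z_{1-α/2} + z_{power}) times the estimator's standard error. The SE value below is hypothetical.

```python
from statistics import NormalDist

def minimum_detectable_effect(se, alpha=0.05, power=0.80):
    """Smallest true effect a two-sided test at level `alpha` can
    detect with probability `power`, for an estimator with
    standard error `se` (standard asymptotic approximation)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * se

# Hypothetical example: a regression coefficient with SE = 0.10
mde = minimum_detectable_effect(0.10)  # ≈ 0.28
```

The takeaway: with SE = 0.10, effects smaller than roughly 0.28 are essentially invisible to the design at conventional power, which is why a null result from an underpowered study is weak evidence of a true null.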

8 hours ago 10 3 0 0

No worries at all, and that's great to hear! Looking forward to your follow-up results.

3 days ago 0 0 0 0

link.springer.com/article/10.1...
I conducted a similar study a while back, looking at EU+NA. Among German departments, Konstanz and Mannheim placed the largest numbers of their PhD graduates into professorships at high-ranking European departments ;)

3 days ago 6 1 2 0
European Union and United Kingdom take decisive step towards Erasmus+ association in 2027 Today, the European Union and the United Kingdom enabled the UK's association to Erasmus+ in 2027.

🚨🚨Welcome and very significant news on the #EU-UK relationship and #brexit reversal: yesterday, the European Union and the United Kingdom enabled the UK's association to #Erasmus+ in 2027. 🧵
ec.europa.eu/commission/p...

5 days ago 214 91 14 12
Refugee labor market integration at scale: Evidence from Germany’s fast-track employment program | PNAS Governments face persistent challenges in integrating refugees into the local labor market, and many past interventions have shown limited impact. ...

🚨New Paper in PNAS: "Refugee Labor Market Integration at Scale: Evidence from Germany’s Fast-Track Employment Program"

www.pnas.org/doi/10.1073/... Ungated preprint osf.io/preprints/socarxiv/px9ew_v3

w/ J Hainmueller, D Hangartner, @niklas-harder.bsky.social & E Vallizadeh

#econtwitter #econsky

6 days ago 71 30 1 0

New study out in Nature Human Behaviour: 37 million US users were exposed to deceptive networks on Facebook & 3 million on Instagram during the 2020 elections—roughly 15% and 2% of active users. 🧵

1 week ago 10 7 1 0

📄Published Today in Nature:

500 researchers reproduced 100 studies across the social & behavioral sciences to assess their analytical robustness (led by @balazsaczel.bsky.social & @szaszibarnabas.bsky.social).

Article: www.nature.com/articles/s41...

Preprint: osf.io/preprints/me...

TLDR: 1/11

2 weeks ago 91 48 2 4

5 reanalyses per paper were the target, but in a few cases the numbers deviate slightly: some analysts dropped out of the project; all analyses were peer-evaluated for soundness and some were deemed unfit; and in a few cases, more analysts than anticipated signed up for the same paper.

2 weeks ago 1 0 0 0

Really excited to see this published! I contributed a small part by doing one of the "robustness replications" as my econ friends call it. Stellar coordination effort by @balazsaczel.bsky.social, @szaszibarnabas.bsky.social et al.!

2 weeks ago 9 2 0 0

No, it didn't model sampling error (github.com/marton-balaz...), and you are definitely right! That said, this just serves as a descriptive visualization, not a formal hypothesis test anyway.

2 weeks ago 2 0 0 0

PS: This is just one of several papers released today under the umbrella of the COS SCORE project (led by @briannosek.bsky.social & Tim Errington). I was not involved in any of the other studies but look forward to reading them. You can find out more about SCORE here: www.cos.io/score.

2 weeks ago 2 0 0 0

This paper could be of particular interest to the polsci community right now, given the intense discussions about replication and robustness to specification changes in the APSR+JOP in recent months. Perhaps this is an opportunity to reflect on this topic from a more bird's-eye view. 11/11

2 weeks ago 3 0 1 0

And: only in one case out of 100 studies did none of the five re-analysts arrive at the same conclusion as the original authors! Curious to hear what others make of these results. 10/11

2 weeks ago 5 0 1 0

While the findings give some grounds for concern, I am personally inclined to read them a bit more optimistically than is reflected in the framing that the lead authors chose: in the large majority of cases, independent re-analysts arrive at the same conclusions. 9/11

2 weeks ago 8 0 1 0

Another, more optimistic way to look at the results: when moving away from strict effect size estimates, roughly 3/4 of analysts arrived at the same overall conclusions as the original studies (again higher in psychology and in experimental studies). 8/11

2 weeks ago 2 1 1 0

In line with the overall high proportion of results outside the tolerance region, reproduction effect sizes were on average substantially smaller than originals (compare the linear fit to the perfect diagonal line that would be seen given equivalent effect sizes). 7/11

2 weeks ago 1 0 2 0

Less surprisingly (given stricter standards for pre-registration and more straightforward identification), experimental studies are more analytically robust than observational ones. Nb, they may well still suffer from various problems that a reproduction on the same data can’t address. 6/11

2 weeks ago 2 2 1 0

1/3 of reproductions produced results “identical” to the original ones (within a tolerance region of 0.05 Cohen's d). Alignment between original and re-analyses was higher in psychology than in, e.g., econ (perhaps surprisingly, given how closely psych is associated with the replication crisis). 5/11
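The tolerance-region criterion described above amounts to a simple band check: a reproduction counts as "identical" if its effect size lands within ±0.05 Cohen's d of the original. A minimal sketch with hypothetical effect-size pairs (not the study's data):

```python
def within_tolerance(d_original, d_reproduction, tol=0.05):
    """True if the reproduced effect size falls inside the
    +/- tol band around the original (both in Cohen's d)."""
    return abs(d_reproduction - d_original) <= tol

# Hypothetical (original, reproduction) pairs
pairs = [(0.30, 0.28), (0.30, 0.12), (0.50, 0.47)]
share_identical = sum(within_tolerance(o, r) for o, r in pairs) / len(pairs)
# Here 2 of 3 pairs fall inside the band
```

Note that this is a pure distance criterion on point estimates; it ignores sampling error in both estimates, which is the caveat raised elsewhere in the thread.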

2 weeks ago 4 1 1 0

The COS SCORE project set out to reproduce findings from a random draw of 100 social and behavioral science studies (with 5 independent reproductions per study). Reproduction analysts were largely free to make their own modeling choices to re-test the given hypotheses with the same data. 4/11

2 weeks ago 1 0 1 0

How analytically robust are published findings to alternative, reasonable model specifications which the original authors did not explore (or report)? 3/11

2 weeks ago 1 0 1 0

The paper is based on the premise that data can be analyzed in different justifiable ways to answer the same research question. Well-known researcher degrees of freedom (think estimator choice, model specification, choice of controls, weights, outliers, etc) can drastically influence findings. 2/11

2 weeks ago 1 0 1 0

Just read the abstract 🫠 via Alexander Magazinov. I don't believe he is on Bluesky.

1 month ago 122 25 15 49

When you collect data online, are the results from humans or AI? In a project led by Booth PhD student Grace Zhang, we estimate the prevalence of AI agents on commonly used survey platforms:
osf.io/preprints/ps...
🧵

1 month ago 110 50 4 5

@areiljan.bsky.social though it doesn't look like he is very active on bluesky.

1 month ago 1 0 0 6
Screenshot of claude just writing a design no trouble

Writing simulations in DeclareDesign just went from "I should do that, but it's kind of a lot of work" to extremely easy

1 month ago 62 10 4 2

Now out in the American Sociological Review

We present the first large-scale assessment of the structure and evolution of temporalities expressed in U.S. climate change news coverage (2000 to 2021). For this, we analyzed more than 23,000 statements about climate change effects and actions. 🧵 1/

1 month ago 71 26 2 0
An image of the schedule with speaker images. You can find the full schedule on tada.cool.

🚨 TADA Speaker Series Spring 2026 schedule is here! 🚨

We've assembled a fantastic lineup of researchers exploring the future of survey research in the age of LLMs.

Mar 18 - May 27, online at 17:00 CEST. Join us!

More info & signup: tada.cool

1 month ago 38 23 0 1

🧺 Paper Picnic 2.0 is here! More journals. New features. An easier way to keep up with the latest research in political science and adjacent fields. 🧵👇

1 month ago 72 32 1 2

1/ Sorry for double-posting from X. Sharing a new working paper for the Year of the Horse 🐎:

"An AI-assisted workflow that scales reproducibility in empirical research" (bit.ly/repro-ai) w/ Leo Yang Yang

2 months ago 77 27 4 6