
Posts by James E. Pustejovsky

The machines are fine. I'm worried about us. On AI agents, grunt work, and the part of science that isn't replaceable.

Completely on-point analysis from @minaskar.bsky.social. Required reading for every astrophysicist (and scientist).

2 weeks ago 12 3 0 1

No doubt. But I think the submission guidelines are still the most authoritative source for policy statements. The policy needs to be publicly stated, not buried in a form behind Editorial Manager.

3 weeks ago 0 0 0 0

Personally, I think journals should also put in place and enforce very serious consequences for undisclosed LLM use, such as year+ bans on submission to the journal.

3 weeks ago 0 0 0 0

I’m an AE at another APA journal and am lobbying to update submission guidelines there. I don’t think it’s adequate to just link to the general APA policy, because that leaves it ambiguous whether the editor actually takes the policy seriously.

3 weeks ago 1 0 2 0

I think it would be useful to communicate the policy more prominently in the submission guidelines. Not to make excuses, but I think authors look more carefully at those guidelines than at the submission form (which some might not review carefully when they’re eager to hit submit).

3 weeks ago 1 0 1 0

I'm going to be consulting this frequently over the coming years both for my research and editorial work. A tremendously valuable effort.

1 month ago 2 0 0 0

Reviewing a manuscript on a topic that I'm very interested in.

And concluding that it is mostly (if not wholly) AI slop.

Don't know whether to laugh at the ridiculous tone or despair for the future of peer review.

1 month ago 4 0 0 0

While working full-time & raising a child, Grace Wahba earned advanced degrees, completing her PhD at Stanford and becoming the first female faculty member in statistics at the University of Wisconsin-Madison. #womenshistorymonth #statwomen magazine.amstat.org/blog/2026/03... #statssky

1 month ago 22 11 1 1
GitHub - coatless-tutorials/convert-shiny-app-r-shinylive: Demo showing how to setup continuous integration deployment of an R Shinylive App on GitHub Pages through GitHub Actions

I've started puttering with shinylive/webr: github.com/coatless-tut...

1 month ago 5 1 0 0
Design & Analysis of Quasi-Experiments for Causal Inference – James E. Pustejovsky Education Statistics and Meta-Analysis

jepusto.com/teaching/Qua...

1 month ago 1 0 1 0

‼️ Postdoc recruitment

Want to help build and understand the future of scientific collaboration? We are seeking a postdoc in computational meta‑science.

📍 UF (Gainesville, FL)

💰 $55–60k (1-3 years)

🧠 High intellectual agency

📅 Deadline March 10

Send us your idea. Details attached!

1 month ago 11 18 1 2

I bet that many faculty and/or alumni from institutions that had training grants would chip in to get something like this together. (I’m a Northwestern alum and faculty at UW Madison.)

1 month ago 3 0 1 0
Cluster-Robust (Sandwich) Variance Estimators with Small-Sample Corrections Provides several cluster-robust variance estimators (i.e., sandwich estimators) for ordinary and weighted least squares linear regression models, including the bias-reduced linearization estimator int...

Also this is basically the same thing as the CR3 cluster-robust standard error, implemented in clubSandwich: jepusto.github.io/clubSandwich/

1 month ago 4 0 1 0
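For readers curious what the CR3 adjustment does, here is a minimal numpy sketch of a CR3-style (jackknife-type) cluster-robust variance estimator for OLS. This is an illustrative Python translation, not the clubSandwich implementation (which is an R package and should be used in practice); the simulated data and variable names are hypothetical.

```python
import numpy as np

def cr3_vcov(X, y, cluster):
    """CR3-style cluster-robust covariance for OLS: each cluster's
    residuals are inflated by (I - H_gg)^{-1} before forming the
    sandwich 'meat' (a jackknife-type small-sample correction)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros_like(bread)
    for g in np.unique(cluster):
        idx = cluster == g
        Xg, eg = X[idx], resid[idx]
        Hgg = Xg @ bread @ Xg.T                       # within-cluster leverage
        ug = np.linalg.solve(np.eye(idx.sum()) - Hgg, eg)
        meat += np.outer(Xg.T @ ug, Xg.T @ ug)
    return bread @ meat @ bread

# Tiny simulated example (hypothetical data) with a cluster-level shock
rng = np.random.default_rng(0)
n, G = 120, 12
cluster = np.repeat(np.arange(G), n // G)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n) + rng.normal(size=G)[cluster]
V = cr3_vcov(X, y, cluster)
print(np.sqrt(np.diag(V)))  # cluster-robust standard errors
```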
Visiting Poverty Scholars Program, 2026-2027
The Institute for Research on Poverty is calling for applications for its Visiting Poverty Scholars Program.

The Visiting Poverty Scholars program funds up to four poverty scholars per year to visit IRP or any one of its U.S. Collaborative of Poverty Centers (CPC) partners for five days in order to interact with its resident faculty, present a poverty-related seminar, and become acquainted with staff and resources. Visiting scholars will confer with a faculty host, who will arrange for interactions with others on campus.

The application deadline is 11:59 p.m. Central on Friday, April 3, 2026.

Eligibility: Applicants must be PhD-holding, U.S.-based poverty scholars at any career level who are from economically disadvantaged backgrounds.

#FundSocSci
www.irp.wisc.edu/visiting-pov...

1 month ago 9 6 0 0
Map showing “One-year change in ZIP Code home prices between January 2025 and January 2026” with Wisconsin seeing some of the highest increases

it’s almost like Wisconsin needs a statewide housing strategy…

2 months ago 35 10 4 0
Resources for Supporting Postsecondary Education Randomized Controlled Trials | MDRC

Thinking of running an RCT in postsecondary education?

MDRC has created a fantastic set of resources to help you project minimum effect sizes, randomize, and process data.

Proud to have helped advise this project!

www.mdrc.org/the-rct

2 months ago 47 21 1 1
Aerial photo of Madison’s state capitol building with both sides of the isthmus visible

Madison, Wisconsin — 2026

2 months ago 81 12 1 1
Using Extant Data to Improve Estimation of the Standardized Mean Difference - Kaitlyn G. Fitzgerald, Elizabeth Tipton, 2025 This article presents methods for using extant data to improve the properties of estimators of the standardized mean difference (SMD) effect size. Because sampl...

Katie Fitzgerald and Beth Tipton (@statstipton.bsky.social) make a similar argument here: doi.org/10.3102/1076...

2 months ago 4 0 2 0

(This is not solely about meta-analysis, either. I would argue the same if a field relied on narrative / interpretive review methods.)

2 months ago 0 0 0 0

But I think it is critical that journals very carefully consider how their selection criteria might distort the published record in a way that hinders the systematic accumulation of evidence.

2 months ago 0 0 2 0

I think we could agree that there's no need for journals to publish poorly conducted studies, e.g., where assignment to condition was haphazard, where implementation of an intervention was compromised, where there were major confounds, where instrumentation was bad, etc.

2 months ago 1 0 1 0

Things that I did not assert and that I would not argue for:
1) that journals should publish all studies ever done
2) that journals should be indifferent to the nature of the evidence.

2 months ago 1 0 2 0

My argument was that the point of journals should be to curate the scientific record, and that this requires using systems of evaluation that allow for accumulation of evidence across individual studies.

2 months ago 1 0 1 0

Relevant for both yes, but I worry about the cure being worse than the disease. Sample reliability coefficients are noisy, so I think it's not obvious that one should routinely use them for artifact correction (for r or for d).

2 months ago 1 0 1 0
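To make the "cure worse than the disease" worry concrete, here is a small simulation (hypothetical numbers, Python rather than R) of the standard attenuation correction, r_corrected = r / sqrt(rxx · ryy): when the reliability coefficients plugged in are themselves noisy sample estimates, the corrected correlations end up more variable than the uncorrected ones.

```python
import numpy as np

# Attenuation correction with rxx = ryy = rel: observed r is
# true_r * rel, and the correction divides by the *estimated* rel.
# Noise in the reliability estimate propagates into the corrected r.
rng = np.random.default_rng(1)
true_r, true_rel = 0.30, 0.70
reps = 10_000

r_obs = true_r * true_rel + rng.normal(0, 0.05, reps)                # attenuated r
rel_hat = np.clip(true_rel + rng.normal(0, 0.10, reps), 0.3, 0.99)   # noisy reliability
r_corr = r_obs / rel_hat                                             # corrected r

print("SD of observed r: ", round(r_obs.std(), 3))
print("SD of corrected r:", round(r_corr.std(), 3))  # larger: correction adds noise
```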
Sage Research Methods - Methods of Meta-Analysis: Correcting Error and Bias in Research Findings Designed to provide researchers clear and informative insight into techniques of meta-analysis, the Third Edition of Methods of Meta-Analysis: Correcting Err

Hunter & Schmidt (2007, methods.sagepub.com/book/mono/me...) describe this as the artifact of direct range restriction. It is much better known for correlations, but your example is a great illustration that the issue is relevant for SMDs too.

2 months ago 4 0 1 0
Effect Sizes in Cluster-Randomized Designs - Larry V. Hedges, 2007 Multisite research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Researchers would like to...

What about Hedges (2007, doi.org/10.3102/1076...)? He describes several different ways of defining SMDs for cluster-randomized experiments, though in practice I've only ever standardized by total variance.

2 months ago 1 0 0 0

I agree with your main point that d = 22 is ridiculous in substantive terms and should not be included in a meta-analysis. But I would also note that this is partly because there is no universal SMD metric. There are many different ways of defining SMD, which are not all commensurable.

2 months ago 2 0 1 0

which will usually be only a small part of the total variation in scores. I would think that the GRIM calculations would need to take this into account to determine whether a set of reported scores are plausible or not. Does your PubPeer comment do so? I couldn't tell from what you wrote.

2 months ago 2 0 1 0

In this article, the Ms and SDs in Table 2 are calculated by first averaging the individual scores at the classroom level, and then taking M and SD across classrooms (of which there were only a few per condition). So, roughly, the SD in the SMD is based only on between-classroom variation...

2 months ago 1 0 1 1
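The mechanism described in the thread above can be illustrated with a small simulation (hypothetical numbers): when scores are first averaged within classrooms, the SD across classroom means mostly reflects between-classroom variation, which is usually a small share of total variation, so an SMD standardized by it can be wildly inflated relative to one standardized by the student-level SD.

```python
import numpy as np

rng = np.random.default_rng(2)
n_class, n_per = 4, 25             # only a few classrooms per condition
sd_between, sd_within = 0.2, 1.0   # between-classroom variation is a small share
delta = 0.4                        # true student-level mean difference

def scores(shift):
    class_means = shift + rng.normal(0, sd_between, n_class)
    return np.repeat(class_means, n_per) + rng.normal(0, sd_within, n_class * n_per)

treat, ctrl = scores(delta), scores(0.0)

# SMD standardized by the pooled student-level SD
sd_student = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
d_student = (treat.mean() - ctrl.mean()) / sd_student

# SMD standardized by the SD across classroom means only
t_means = treat.reshape(n_class, n_per).mean(axis=1)
c_means = ctrl.reshape(n_class, n_per).mean(axis=1)
sd_class = np.sqrt((t_means.var(ddof=1) + c_means.var(ddof=1)) / 2)
d_class = (t_means.mean() - c_means.mean()) / sd_class

print(f"d (student-level SD):  {d_student:.2f}")
print(f"d (classroom-mean SD): {d_class:.2f}")  # much larger in magnitude
```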
It must be very hard to publish null results
Publication practices in the social sciences act as a filter that favors statistically significant results over null findings. While the problem of selection on significance (SoS) is well-known in theory, it has been difficult to measure its scope empirically, and it has been challenging to determine how selection varies across contexts. In this article, we use large language models to extract granular and validated data on about 100,000 articles published in over 150 political science journals from 2010 to 2024. We show that fewer than 2% of articles that rely on statistical methods report null-only findings in their abstracts, while over 90% of papers highlight significant results. To put these findings in perspective, we develop and calibrate a simple model of publication bias. Across a range of plausible assumptions, we find that statistically significant results are estimated to be one to two orders of magnitude more likely to enter the published record than null results. Leveraging metadata extracted from individual articles, we show that the pattern of strong SoS holds across subfields, journals, methods, and time periods. However, a few factors such as pre-registration and randomized experiments correlate with greater acceptance of null results. We conclude by discussing implications for the field and the potential of our new dataset for investigating other questions about political science.

I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.

2 months ago 644 222 30 52
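A back-of-the-envelope version of the kind of selection model the abstract describes (with hypothetical parameters, not the paper's actual calibration): given an assumed share of true nulls among conducted studies and the observed ~2% null share among published abstracts, one can solve for the implied relative publication probability of null vs. significant results.

```python
# Simple selection-on-significance model (hypothetical parameters).
# Let pi = share of *conducted* studies with null-only results, and
# s = prob. a null is published relative to a significant result.
# Published null share: p = pi*s / (pi*s + (1 - pi)); solve for s.
def selection_ratio(pi, p):
    return (p / (1 - p)) * ((1 - pi) / pi)

observed_null_share = 0.02            # ~2% of published abstracts, per the paper
for pi in (0.2, 0.5, 0.8):            # assumed true null rates (hypothetical)
    s = selection_ratio(pi, observed_null_share)
    print(f"true null share {pi:.0%}: nulls ~{1/s:.0f}x less likely to be published")
```

Across these assumed null rates the implied penalty runs from roughly tenfold to a couple hundredfold, consistent with the abstract's "one to two orders of magnitude."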