The NSF's FY2027 budget notes that the Social, Behavioral, and Economic Sciences (SBE) directorate will be closed out. This is not a good thing. nsf-gov-resources.nsf.gov/files/FY-202...
Posts by Luke Sanford
Thank you for all the support and reposts!
We've gotten a steady stream of inquiries and submissions for this competition, but also some ANXIETY that the window will close before people have a chance to submit.
We're nowhere near that! We'll update on here when we've allocated 50% of the capacity.
As usual, PECE is the day before APSA. We have a great venue and aim to showcase some of the most exciting research in this area.
Please share and let me know if you have questions!
Organized this year by:
Amanda Kennard, Dustin Tingley, and me!
Announcing the 2026 Political Economy of Climate and Environment (PECE) Conference!
When: 9/2/2026
Where: Harvard
CFP: www.pece-conference.org/cambridge-20...
Submit your papers or apply to attend: forms.gle/6y7YUoxJ6HTsz4cy8
Sponsored by @weatherheadcenter.bsky.social and the Sala Institute
@cesarbmartinez.bsky.social has some working papers in this area
@guygrossman.bsky.social I will send you a new one when we post it next week!
👇 Great review of climate papers published in 2025!!
Abadie, Diamond, and Hainmueller
Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program
You can find more details at www.journals.uchicago.edu/doi/10.1086/... Thanks to all who helped along the way!
Omg I did this in graduate school and it was the worst.
Met @annaleen.bsky.social and got to spend some good time talking science and spec-fic, and they were even more amazing and brilliant than I expected. They had thought deeply about such interesting and important things and knew how to say them in exactly the right way.
📌
If your research involves RD designs, check out this important new working paper from Ghosh, Imbens, and Wager: "PLRD: Partially Linear Regression Discontinuity Inference" arxiv.org/pdf/2503.09907
Watching Jurassic Park, where a computer nerd with a debt problem and delusions of grandeur tears down all the safety systems, with no understanding of the consequences, so he can better facilitate his planned espionage and theft.
If Vladimir Putin had a plan to foul our air and water, wreck public health and drive America over the cliff of irreversible lethal climate change, it would look exactly like Lee Zeldin’s plan. This is a plan for self-inflicted environmental disaster.
www.theguardian.com/us-news/2025...
Here's what that image was supposed to look like:
A satellite scanning trees on a hillside vs. trees on flat ground, observing more trees on the hill than on the flat ground
We went with roads since it's easy to see how that measurement error could arise. Often we have no idea why remote sensing + ML errors occur.
@bstewart.bsky.social and co-authors explore the same issue in text measurement models like LLMs and find something similar: even small measurement errors can lead to large biases in downstream causal tasks when they aren't orthogonal to treatment.
@sandysum.bsky.social and co-authors test a different method to correct for the same source of bias across many remotely sensed variables and ground-truth datasets, and find this bias everywhere (www.nber.org/papers/w30861)
Imagine you run a land tenure reform RCT where the DV is tree cover. It turns out your treatment also causes more irrigated ag, which is misclassified as tree cover more often than rainfed ag (year-round greenness). The estimated treatment effect will be greater than the true treatment effect.
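To make that mechanism concrete, here is a toy simulation. All numbers, misclassification rates, and variable names are illustrative assumptions, not values from the paper: when the classifier mistakes irrigated ag for tree cover more often under treatment, the naive difference in means overshoots the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical land-tenure RCT: treatment randomly assigned.
treat = rng.binomial(1, 0.5, n)

# True tree cover with a modest positive treatment effect.
true_effect = 0.05
tree_cover = 0.30 + true_effect * treat + rng.normal(0, 0.05, n)

# Treatment also induces more irrigated agriculture, and the classifier
# mistakes irrigated plots for tree cover (year-round greenness).
irrigated = rng.binomial(1, 0.20 + 0.30 * treat, n)
measured = tree_cover + 0.10 * irrigated + rng.normal(0, 0.02, n)

true_diff = tree_cover[treat == 1].mean() - tree_cover[treat == 0].mean()
naive_diff = measured[treat == 1].mean() - measured[treat == 0].mean()
print(f"true effect: {true_diff:.3f}   estimate with mismeasured DV: {naive_diff:.3f}")
# The estimate is inflated by roughly 0.10 * 0.30 = 0.03 on top of the true 0.05.
```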
While that's our running example for the paper, there's definitely a broader issue here. We think assuming no correlation between measurement error and treatment is akin to a selection-on-observables assumption, which we usually require extraordinary evidence to believe. A couple of examples below:
@jonproctor.bsky.social @vivianodavide.bsky.social @bstewart.bsky.social and others I can't find tags for
9/9
Other great work in this area: www.nber.org/papers/w30861, arxiv.org/abs/2501.18577, arxiv.org/abs/2411.10959, and arxiv.org/abs/2306.04746 focus on “predict-then-debias”, the right move if you're using off-the-shelf data. But if you're training the ML model yourself, give our adversarial approach a try!
8/9
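If "predict-then-debias" is new to you: one common form of the idea (as in the prediction-powered-inference line of work) corrects a proxy-based estimate using the average prediction error on a small labeled subsample. A minimal sketch with simulated data standing in for real predictions; the setup and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_all, n_labeled = 50_000, 500

# Simulated ground truth and a biased ML proxy of it.
y = rng.normal(0.4, 0.1, n_all)
pred = y + 0.05 + rng.normal(0, 0.05, n_all)   # proxy with a +0.05 bias

# Ground-truth labels only exist for a small random subsample.
lab = rng.choice(n_all, n_labeled, replace=False)

naive = pred.mean()                              # biased proxy-only estimate
rectifier = (pred[lab] - y[lab]).mean()          # estimated bias on the labeled set
debiased = naive - rectifier                     # predict-then-debias estimate
print(f"truth {y.mean():.3f}  naive {naive:.3f}  debiased {debiased:.3f}")
```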
Reach out if you want to debias some measurements in a particular application!
7/9
It’s easy to plug in any causal variable that might bias your ML-driven proxy. The adversary directly leverages your labeled data—so if you’re building custom measurement models with large-scale images (or text), you just tack on the adversary, retrain, and your bias vanishes.
6/9
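For intuition, here is a minimal sketch of what "tacking on an adversary" can look like. This is a generic adversarial setup with made-up data, names, and hyperparameters, not the paper's actual architecture or code: a predictor learns the measurement (say, forest cover from image features) while an adversary tries to predict the causal variable (say, proximity to roads) from the prediction errors, and the predictor is penalized whenever the adversary succeeds.

```python
import torch
from torch import nn

torch.manual_seed(0)
n, d = 5_000, 16

# Made-up data: x = image features, y = ground-truth labels (forest cover),
# z = the causal variable the errors should be independent of (e.g. roads).
x = torch.randn(n, d)
z = torch.randn(n, 1)
y = x @ torch.randn(d, 1) * 0.5 + 0.3 * z + 0.1 * torch.randn(n, 1)

predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
mse = nn.MSELoss()
lam = 1.0  # weight on the adversarial penalty (illustrative)

for step in range(2_000):
    # 1) Adversary: predict z from the (detached) prediction errors.
    err = (predictor(x) - y).detach()
    opt_a.zero_grad()
    mse(adversary(err), z).backward()
    opt_a.step()

    # 2) Predictor: fit the labels while making its errors uninformative
    #    about z, i.e. while increasing the adversary's loss.
    opt_p.zero_grad()
    pred = predictor(x)
    loss = mse(pred, y) - lam * mse(adversary(pred - y), z)
    loss.backward()
    opt_p.step()
```

In a real application you would train on the labeled imagery and then generate debiased predictions for the full study area; lam trades off predictive fit against independence of the errors from the causal variable.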
We then use labeled forest cover data from high-resolution imagery. When comparing the ML predictions to ground-truth labels, a naive model underestimates forest cover near roads. Our adversarial model, by contrast, recovers unbiased estimates, giving more reliable coefficients.
5/9
We induce measurement error bias in a simulation of the effect of roads on forest cover. We show that a naive model yields biased estimates of this relationship, while an adversarial model gets it right.
4/9
We also introduce a simple bias test: regress the ML prediction errors on your independent variable. If the coefficient is nonzero, you have measurement error bias. If you run this test while gathering ground-truth data, you can estimate how many labeled observations you'll need to reject a target amount of bias.
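A back-of-the-envelope version of that test, with simulated stand-ins for the prediction errors and the independent variable (all names and numbers are placeholders):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Stand-ins: x = your independent/causal variable on the labeled sample,
# errors = ML prediction minus ground truth on the same observations.
x = rng.normal(size=n)
errors = 0.08 * x + rng.normal(scale=0.2, size=n)  # simulated errors correlated with x

fit = sm.OLS(errors, sm.add_constant(x)).fit()
print(fit.params, fit.bse)
# A slope on x distinguishable from zero signals measurement error bias.
# The slope's standard error is what determines how many labeled
# observations you need to reject a given amount of bias.
```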