
Posts by David Slichter

Before the credibility revolution, everything was incredible.

1 day ago

Looking forward to the bunching conference tomorrow and Saturday! There will be explainer talks for people new to these methods. If you want to watch the talks remotely, there's a Zoom link on the program:
www.gregoriocaetano.net/resources/Bu...

4 days ago

I think it's great to want replications, but I agree that generally there is an awkwardly large gap between the tastes of applied people and econometricians.

6 days ago

I think they're recording the talks too. I'll try to remember to post a link after the conference if they do.

1 week ago

Revised program: All talks occur simultaneously at 9AM on the first day. Every participant chooses their own lunchtime but you get an extra bag of chips if you wait until noon.

1 week ago

You don't need to register or anything, you can just watch on Zoom!

1 week ago

The way I'd phrase it is that common trends is almost never exactly true, but often approximately true. Therefore I think of DiD as usually not consistent for any causal effect, but nonetheless usually informative about effects.
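A minimal arithmetic sketch of this point, with made-up group means and no sampling noise: if the treated group's counterfactual trend differs slightly from the control group's, DiD recovers the true effect plus that gap.

```python
# Hypothetical population group means; all numbers are illustrative
true_effect = 2.0
trend_gap = 0.2                  # treated counterfactual trend exceeds control's by 0.2
pre_c, post_c = 10.0, 11.0       # control group: trend of +1
pre_t = 10.0
post_t = pre_t + 1.0 + trend_gap + true_effect

did = (post_t - pre_t) - (post_c - pre_c)
print(did)  # 2.2 = true_effect + trend_gap: biased, but still informative
```

With a small violation of common trends, the estimate is off by exactly the trend gap, so it is inconsistent for the causal effect yet still close to it.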

1 week ago

In the spec with both versions, if someone only told you the coef on alternative treatment X', how would it change your guess about the coef on X? And does the coef on X' make sense? The body of evidence from a reg about the effect of X depends on more than the coef on X.

1 week ago

Is the difference that the SEs get smaller or that the coefficient shifts? If the two specs are a priori equally plausible, the best guess about the parameter will be a weighted average of the two estimates, with greater weight on the more precise spec.
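A quick sketch of that weighting with made-up numbers (inverse-variance weights, i.e. precision weighting):

```python
import math

# Hypothetical estimates from two a priori equally plausible specs
b1, se1 = 0.80, 0.40   # less precise spec
b2, se2 = 0.50, 0.20   # more precise spec

# Inverse-variance weights: the more precise spec gets more weight
w1, w2 = 1 / se1**2, 1 / se2**2
b_bar = (w1 * b1 + w2 * b2) / (w1 + w2)
se_bar = math.sqrt(1 / (w1 + w2))  # valid only if the estimates were independent

print(b_bar, round(se_bar, 3))  # 0.56 0.179
```

In practice the two specs use the same data, so the estimates are correlated and the combined SE above is optimistic; the point-estimate logic still goes through.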

1 week ago

bsky.app/profile/davi...

3 weeks ago

I'm struggling to imagine an informative sensitivity analysis where X is not even the primary reason why Z and Y are correlated.

3 weeks ago

The point is simply to reduce the role of framing when you think about IV validity. I 100% agree that there is too much emphasis on having research designs with exactly zero asymptotic bias, but the approach you describe still requires thinking clearly about all the reasons Z and Y are correlated.

3 weeks ago

I've personally never had that problem, maybe because I explain IV thoroughly before I get to this discussion. If it did remind students of proxy variables, maybe that would make it easier to explain IV as a solution to attenuation bias!
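A quick simulation of that last point, with made-up data: classical measurement error in the regressor attenuates OLS toward zero, while an instrument correlated with the true regressor (but independent of the measurement error and the outcome error) recovers the slope. All numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0, 1, n)                # true regressor
y = 2.0 * x + rng.normal(0, 1, n)      # true slope is 2
x_obs = x + rng.normal(0, 1, n)        # x measured with noise (signal share 1/2)
z = x + rng.normal(0, 1, n)            # instrument: relevant, independent of both errors

c = np.cov(x_obs, y)
ols = c[0, 1] / c[0, 0]                           # attenuated: plim = 2 * (1/2) = 1
iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]  # plim = 2

print(round(ols, 2), round(iv, 2))
```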

3 weeks ago

Oops, sorry, Figure XII is the one showing that people who prefer major j to k have a greater benefit from j relative to k than people who prefer k to j. You're right, the other one reflects combined effects of comparative advantage and pref for generically higher-paying majors.

3 weeks ago

I think it's more reasonable to think they're observed but with substantial noise. This means treatment is correlated with treatment effects, though imperfectly. Without any self-knowledge, it's hard to explain findings like Figure IX of this paper:
academic.oup.com/qje/article/...

3 weeks ago

Easier said than done, but: Unless the referees offer specific arguments which change your view of your work, you shouldn't change your view of your work.

3 weeks ago

That's great! They were friends and he got Krueger to give me some career advice before I went to grad school. My connectedness in the profession has been strictly downhill from there!

1 month ago

Awesome! He seems to know an oddly large number of PhD economists, and not through me.

1 month ago

Yes, brother!

1 month ago

I recently learned that Great Divide has a track with a hidden meaning, too: I was listening to Delicious in the car with my four-year-old when she suddenly announced "This song is about food. Vegetables."

1 month ago

Speaking personally, I wouldn't want to be BLUE, but being BLUP sounds very desirable.

1 month ago

I know someone who accidentally gave her (empirical) job talk without slides. There was a misunderstanding with the schedule: She thought she was giving a teaching demo instead of a job talk, but when she showed up for the time slot called "Q&A" it turned out to be a job talk.

1 month ago

I have a joke about rational expectations but

1 month ago

I have a joke about statisticians but it's too mean.

1 month ago

New preprint! We reanalyze 46 papers that use log-like specifications (ln(Z+1), inverse hyperbolic sine, etc.). We find widespread non-robustness, and we show through theory and simulation how these models drive spurious significance. 1/

doi.org/10.31222/osf...
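One well-known fragility of these outcomes, sketched with simulated data (see the preprint for the mechanisms it actually emphasizes): when Z has zeros, the estimated "effect" on ln(Z+1) depends on the units Z is measured in, even when the true effect is a clean 10% proportional increase. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
z0 = rng.lognormal(1.0, 1.0, 100_000)   # hypothetical untreated outcome
z0[rng.random(z0.size) < 0.3] = 0.0     # mass point at zero, common in practice
z1 = 1.1 * z0                           # true effect: a 10% proportional increase

def effect(c):
    # implied "treatment effect" on ln(c*Z + 1) after rescaling Z's units by c
    return np.mean(np.log(c * z1 + 1) - np.log(c * z0 + 1))

print(round(effect(1), 3), round(effect(1000), 3))
```

A genuine log specification would return ln(1.1) regardless of units; here the answer moves with the arbitrary scaling constant c.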

1 month ago

We got an EV and it's hard to imagine ever buying a gas car again. The acceleration and lack of road noise make a big difference to the driving experience. I'd recommend an EV to anyone who can charge at home and doesn't often drive more than 2 hours/day.

1 month ago

Another good tip is to sign your reviews with someone else's name. "Your paper stinks! -Ed Glaeser"

1 month ago

In my experience, when I've been able to do model tests, the sort of analysis you're describing ("we controlled for the obvious confounders") is more often than not basically fine. Maybe not zero bias, but small enough bias and variance to get pretty good estimates. YMMV

1 month ago

It's totally fine to file drawer a paper if you conclude that there is too much modeling and/or sampling error to learn anything about the original question. This is totally different from selectively hiding informative analyses.

2 months ago

Why stop at 100%?

2 months ago