
Posts by Andy Timm

IIRC, test-retest reliability isn’t great, and the concepts only map indirectly onto psychometrically valid constructs. There’s also some weirdness in the emphasis on dichotomizing (e.g., E vs. I) when the underlying construct has a lot of middle ground that most people occupy.

(Not my area of expertise, tbc)

2 weeks ago
Post image
2 weeks ago
Millennial ass TF2 meme about using more bayes

3 weeks ago

It’s a bit of both, from poking around:

They have some misleading language about certain questions not leveraging AI responses, but there are a handful which do have synthetic responses.

So they’re both filling out quotas + making the weighting work + actually generating some select responses.

3 weeks ago

Realizing I'm likely wrong above, apologies. (There's a lot that's odd here!)

TL;DR is some questions do have synthetic responses, so I assume this is to provide more "data" on those and flesh out crosstabs.

4 weeks ago

I was poking a bit more, and I actually think it's to make a handful of the questions/"crosstabs" possible?

Sorry for the bad explanation, a bit spoiled for choice on explanations of why this is weird.

4 weeks ago

hrm, realizing I'm wrong about the weights stabilization thing: I think it's to make the crosstabs ("crosstabs"?) viable? Ick.

I had Claude rake this and you can get it to converge to their target pretty OK. Deff is huge, but, like, not shocking.

4 weeks ago

I'm sure I could find more hilariously weird stuff in here, but instead I'm gonna go touch grass.

If someone tries to look at how fucked the raw data from pollfish is, or learns other fun facts about their synthetic respondents, I'd be curious to hear about it lol

4 weeks ago
Better and worse ways to mix human and LLM responses in behavioral research (but you still have to figure what you’re measuring) | Statistical Modeling, Causal Inference, and Social Science

I'll note there exist kinda interesting estimators that attempt to combine human + synthetic sample in vaguely plausible ways (e.g. PPI/rePPI), or ways to attempt to make more realistic synthetic respondents (e.g. subPOP); this is ~none of that.

This just seems like janky-ass weights stabilization?
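For anyone curious what a principled version looks like: the basic PPI mean estimator averages the model's outputs on the big synthetic/unlabeled sample, then debiases with a "rectifier" estimated on the small human sample. A toy numpy sketch with invented data (all names and numbers here are mine, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small human sample: true responses y plus a model's (biased) predictions.
y_human = rng.normal(0.5, 1.0, size=200)
f_human = y_human + rng.normal(0.2, 0.5, size=200)

# Large synthetic-only sample: model predictions, no human labels.
f_synth = rng.normal(0.7, 1.1, size=5000)

theta_classical = y_human.mean()            # humans only
rectifier = (y_human - f_human).mean()      # estimated prediction bias
theta_ppi = f_synth.mean() + rectifier      # debiased combined estimate
```

The point is that the human sample keeps the estimate honest; the synthetic sample just shrinks variance where the predictions are good.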

4 weeks ago

They posted the raw data for this, fascinating:

Notes so far:
1. It looks like these responses exist to make the weights converge, given the synth respondents have NA for non-weighting Qs?
2. The MoE excludes the synthetic respondents. I guess that's better than treating them as 1:1 for humans?
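For context on note 1: raking (iterative proportional fitting) reweights the sample until weighted margins hit population targets, and it behaves badly when cells are empty or tiny — padding with synthetic rows is one blunt way to make it converge. A toy sketch of plain IPF (illustrative only, no claim this is what the pollster actually runs):

```python
import numpy as np

# Four cells of a 2x2 (group A yes/no x group B yes/no) with sample counts.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
w = np.array([50.0, 10.0, 10.0, 30.0])   # start from raw counts
targets = np.array([0.5, 0.5])           # population share with A=1, B=1

# Alternate over margins, rescaling weights to match each target in turn.
for _ in range(100):
    for j in range(2):
        share = w[X[:, j] == 1].sum() / w.sum()
        w[X[:, j] == 1] *= targets[j] / share
        w[X[:, j] == 0] *= (1 - targets[j]) / (1 - share)
```

With all four cells populated this converges quickly; zero out a cell the targets need and it can't.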

4 weeks ago
Nutpie

Thanks to the lovely r-universe project, I’ve been able to provide pre-compiled bindings for most platforms, simplifying installation.

None of this would be possible without @aseyboldt.bsky.social / @pymc-labs.bsky.social ‘s great nutpie, thanks for the great sampler: pymc-devs.github.io/nutpie/

1 month ago
Nutpie: state-of-the-art mass matrix adaptation for HMC | Statistical Modeling, Causal Inference, and Social Science

Nutpie is ~2x faster on average than the base Stan sampler on tasks in posteriorDB, though I’ve found that’s more like 5x for my more heavily used, more complex models.

Bob Carpenter has a great introductory blog post/paper, written with some PyMC folks, that introduces the sampler and explains the speedup.

1 month ago
GitHub - andytimm/nutpieR

Bayesian friends- if you’re curious to try out the blazingly fast nutpie sampler in R, I just put together a pretty lightweight package that’ll compile your existing Stan models!

1 month ago

r-universe is changing whether I intend to open source something with difficult dependencies.

Getting a package with Rust dependencies working on CRAN is not my idea of a good time, or a good use of my energy :)

1 month ago

I have a joke about bayes factors but nobody really thinks I should tell it

1 month ago

The repo name oh my lord lmao

1 month ago

Oh, other Jaynes. Briefly had a moment of “E.T. Jaynes could totally have a wild ass opinion on this, like his views on measure theoretic probability”.

I do want to read Julian though, he’s been on my reading list for ages (regardless of how probable I find his views).

1 month ago

A big part of the way I think about survey weighting is through connections to causal inference. One thing that emerges from that perspective: regularized weights are pretty, pretty good!

I really like @shiraamitchell.bsky.social ‘s discussion of this connection here, esp the Johansson reference.

1 month ago
regrake: Regularized Raking in R – Andy Timm New year, new package release

New R package release day, woo :)

Regularized raking makes it easier to build complex survey weights that reduce bias without paying as heavy a variance price. regrake makes building these weights convenient in R.

Check it out!
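One way to see the bias/variance trade: instead of forcing weighted margins to hit targets exactly, penalize deviation from the targets and from uniform weights. The sketch below is my own illustration of that general idea via a ridge-penalized entropy-balancing dual — it is not regrake's actual algorithm, and all the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 3)).astype(float)  # 3 binary demographics
t = np.array([0.5, 0.3, 0.6])   # target weighted means
alpha = 10.0                    # larger alpha = closer to exact raking

# Gradient descent on the convex dual; w = softmax(X @ lam) are the weights.
lam = np.zeros(3)
for _ in range(2000):
    logits = X @ lam
    w = np.exp(logits - logits.max())
    w /= w.sum()
    lam -= 0.5 * (X.T @ w - t + lam / alpha)

# Weighted means land near, not exactly on, the targets; weights stay tame.
```

At the optimum the miss on each margin is exactly -lam/alpha, so alpha directly prices target fidelity against weight variance.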

1 month ago

The number of ways/places the normal distribution can pop up occasionally spooks me.

Very strong “are you following me or do we just go all the same places” vibes

1 month ago

Ooh interesting, thanks for these.

1 month ago

This is also the face I make when doing bayes, very cool mr. coypu!

Looks like a cool, short bayes intro book!

1 month ago

gsood.com/research/pap...

This may be helpful on both fronts! + refs in the lit review

1 month ago

This is very strong work, and I want to say I especially appreciate the PRs, given the incentives in academia.

Thank you!

1 month ago

This gave me an idea: I wonder how well a Claude Code skill implementing the “newbie checks” would do. Many of them are fussy, but squinting at them, most are things I’d trust a strong LLM to check.

I have a new package I’m about to finish pushing to CRAN, will try this out/share if it works.

1 month ago

Obvious advice: run the check as CRAN, and take advantage of the win-builder services.

Less obvious: assume you’ll need to iterate on your first submission(s). For more complex packages, assume that the process will involve some requirements you don’t personally find valuable or particularly well considered.

1 month ago
81st Annual AAPOR Conference

AAPOR program’s up. So much cool work this year; really stoked to hear about all of it!

I’ll be sharing some of GP’s work on bot/LLM detection, and also our work on survey experiment designs that manipulate attention in-survey to understand how noisier environments modify ad effects.

2 months ago

If polarization is interesting, Lilliana Mason’s Uncivil Agreement or Neil O’Brian’s The Roots of Polarization are both solid.

If you want a more hopeful account (who doesn’t right now…), I found Jon Meacham’s The Soul of America helpful, though it’s been a few years since I read it.

2 months ago

A few different directions-

1. For a comparative approach on democratic backsliding, “How Democracies Die” is great. The authors + Laura Gamboa both have good next reads.

2. For more “how we got here” party institutions wise, American Carnage and/or The Hollow Parties are great.

1/2

2 months ago

Might be better than my current strategy of attempting to nerd snipe an econometrician friend who also runs with random questions.

“How likely to replicate do you think the studies about rotating shoe pairs between runs are?” <— basically a box trap for my people

2 months ago