that is indeed a bad take
Posts by Ryan Briggs
No she’s right
Me, just finishing reading a kids' version of The Wizard of Oz to my 3-year-old: “There’s no place like home.”
“No”
“Is home your favourite place?”
“No”
“What is your favourite place?”
“Hot pot”
Yeah x.com/ryancbriggs/...
The more that folks outside of science hear of this, the more they should ask: why?
open.substack.com/pub/joshuasw...
There are no longer any great options in the twitter-ish social media space and it’s sad
Feel free to borrow/steal---the source is here: github.com/pos5747/notes
The goal is a *still relevant* course on parametric models that can sit alongside a course on more agnostic methods for causal inference.
*These are in-progress and written in the pre-Claude Code era.
fr
chag sameach!
A screenshot showing: Introduction

These are notes for my class on probability models. In these notes, I walk through the concepts and computation that support modern probability modeling in political science using both maximum likelihood and Bayesian approaches.

The Goal

There are many excellent books on probability models. But I felt the need to write my own. Why? I saw three problems.

First, some classes assign a huge textbook. It might be possible for the strongest and most motivated students to become familiar with the range of topics covered in these textbooks, but impossible to master them. Instead, these textbooks seem like references, something you’re supposed to constantly be referring back to throughout your career. I know this because many of these books have instructors’ guides that suggest what should be covered in a single semester, what should be skipped, and how one might jump around. Instead, I want a book that students can work through beginning to end and master each idea.

Second, some classes assign a variety of sections from several books and a collection of articles. But then the story told in the readings isn’t coherent. The styles are changing, the authors’ tastes are changing, and the notation is changing. Switching among authors can feel like whiplash when learning a difficult subject. Instead, I want a book that tells a continuous story with consistent style, tastes, and notation.

Third, some classes assign readings that support the lecture material, without exact alignment between the two. For better or worse, the content covered by the instructor in class feels like the most important material. Thus, I want a book that exactly aligns with the material I cover in class.
You guys, @carlislerainey.bsky.social has a free textbook online and it seems really useful pos5747.github.io/notes/
probably a good time to share this again: asteriskmag.com/issues/10/ca...
A massive seven-year project exploring 3,900 social-science papers has ended with a disturbing finding
go.nature.com/4bZ9k0W
📄Published Today in Nature:
500 researchers reproduced 100 studies across the social & behavioral sciences to assess their analytical robustness (led by @balazsaczel.bsky.social & @szaszibarnabas.bsky.social).
Article: www.nature.com/articles/s41...
Preprint: osf.io/preprints/me...
TLDR: 1/11
🧵1/ Our first meta-science paper (with 350+ coauthors) is published today in Nature. It presents one of the largest-ever reproducibility projects in economics & political science.
Here’s what we found 👇
Today is the day that a lot of social scientists get their first Nature publication
🙏🙏 The hair is the real win
I know how to solve the marriage and birth rate crisis and the good news is that it’s delicious
Exactly right. It’s passionfruit curd.
Homemade scone with passionfruit curd on one half and clotted cream and raspberry jam on the other.
You can just make scones
EA-adjacent behaviour
And I agree with this bsky.app/profile/adam...
Frankly, I’m confused about the confusion here.
My attempt at articulating why I think LLMs are a clear net positive for all research, and it's a mistake to ignore it despite private concerns:
paulgp.com/2026/03/16/r...
Did we not already have RLHF when the parrot paper came out?
Thank you Claude
I want your bubble!
OK, I’m convinced. I agree that parrots are cool.
Is it possible that we disagree because you have forgotten the original paper that introduced the term "stochastic parrot"? I realize that's a tempting argument for me to make (my critics are ill-informed, lol), but maybe it's true? bsky.app/profile/ryan...
RL on e.g. math problems means they are not trained to merely predict the next word "according to probabilistic information about how they combine". They are trained to get the right answer to problems. This is a really critical difference in my view.
the original paper that introduced it said LLMs are "stitching together sequences of linguistic forms... observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning." This understanding is what I'm pushing back on.