I regulate effort by my ability to follow technical concepts instead of nose breathing
Posts by Spencer Boucher
Zone 2 indoor trainer sessions do double duty as SciPy video backlog sessions
4/ There must be existing terminology for this but the whole Rumsfeld matrix thing doesn't quite feel like the right fit and also, you know, Rumsfeld.
3/ I don't know how long this dichotomy holds up. Nothing in principle stops models from eventually asking all the questions you should be asking. But right now it feels important - might be time to let AI handle one kind of knowledge and experiment with the spaced repetition world for the other.
2/ If you don't know that collider bias is a thing, you'll run a meaningless regression and be confident in a wrong answer. You won't ask AI to help because from where you're standing there's no problem. That's knowledge-about, and AI can't give it to you unprompted.
1/ I'm noodling on the increasingly interesting dichotomy between knowledge that needs to be *known* and knowledge that needs to be *known about*. AI is rapidly commodifying the first kind, but the second kind is harder to commodify. Its value is in the awareness, not the content.
Also soliciting better names
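The collider-bias trap in 2/ can be seen in a tiny simulation (a hypothetical sketch, not from the post: here x and y are truly independent, and c is a collider caused by both):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)          # predictor
y = rng.normal(size=n)          # outcome, truly independent of x
c = x + y + rng.normal(size=n)  # collider: caused by both x and y

# Regressing y on x directly: slope is near zero, as it should be.
slope_naive = np.polyfit(x, y, 1)[0]

# "Controlling" for the collider, e.g. by selecting on high c,
# induces a spurious negative association between x and y.
mask = c > 1.0
slope_conditioned = np.polyfit(x[mask], y[mask], 1)[0]

print(slope_naive, slope_conditioned)
```

If you don't know colliders exist, the second regression looks like a real effect, and there's no prompt you'd think to write.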
Technically it's zero new lines of code, just:
```
# assuming `import pymc as pm` and `import cloudposterior as cp`,
# with `eight_schools` an existing pm.Model
with cp.wrap(eight_schools, remote=True, cache="disk"):
    idata = pm.sample(draws=2000, tune=1000, chains=4)
```
Built something that is genuinely useful to me, and might be to others in the world of #PyMC and #BayesianStatistics. @pymc.io
cloudposterior: Run your PyMC models on cloud VMs with one extra line of code. Get results cached, progress streamed to your notebook, & notifications pushed to your phone
For all of human history, writing something has been at least an order of magnitude more difficult than reading something. One of the very many paradigm shifts of generative AI is that the exact opposite is now true. I haven't heard many people grappling with the implications from this perspective.
Can you link to your tmux config? I agree that cmux seems a step down from a well configured tmux. I'm especially interested in the fzf integration
Yes, but the difference is that code is *procedural* and specs are *declarative*. The same way SQL opens up databases to less technical users, AI opens up everything else via specs. The difference between saying how to get something and saying what you want is qualitative.
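A toy contrast of the two styles (illustrative only; the data and the SQL string are invented for the example):

```python
orders = [{"user": "a", "total": 30}, {"user": "b", "total": 5}, {"user": "a", "total": 20}]

# Procedural: spell out *how* to compute it, step by step.
totals = {}
for o in orders:
    totals[o["user"]] = totals.get(o["user"], 0) + o["total"]
big_spenders = [u for u, t in totals.items() if t > 25]

# Declarative (SQL): state *what* you want, let the engine decide how.
query = """
SELECT user, SUM(total) AS t
FROM orders
GROUP BY user
HAVING SUM(total) > 25
"""

print(sorted(big_spenders))  # → ['a']
```

Specs sit at the declarative end: you describe the result, not the steps.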
The entire post focuses on implementation rather than justifying the idea conceptually. Would love to hear what statisticians like @statmodeling.bsky.social and @rmcelreath.bsky.social think. Presumably I'm missing something and Amazon knows what it's doing?
The point of an experiment is to close the backdoor paths; this opens them up again (it feels like actually yelling "hey you, YEAH YOU, wanna come in?"). If you assign deal-seekers to the "Free Shipping" variant because similar users converted more, you can never identify the causal effect of the variant.
AWS published a blog on "AI-powered A/B testing" where an LLM selects which variant to show users based on their device, referral source, behavioral profile, and similar-user clusters. At first blush I can't help but think this is a terribly misguided idea. Hopefully I'm missing something.
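A toy simulation of the problem above (all numbers invented): the variant's true causal effect is exactly zero, yet a policy that routes deal-seekers to it makes it look like a winner, while randomization correctly finds nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
deal_seeker = rng.random(n) < 0.3              # hidden user trait
base_rate = np.where(deal_seeker, 0.10, 0.02)  # deal-seekers convert more regardless

# True causal effect of the "Free Shipping" variant: zero for everyone.

# Randomized assignment: difference in means estimates the (null) effect.
assign_rand = rng.random(n) < 0.5
conv_rand = rng.random(n) < base_rate
ate_rand = conv_rand[assign_rand].mean() - conv_rand[~assign_rand].mean()

# "Smart" assignment: show the variant mostly to deal-seekers.
assign_smart = rng.random(n) < np.where(deal_seeker, 0.9, 0.1)
conv_smart = rng.random(n) < base_rate
ate_smart = conv_smart[assign_smart].mean() - conv_smart[~assign_smart].mean()

print(ate_rand, ate_smart)  # first ≈ 0, second substantially positive
```

The "smart" estimate is pure selection bias: the assignment variable is now a descendant of the confounder, so the comparison no longer identifies anything causal.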
There's a new race on to "skillify" all your most nuanced and well-guarded expertise and workflows. But agent "skills" are literally just the same textual explanations that humans have always needed. Why did we not take writing this stuff down seriously until it was for machines, not other HUMANS?