Posts by Michael "Shapes Dude" Betancourt
As always, default prior models are bad and you should feel bad for using them.
One of my favorite Bayesian modeling traps is that, in the standard parameterization of many families of probability density functions, the reasonable parameter configurations are often not between zero and some upper bound but rather between some lower bound and infinity.
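A minimal sketch of the trap, with illustrative numbers not from the original post: a default uniform(0, upper) prior on a scale parameter assigns zero mass above the upper bound, even though the reasonable configurations run from some lower bound all the way to infinity.

```python
import math

# Hypothetical example: a positive scale parameter sigma lives in (0, inf).
# A default uniform(0, 10) prior truncates everything above 10, no matter
# what the data say.
def uniform_logpdf(sigma, upper=10.0):
    return -math.log(upper) if 0.0 < sigma < upper else -math.inf

# A half-normal prior keeps the entire (0, inf) range reachable, only
# penalizing large values instead of forbidding them outright.
def half_normal_logpdf(sigma, scale=5.0):
    if sigma <= 0.0:
        return -math.inf
    return (0.5 * math.log(2.0 / math.pi) - math.log(scale)
            - 0.5 * (sigma / scale) ** 2)

print(uniform_logpdf(50.0))      # -inf: truncated away entirely
print(half_normal_logpdf(50.0))  # finite, just strongly penalized
```

The particular prior families and scales here are assumptions for demonstration, not a recommendation from the post.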
A little bit of Levy in my life
A little bit of Uhlenbeck by my side
A little bit of Ornstein is all I need
A little bit of Wiener is what I see
A little bit of Ito in the sun
A little bit of Stratonovich all night long
A little bit of Langevin I must fess
A little bit of noise makes it a process
Hello. Over on www.patreon.com/cw/betanalpha I recently released a chapter draft on Bayesian discrete choice modeling that spans over 200 pages of math and demonstrative exercises. This one is going to be exclusive for a bit!
I was mostly trying to match syllables; if one isn't being a mature adult then Norbert is pretty awkward in any of the lines. 🤣
A label reading “GLOBAL BLEND WITH A MINIMUM OF 0% CALIFORNIA EXTRA VIRGIN OLIVE OIL”.
With a lower bound this weak you would think that a theoretical statistician made this olive oil.
Fair point!
This is all over physics (the Boltzmann and Planck constants make the arguments to a lot of exponentials unitless) but I was implicitly referring to more statistical modeling applications where one doesn’t model log(outcome) but rather log(outcome / baseline outcome).
Friendly reminder that one cannot formally take the logarithm, or really apply any non-linear function, of a unitful quantity. What one can do, however, is take the logarithm of the _ratio_, i.e. log( x [units] / 1 [units] ).
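A minimal sketch of the point above, with illustrative numbers: only the ratio of two quantities carrying the same units is dimensionless, so that ratio is what actually goes into the logarithm.

```python
import math

# The baseline value here is an illustrative choice, not part of the
# original post; x and baseline are assumed to carry the same units,
# so the ratio is unitless before log() is applied.
def log_ratio(x, baseline):
    """Logarithm of the dimensionless ratio x / baseline."""
    return math.log(x / baseline)

# e.g. log( 1500 [ms] / 100 [ms] ) -- the units cancel inside the ratio
print(log_ratio(1500.0, 100.0))
```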
I didn’t exceed my rate limit, I became non-stationary.
̶p̶r̶o̶p̶h̶e̶c̶i̶e̶s̶ statistics documentation
Of course I catch the grammar misspelling milliseconds after hitting "Reply".
Red underline: incorrect spelling
Blue underline: incorrect grammer
Purple underline: incorrect statistical methodology
Reverend Frequent might be the most British sitcom name to ever British sitcom name.
Formally it’s lowercase. Not a judgment relative to big-B Bayesian but rather, as a colleague once noted, there was no Mr Frequent.
Not sure if case is tripping up the spell checker here.
Three figures, each quantifying the marginal posterior uncertainty in various one-dimensional variables using nested quantile intervals. The intervals are horizontal so that the marginal posterior visualizations can be stacked on top of each other vertically, allowing for more readable variable name labels.
Did you know that I maintain a suite of principled diagnostic and visualization tools for Markov chain Monte Carlo analysis in both R and Python?
github.com/betanalpha/m...
github.com/betanalpha/m...
I just updated the visualization tools to include a new plot variant.
Starting with Riemann and taking forever to get to Lebesgue? Story of my life.
Warning: www.ndbooks.com/book/on-the-... has nothing to do with measure theory. Blatantly false advertising.
The arc of history is long and has extremely poor convergence bounds
Sometimes.
People responding to statistical computation diagnostic warnings.
Applied asymptotics.