
Posts by Sam Power

almost-but-not-quite a GPU

2 days ago
Post image
2 days ago
Post image

very clever

2 days ago
Post image

can't help but read it as a bit mocking

2 days ago

bsky.app/profile/beta...

2 days ago

Learned that, per Nils Lid Hjort, this is known as the "Eccentric Part of the Noncentral Chi Square".

2 days ago
Post image Post image

mass spreading out over time

3 days ago

I hope you're proud of that 4th line!

3 days ago

related to bsky.app/profile/spmo...

3 days ago
Post image

little bit of Lévy

3 days ago

Oh that's kind of fun!

3 days ago

(Seems like the only reasonable answer should be "donut", but I do wonder whether there's another name in use for the filled-in surface)

3 days ago

Sphere is to ball as torus is to ...?

3 days ago

Thank you!

3 days ago

So in a world where the standard RNG primitive is exponentials rather than uniforms, it's potentially interesting; as-is, it's more of a cute optimisation sub-problem.

4 days ago

I think it's pretty low-overhead in terms of lines of code, which is nice. It's arguably limited in that you need the stream of exponentials a priori, which is easy enough to get from a stream of uniforms by taking logarithms, but harder to do "purely algebraically".

4 days ago
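For reference, the log-of-uniforms route mentioned above is the standard inverse-CDF trick; a minimal sketch:

```python
import math
import random

def exponential_stream(rng=random):
    """Yield Exp(1) draws by inverse-CDF: if U ~ Uniform(0,1),
    then -log(1 - U) ~ Exp(1).  (Using 1 - U keeps the argument
    to log in (0, 1], avoiding log(0).)"""
    while True:
        yield -math.log(1.0 - rng.random())
```

One transcendental call per draw, which is exactly the cost the "purely algebraic" question tries to avoid.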

As with lots of this stuff, one doesn't have to believe any model to derive these things or have them be useful (the formulas are 'just true'), but the filtering perspective is at least good at formalising the goals of the problem concretely, and opens things up to lots of simple generalisations.

4 days ago

Continuing to clean up lots of old miscellaneous notes on the laptop. I think this is a cute one: deriving a stable approach to computing means and variances in the streaming setting by a connection to filtering.

'Online Mean and Variance Computations by Kalman Filtering'
github.com/sampower88/P...

4 days ago
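The note derives the recursion via Kalman filtering; for comparison, the classic stable streaming update of the Welford type (my sketch, not necessarily the note's formulation) looks like:

```python
class RunningMoments:
    """Numerically stable streaming mean/variance (Welford-style).

    Maintains the running mean and M2 = sum_i (x_i - mean_n)^2,
    updated one observation at a time, avoiding the catastrophic
    cancellation of the naive sum-of-squares formula.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # gain 1/n, as in a Kalman-type update
        self.m2 += delta * (x - self.mean)   # uses both the old and new mean

    def variance(self):
        # sample (unbiased) variance; undefined for n < 2
        return self.m2 / (self.n - 1)
```

The `1/n` gain is the hook to the filtering story: the running mean is the posterior mean of a constant signal observed in noise.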

More miscellaneous rejection sampling stuff, based on the 'almost normal' distribution, as introduced in a pair of papers on exact sampling with the pretty-cute title of "Quantile Mechanics". Conclusion: it is indeed almost normal.

'The Almost-Normal Distribution'
github.com/sampower88/P...

4 days ago
Post image Post image Post image

Associated bits and pieces:

4 days ago

Another little note, this time on the task of transporting standard exponential random variables into (positive) standard Gaussians, in an 'algebraically simple' way. As hinted in an earlier post, it's quite possible to do efficiently!

"Exponential-to-Gaussian Sampling"
github.com/sampower88/P...

4 days ago
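For contrast with the 'algebraically simple' route of the note, the obvious non-algebraic baseline composes the Exp(1) CDF with the half-normal quantile (my illustration using `statistics.NormalDist`, not the note's method):

```python
import math
from statistics import NormalDist

_STD_NORMAL = NormalDist()  # standard normal, used for its quantile function

def exp_to_positive_gaussian(e):
    """Map an Exp(1) draw to a positive standard Gaussian draw.

    u = 1 - exp(-e) is Uniform(0,1); pushing it through the
    half-normal quantile Phi^{-1}((1 + u) / 2) gives |Z| with
    Z ~ N(0,1).  Two transcendental calls per draw -- the kind
    of cost an 'algebraic' transport would avoid.
    """
    u = -math.expm1(-e)                      # 1 - exp(-e), computed accurately
    return _STD_NORMAL.inv_cdf((1.0 + u) / 2.0)
```

Sanity check: e = log 2 gives u = 1/2, so the output is the third quartile of N(0,1), about 0.6745.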
Link preview: Fast simulation of truncated Gaussian distributions. "We consider the problem of simulating a Gaussian vector X, conditional on the fact that each component of X belongs to a finite interval [a_i, b_i], or a semi-finite interval [a_i, +infty). In the one-d..."

Related to this one - arxiv.org/abs/1201.6140 ?

4 days ago

Going into it, I was wondering whether it was "just a good, brute-force implementation", but I'm now more confident that it's genuinely interesting, and that one can say some neat things about it. (Though whether there are new things to say about it in 2026, I can't be certain.)

4 days ago
Post image

An afternoon's work: sitting with the Ziggurat method (the go-to method for simulating 'standard' random variables at scale) and making sense of things:

"Rejection Sampling with the Ziggurat Method"
github.com/sampower88/P...

4 days ago
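As a toy version of the construction (my sketch, for the Exp(1) density rather than the Gaussian, since its tail can be sampled exactly by memorylessness): build equal-area horizontal layers by bisecting for the base point r, then sample with the usual box / wedge / tail cases.

```python
import math
import random

def make_exp_ziggurat(n_layers=64):
    """Toy ziggurat sampler for the Exp(1) density f(x) = exp(-x).

    Covers the area under f with n_layers equal-area slabs; layer 0
    is the base box on [0, r] plus the tail beyond r, which for the
    exponential is sampled exactly via memorylessness.
    """

    def top_y(r):
        # y-coordinate reached after stacking all layers on base point r
        area = (r + 1.0) * math.exp(-r)          # base box + tail mass
        x, y = r, math.exp(-r)
        for _ in range(n_layers - 1):
            y += area / x
            if y >= 1.0:
                return y                          # overshot f(0) = 1
            x = -math.log(y)
        return y

    # bisect for r such that the top layer closes exactly at y = 1
    lo, hi = 1e-3, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if top_y(mid) > 1.0:
            lo = mid                              # r too small: layers overshoot
        else:
            hi = mid
    r = 0.5 * (lo + hi)

    area = (r + 1.0) * math.exp(-r)
    xs, ys = [r], [math.exp(-r)]                  # layer right-edges and heights
    for i in range(1, n_layers):
        y = min(ys[i - 1] + area / xs[i - 1], 1.0)
        xs.append(-math.log(y) if y < 1.0 else 0.0)
        ys.append(y)

    def sample(rng=random):
        while True:
            i = rng.randrange(n_layers)
            if i == 0:
                x = rng.random() * (area / ys[0])             # base width = r + 1
                if x < xs[0]:
                    return x                                  # base box: fully under f
                return xs[0] - math.log(1.0 - rng.random())   # exact tail draw
            x = rng.random() * xs[i - 1]
            if x < xs[i]:
                return x                                      # layer core: fully under f
            # wedge: accept iff the uniform point lies under the curve
            if ys[i - 1] + rng.random() * (ys[i] - ys[i - 1]) < math.exp(-x):
                return x

    return sample
```

Most draws hit the `x < xs[i]` fast path with one uniform and one compare; the wedge and tail cases carry the transcendental cost, which is the whole point of the method.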

Very nice, Matti. I remember thinking about the Semi-Separable HMC idea a lot during my PhD (with some idea to extend it in the presence of other graphical model-type structure) and always felt it was a promising approach to the RM setup.

1 week ago

Moreover, the deficiencies of Chebyshev in this context have almost nothing to do with finite variance being a weak assumption, and everything to do with under-using the structure of independence.

1 week ago

One consequence of digesting the Catoni estimator is that if you're serious about forming good confidence sets for means on the basis of independent random variables of finite variance, then Chebyshev's inequality is unacceptably bad (if tractable).

1 week ago
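To put a number on "unacceptably bad": for the mean of n iid variables with variance sigma^2, Chebyshev gives a confidence radius of sigma/sqrt(n*delta), while estimators like Catoni's or median-of-means achieve, up to constants, the sub-Gaussian-style sigma*sqrt(2*log(2/delta)/n). A quick illustration (my numbers, generic constants):

```python
import math

def chebyshev_radius(sigma, n, delta):
    """Chebyshev on the sample mean:
    P(|mean - mu| >= t) <= sigma^2 / (n t^2)  =>  t = sigma / sqrt(n delta)."""
    return sigma / math.sqrt(n * delta)

def subgaussian_radius(sigma, n, delta):
    """Sub-Gaussian-style rate, achieved up to constants by Catoni /
    median-of-means under only a finite-variance assumption."""
    return sigma * math.sqrt(2.0 * math.log(2.0 / delta) / n)

# The gap grows like 1/sqrt(delta) vs sqrt(log(1/delta)) as delta shrinks.
for delta in (1e-2, 1e-4, 1e-6):
    c = chebyshev_radius(1.0, 10_000, delta)
    s = subgaussian_radius(1.0, 10_000, delta)
    print(f"delta={delta:g}: Chebyshev {c:.4f} vs sub-Gaussian {s:.4f}")
```

At delta = 1e-6 the Chebyshev radius is roughly two orders of magnitude wider, despite both bounds assuming only finite variance.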

bsky.app/profile/spmo...

1 week ago

A production of @adriencorenflos.bsky.social!

1 week ago

Thank you!

1 week ago