
Posts by Moritz Schauer

Yes, that's the contrast of interest

The relevant observations are, imo: 1. MH is a causal inference problem first and a probability problem second, and 2. the association usually called collider bias is information you can make use of for the decision, rather than being, as it normally is, a nuisance

5 months ago 4 1 1 0

I have a history of not getting filter coffee in a coffee place and not getting gyros in a Greek restaurant until acing the local speech patterns… Don’t people have priors?? PS: ”Chyros? Really, Dutch friends, chyros?”

3 weeks ago 1 0 0 0

Dust behaves as if someone put a minus sign in front of the Laplacian: dust ends up in the corners instead of spread out.

1 month ago 4 0 0 0

The key to a successful career is dying wealthy

1 month ago 5 0 0 0

Witnessing the birth of the marginal differential product theory about hiring your enemies

1 month ago 2 0 0 0

I like that people genuinely think in intervals. Maybe there is hope of explaining the confidence interval

1 month ago 1 0 0 0
Post image

This is basically my villain origin story.

"How old are you?" (unique responses)

1 month ago 60 10 11 0
GitHub - mschauer/Causality-Lecture: These slides are from a guest lecture on causal discovery. They show how independence patterns, Gaussian SEMs, and interventions constrain causal structure. No prior causal inference background assumed.

My slides are of course not a textbook, but I link them here because they are opinionated: perhaps you can get away without GES or PC and get there by compute and a simpler hill-climbing algorithm maximising the likelihood / searching for a MAP github.com/mschauer/Cau...

2 months ago 2 0 1 0
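A minimal sketch of the idea in the post, not the algorithm from the slides: greedy single-edge hill climbing over 3-node DAGs, scoring linear Gaussian SEM fits by BIC. The data-generating chain X → Y → Z and all coefficients are made up for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical ground truth: X -> Y -> Z, linear Gaussian SEM
X = rng.normal(size=n)
Y = 2.0 * X + rng.normal(size=n)
Z = -1.5 * Y + rng.normal(size=n)
data = np.column_stack([X, Y, Z])

def bic_score(adj, data):
    """Gaussian BIC of a DAG given as a parents -> child adjacency matrix."""
    n, d = data.shape
    score = 0.0
    for j in range(d):
        pa = np.flatnonzero(adj[:, j])
        resid = data[:, j]
        if pa.size:
            A = data[:, pa]
            beta, *_ = np.linalg.lstsq(A, resid, rcond=None)
            resid = resid - A @ beta
        var = resid @ resid / n
        score += -0.5 * n * (np.log(2 * np.pi * var) + 1)
        score -= 0.5 * np.log(n) * (pa.size + 1)  # BIC complexity penalty
    return score

def is_dag(adj):
    # Acyclic iff the adjacency matrix is nilpotent: high powers vanish
    m = adj.astype(float)
    for _ in range(len(m)):
        m = m @ adj
    return not m.any()

def hill_climb(data):
    d = data.shape[1]
    adj = np.zeros((d, d), dtype=int)
    best = bic_score(adj, data)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.permutations(range(d), 2):
            cand = adj.copy()
            cand[i, j] ^= 1  # toggle the edge i -> j
            if not is_dag(cand):
                continue
            s = bic_score(cand, data)
            if s > best:
                adj, best, improved = cand, s, True
    return adj

adj = hill_climb(data)
print(adj)  # adjacency among X (0), Y (1), Z (2)
```

With enough data this recovers the chain's skeleton; the orientation is only identified up to Markov equivalence, which is exactly why the likelihood alone cannot distinguish X → Y → Z from X ← Y ← Z.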
Nina Hagen - Du hast den Farbfilm vergessen (Subtitulado) (YouTube video by PakoChile)

For me, for a song to click, there must be specific harmonic patterns present; you have them in m.youtube.com/watch?v=EKe9... for example, but also in the notorious C&A song m.youtube.com/watch?v=UFDn...

2 months ago 2 0 0 0

Well, me probably before figuring it out, lol

2 months ago 2 0 1 0
Nirvana - The Man Who Sold The World (MTV Unplugged) (YouTube video by NirvanaVEVO)

At some point I had to notice how many songs I like are David Bowie covers, and I suspect David Bowie is too genius for me. This one for example I can understand through Nirvana, who are probably also geniuses, though www.youtube.com/watch?v=freg...

2 months ago 5 0 2 0

3.4 m²? I wouldn’t have painted the underside

2 months ago 0 0 0 0

It’s not only easy to spot. It’s hard to unsee.

2 months ago 1 0 1 0

p=0.048, but in an enlightened way

2 months ago 1 0 1 0

If your niche is small enough, every post is a viral post.

2 months ago 1 0 0 0
Différance - Wikipedia

I think it’s not supposed to be spelled correctly en.wikipedia.org/wiki/Diff%C3...

2 months ago 1 0 0 0
Post image

wikipedia turns 25 today! the last unenshittified major website! backbone of online info! triumph of humanity! powered by urge of unpaid randos to correct each other! somehow mostly reliable! "good thing wikipedia works in practice, because it sure doesn't work in theory" - old wiki adage

3 months ago 12513 4013 95 304

Yeah, the Kalman gain K is the regression coefficient, so the conditional mean is the old mean plus the innovation scaled by K. If you write K (H Σ⁻ Hᵀ + Σ_ε) = Σ⁻ Hᵀ you see how it is aligned with the normal equations A Σ₂₂ = Σ₁₂ from above.

3 months ago 1 0 0 0
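A small numeric check of this identity, with made-up observation matrix H, prior covariance P (the Σ⁻ above), and noise covariance R (the Σ_ε above):

```python
import numpy as np

# Prior x ~ N(mu, P), observation y = H x + eps, eps ~ N(0, R)
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # prior covariance Σ⁻
H = np.array([[1.0, 0.0]])               # observe the first coordinate
R = np.array([[0.25]])                   # noise covariance Σ_ε
S = H @ P @ H.T + R                      # innovation covariance
K = P @ H.T @ np.linalg.inv(S)           # Kalman gain

# K solves the normal equations K S = P Hᵀ,
# i.e. K is the regression coefficient of x on y
assert np.allclose(K @ S, P @ H.T)

mu = np.array([0.0, 0.0])
y = np.array([1.2])
post_mean = mu + K @ (y - H @ mu)   # old mean plus innovation scaled by K
print(post_mean)
```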

In general, residuals of linear regression are only uncorrelated with the predictors, not independent, so their conditional mean need not vanish. Gaussianity upgrades uncorrelatedness to independence; once this happens, the linear predictor becomes the mean of the conditional distribution.

3 months ago 1 0 0 0
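A toy demonstration of the distinction, assuming a made-up nonlinear model Y = X²: the least-squares residuals are uncorrelated with X by construction, yet their conditional mean given X does not vanish.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.uniform(-1, 1, n)
y = x**2                        # nonlinear, non-Gaussian relationship
# Least-squares line: slope ≈ 0 by symmetry, intercept ≈ E[x²] = 1/3
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Residuals are uncorrelated with the predictor (OLS orthogonality) ...
print(np.corrcoef(resid, x)[0, 1])
# ... yet their conditional mean given x does not vanish:
m = resid[np.abs(x) < 0.1].mean()    # E[resid | x ≈ 0] ≈ -1/3
print(m)
```

In the Gaussian case this cannot happen: uncorrelatedness implies independence, so the conditional mean of the residual is zero everywhere.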
Preview
Deriving the conditional distributions of a multivariate normal distribution: We have a multivariate normal vector Y ∼ N(μ, Σ). Consider partitioning μ and Y into …

Have a look here: stats.stackexchange.com/a/30600

The trick: choose A by the normal equation A Σ₂₂ = Σ₁₂ and see that X₁ − A X₂ is uncorrelated with X₂, and by Gaussianity also independent. So E[X₁∣X₂] = A X₂. Even works in the singular case.

3 months ago 1 0 1 0
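A numeric check of the trick, with made-up covariance blocks: solve the normal equation for A, verify the decorrelation, and sanity-check E[X₁ | X₂] = A X₂ by Monte Carlo.

```python
import numpy as np

# Made-up joint covariance of (X1, X2), partitioned as in the linked answer
S11 = np.array([[2.0]])
S12 = np.array([[0.8]])
S22 = np.array([[1.0]])

# Normal equations: A Σ22 = Σ12
A = np.linalg.solve(S22.T, S12.T).T

# Check Cov(X1 - A X2, X2) = Σ12 - A Σ22 = 0
assert np.allclose(S12 - A @ S22, 0)

# Monte Carlo sanity check of E[X1 | X2] = A X2 near X2 = 0.5
rng = np.random.default_rng(3)
Sigma = np.block([[S11, S12], [S12.T, S22]])
Z = rng.multivariate_normal([0.0, 0.0], Sigma, size=200_000)
x1, x2 = Z[:, 0], Z[:, 1]
sel = np.abs(x2 - 0.5) < 0.05
print(x1[sel].mean(), A[0, 0] * 0.5)   # both ≈ 0.4
```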

At a technical university the steps of Pearl’s ladder are called stochastics, stochastic control and optimal transport

3 months ago 2 0 0 0

Yeah, more oil and less integrals

3 months ago 1 0 0 0

Mostly echoing your statement bsky.app/profile/p-hu... The do-operator formalizes how a system reacts to interventions, so certain statements about interventions become propositions in a calculus, but you still have to argue how this maps to the system you want to describe.

3 months ago 1 0 0 0

Pearl is maybe also dismissive of this meta level, whereas people do make clean meta-level arguments for RCTs etc, in fact it is unavoidable, cf @p-hunermund.com

3 months ago 1 0 1 0

In classical approaches, correctness of causal claims is argued at the meta level, by appealing to design or understanding. In the do-calculus, that burden is shifted into a mathematical formalism.

3 months ago 2 0 0 1

Causal inference is often hidden in plain sight. In a randomized clinical trial, the setup is such that interventional and conditional distributions coincide.

That is, E(X | do(T = t)) = E(X | T = t).

3 months ago 5 0 2 0
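A simulation sketch of the point, with a made-up unobserved confounder u and treatment effect 1: in the observational regime the naive conditional difference is biased, while under coin-flip randomization it matches E(X | do(T = 1)) − E(X | do(T = 0)).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
u = rng.normal(size=n)                      # unobserved confounder

# Observational regime: treatment choice depends on u
t_obs = (u + rng.normal(size=n) > 0).astype(float)
x_obs = 1.0 * t_obs + 2.0 * u + rng.normal(size=n)

# Randomized trial: treatment assigned by coin flip, independent of u
t_rct = rng.integers(0, 2, n).astype(float)
x_rct = 1.0 * t_rct + 2.0 * u + rng.normal(size=n)

# True interventional effect E[X | do(T=1)] - E[X | do(T=0)] = 1
effect_obs = x_obs[t_obs == 1].mean() - x_obs[t_obs == 0].mean()
effect_rct = x_rct[t_rct == 1].mean() - x_rct[t_rct == 0].mean()
print(effect_obs, effect_rct)   # confounded estimate vs ≈ 1
```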

Love it. Adding Sid Meier's Beta Centauri

3 months ago 1 0 0 0
The ISBA Bulletin

REMEMBERING HARRY VAN ZANTEN

Botond Szabó and Aad van der Vaart in the ISBA Bulletin.

3 months ago 3 1 0 0

(and point null is the worst case for an error in the directional statements)

3 months ago 1 0 0 0

By the way, I am quite okay with users drawing directional conclusions after rejecting a two-sided null hypothesis, because the error rate under the point null is the same as that of the original test.

3 months ago 1 0 1 0
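A simulation sketch of this claim, assuming a made-up z-test setup with known unit variance: under the point null, every rejection comes with a directional claim, and none of those claims can be correct, so the directional error rate equals the rejection rate α of the original two-sided test.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, alpha = 200_000, 50, 0.05
z_crit = 1.959963984540054          # two-sided 5% critical value of a z-test

# Simulate m studies under the point null: mean 0, known unit variance
means = rng.normal(size=(m, n)).mean(axis=1)
z = means * np.sqrt(n)

reject = np.abs(z) > z_crit
# Each rejection yields a directional claim (the sign of z);
# under the point null every such claim counts as an error
directional_error_rate = reject.mean()
print(directional_error_rate)       # ≈ alpha
```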