
Posts by David Yan

Undergrad RL at Princeton - prereqs are intro ML, probability, linalg.
ben-eysenbach.github.io/intro-rl

3 months ago

Don't think Safari handles PDF figures well - usually fixed by switching to .png, in my experience.

4 months ago

Anything by qntm! He has several full-length novels and some free short stories to read on his website:

qntm.org/vhitaos

6 months ago

Another good example from monocular depth: DepthAnythingV2 uses a teacher supervised only on synthetic data (600k images), and students are distilled from its predictions on web images (62M).
Real-world GT is noisy, so fitting to limited but perfect synthetic data is better for teacher accuracy.
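The teacher/student pipeline above can be sketched as a toy linear-regression example (this is an illustrative analogy, not the actual DepthAnythingV2 training code; all names and the linear model are my own stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Synthetic" set: small but with perfect ground-truth labels
# (stand-in for the 600k rendered images with exact depth).
X_syn = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y_syn = X_syn @ w_true  # noise-free labels

# Step 1: fit the teacher on the clean synthetic data only.
w_teacher, *_ = np.linalg.lstsq(X_syn, y_syn, rcond=None)

# "Web" pool: large but unlabeled (stand-in for the 62M images).
X_web = rng.normal(size=(5000, 5))

# Step 2: the teacher pseudo-labels the unlabeled pool...
y_pseudo = X_web @ w_teacher

# Step 3: ...and the student is distilled from those predictions.
w_student, *_ = np.linalg.lstsq(X_web, y_pseudo, rcond=None)
```

The point of the toy: because the synthetic labels are noise-free, the teacher fit is exact, and the student inherits that accuracy from pseudo-labels at web scale instead of fitting noisy real-world GT directly.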

9 months ago

Threw it at o3 and after thinking for 12 min (!!) it gave “O Canada” and A major. You can take a look at the full chain of thought here (chatgpt.com/share/6871b3...). Some highlights:

9 months ago

Maybe I'm not familiar enough with tokenizers, but is this different from just using a very small bottleneck dimensionality in SiamMAE? There seems to be something special about using precisely one token (specifically the [CLS] token?), but it's not immediately obvious why.

9 months ago