
Posts by Philip Bontrager

This thread is a bit long, but I thought it’d be interesting to share just one of the mundane parts of the deep learning stack that break and have to be rethought as models and training scale.

10 months ago 1 0 0 0

To save, you need to let each GPU save its own partial safetensors file, because communication is slow, and then line up the memory blocks and merge them into one file.

10 months ago 1 0 1 0
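The merge step above can be sketched in a few lines. This is a hypothetical helper, not DCP's actual consolidation code: it assumes each rank saved a slice of every parameter along dim 0 together with its row offset, and it places each block into a preallocated full tensor.

```python
import numpy as np

def merge_shards(shards, full_shapes):
    """Merge per-rank partial tensors back into full parameters.

    shards: one dict per rank, mapping param name -> (row_offset, block),
            where block is the rank's slice along dim 0 (assumed layout).
    full_shapes: param name -> full tensor shape.
    """
    merged = {}
    for name, shape in full_shapes.items():
        out = np.empty(shape, dtype=np.float32)
        for rank_shards in shards:
            offset, block = rank_shards[name]
            # Copy this rank's rows into their position in the full tensor.
            out[offset:offset + block.shape[0]] = block
        merged[name] = out
    return merged
```

In practice the blocks are written as byte ranges into a single safetensors file rather than materialized in RAM, but the bookkeeping is the same: each block knows where it lands in the full parameter.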

Safetensors files are great for hosting checkpoints: they save full, unsharded parameters and make no assumptions about whether your model is distributed. To work natively with safetensors, DCP needs to tell each GPU the exact slice of data to read without loading the full parameter.

10 months ago 0 0 1 0
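Reading just a slice works because a safetensors file starts with a JSON header that records each tensor's shape and `data_offsets` (byte range within the data section), so a rank can compute the exact byte range for its rows and issue one small read. A minimal sketch of that offset math, assuming a 2-D row-major tensor and fixed element size (this helper is illustrative, not the safetensors API):

```python
def slice_byte_range(header, name, row_start, row_end, itemsize=4):
    """Byte range (within the file's data section) covering rows
    [row_start, row_end) of a tensor, so a rank reads only its slice.

    header: parsed safetensors JSON header,
            name -> {"shape": [rows, cols], "data_offsets": [begin, end]}.
    Assumes a 2-D row-major tensor with fixed itemsize (e.g. 4 for fp32).
    """
    meta = header[name]
    rows, cols = meta["shape"]
    begin, _ = meta["data_offsets"]
    row_bytes = cols * itemsize
    # Rows are contiguous in row-major order, so a row range is one
    # contiguous byte range starting at the tensor's data offset.
    return begin + row_start * row_bytes, begin + row_end * row_bytes
```

The absolute file position additionally adds the 8-byte header-length prefix and the header's own length; the point is that no rank ever has to materialize the full parameter to get its piece.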

On startup, DCP has to map your old GPU layout to your new one so each GPU knows which files to read from and reads only the data it needs. But there's one last problem: when you're ready to take your model to another tool (serving, eval, etc.), it expects safetensors checkpoints.

10 months ago 0 0 1 0
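That layout remapping is essentially an interval-overlap computation: intersect the row range a new rank owns with the row ranges the old ranks saved, and turn each overlap into a read plan. A small sketch under those assumptions (hypothetical function and file names, not DCP's real planner):

```python
def plan_reads(new_start, new_end, old_shards):
    """For a new rank owning rows [new_start, new_end), decide which old
    checkpoint files to read and which rows inside each.

    old_shards: list of (filename, start, end) row ranges from the old layout.
    Returns (filename, src_start, src_end, dst_offset) tuples, where src_*
    index into the old shard and dst_offset into the new rank's local tensor.
    """
    plan = []
    for fname, s, e in old_shards:
        lo, hi = max(new_start, s), min(new_end, e)
        if lo < hi:  # the old shard overlaps this rank's new range
            plan.append((fname, lo - s, hi - s, lo - new_start))
    return plan
```

For example, resharding from 2 old ranks (rows 0-4 and 4-8) to a new rank that owns rows 2-6 yields one read from each old file, and nothing else is loaded.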

Distributed Checkpoint (DCP) solves this by having every GPU save its own checkpoint asynchronously, so you can save a checkpoint in less than a second. But this creates a new problem: the next time you want to use the model, you might have a different number of GPUs.

10 months ago 0 0 1 0

What goes into saving checkpoints is not something many people think about, but as models get bigger it becomes a challenge. The biggest open models now have checkpoints over 700 GB that can take tens of minutes every time you want to consolidate them into a single checkpoint.

pytorch.org/blog/hugging...

10 months ago 6 2 1 0

I’m enjoying it while it lasts before everything fully homogenizes again

1 year ago 1 0 0 0

We've built a simulated driving agent that we trained on 1.6 billion km of driving with no human data.
It is SOTA on every planning benchmark we tried.
In self-play, it goes 20 years between collisions.

1 year ago 298 55 22 8
Braess's paradox - Wikipedia

Aren’t these two paradoxes functionally the same? en.m.wikipedia.org/wiki/Braess%...

1 year ago 4 0 0 0
x.com

Original post here: x.com/jjitsev/stat...

1 year ago 5 0 0 0

In the Alice in Wonderland (github.com/LAION-AI/AIW) reasoning and generalization benchmark, DeepSeek R1 appears to perform much more like o1-mini than o1-preview. (Plot from laion-ai)

1 year ago 4 0 2 0

What are the best benchmarks for reasoning models?

1 year ago 1 0 0 0

Can we just study LLM activations/behavior because it’s interesting and it can tell us things about language and AI without imbuing artificial importance or meaning on top of it?

1 year ago 2 0 0 0

Haha, that wasn’t lost on me. Facebook’s still going strong, but it’s a different site and users from when I was in HS.

1 year ago 2 0 0 0

If you can choose who follows you, that sounds more like “friends” from the old Facebook days.

1 year ago 2 0 1 0

I found out about Warp because I was on jury duty with one of their devs 😂 It’s been great compared to the Mac’s default terminal.

1 year ago 4 0 0 0

How do you add these?

1 year ago 2 0 1 0

Maybe let’s go the other direction and include blog posts in CVs too.

1 year ago 2 0 1 0

That would imply that we solved self-driving (image recognition) and search (language understanding), among other things.

1 year ago 2 0 0 0

This could be a good case for mixed models. The model parsing the text could likely be smaller, or be fairly cheap like DeepSeek.

1 year ago 1 0 1 0

Thankfully in a small startup you only have to sell an idea to a couple of people and you can get going.

1 year ago 0 0 0 0

One startup I joined had a model getting 95% on benchmarks but performing terribly in practice. We spent the first 6 months developing new benchmarks instead of a new model.

1 year ago 1 0 1 0

I always set out to propose a new idea and end up having to propose a new benchmark instead.

1 year ago 4 0 1 0

What if humanity knows X and wants to understand Z. If a computer can give us Y so that we can understand Z, that would be useful for science. Though I’d say that we still didn’t know Y ourselves yet.

1 year ago 0 0 0 0

Imagine if under the hood o1 is just calling “write better code” over and over again 😂

1 year ago 5 0 0 0

I posted about this recently. Benchmarks show what models can’t do, not what they can do.

1 year ago 1 0 0 0

Plagiarize other people’s research

1 year ago 1 0 0 1

Imagine being an editor for an LLM, so much work with low confidence that you’ll have something interesting in the end.

1 year ago 3 0 0 0

I remember a lot of focus being on the loss function. My impression was that we thought we had models that would work well if only we had a good perceptual loss to train them with. In comes the GAN

1 year ago 0 0 0 0

Base models are closer, but they’re still affected by the company’s decisions on which data to filter out and more indirectly on what data is given free hosting on the internet.

1 year ago 0 0 0 0