
Posts by Karl Krauth

Dr. Margaret Oakley Dayhoff

I took biochem in 2001, and for nearly 20 years read amino acid sequences daily… and I never knew Dayhoff created the one-letter codes, or the logic behind choices like Q for glutamine, until last Friday (h/t Mike Janech). This is another big Dayhoff moment for me. She was incredible!

#proteomics #bioinformatics

1 year ago
"A Gentle Introduction to Graph Neural Networks": What components are needed for building learning algorithms that leverage the structure and properties of graphs?

Always so impressed by how good this intro to graph neural nets is. They did such a good job of broadly covering the field without diving into a million papers. I love that they build intuition for why designing GNN architectures is tricky; I wish more ML posts did that.

1 year ago

You could run nvidia-smi in the terminal while your model is running to see whether your VRAM is full.
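If you'd rather poll this from a script, a minimal sketch is below. It only assumes the standard nvidia-smi query flags (--query-gpu, --format=csv) and falls back to None when the tool isn't installed; the function name is my own.

```python
import shutil
import subprocess

def gpu_memory():
    """Return (used_MiB, total_MiB) for GPU 0 via nvidia-smi, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True, timeout=5,
        ).stdout
        # One line per GPU, e.g. "1234, 24564"; take the first device.
        used, total = (int(x) for x in out.splitlines()[0].split(","))
        return used, total
    except (subprocess.SubprocessError, ValueError):
        return None
```

Calling gpu_memory() in a loop while training gives you the same picture as watching nvidia-smi by hand.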

1 year ago

Ah, I wasn't saying that your PCIe bandwidth would be abnormally low, but rather that you might not be able to fit everything into the RTX 4090's VRAM, so you'd have to make a lot of transfers between CPU RAM and VRAM, which is slow and would leave the GPU waiting for data most of the time. :)

1 year ago

This feels like it's just due to the RTX 4090 being PCIe bandwidth limited for some reason. Peak fp16 compute for an M4 Max should be 34 TFLOPS while the RTX is 82 TFLOPS, and VRAM bandwidth is about twice as fast on the RTX.

What happens if you run the model with a smaller-resolution image or fp8?
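The spill-to-CPU-RAM hypothesis can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes PCIe 4.0 x16 at roughly 32 GB/s one-way and the 4090's 24 GB of VRAM; the working-set size is made up for illustration, not a measurement.

```python
# Rough cost of streaming spilled data over PCIe each forward pass.
PCIE4_X16_GBPS = 32.0   # assumed one-way PCIe 4.0 x16 bandwidth, GB/s
VRAM_GB = 24.0          # RTX 4090 VRAM capacity

model_gb = 30.0         # hypothetical working set that doesn't fit
spilled_gb = max(0.0, model_gb - VRAM_GB)          # must come from CPU RAM
transfer_ms = spilled_gb / PCIE4_X16_GBPS * 1000   # best-case, per pass

print(f"{spilled_gb:.0f} GB spilled -> ~{transfer_ms:.0f} ms of PCIe traffic per pass")
```

Even a few GB of spill adds hundreds of milliseconds of pure transfer time per pass, which is easily enough to leave the GPU idle most of the time.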

1 year ago

I'm not restricting it to the fully de novo case; even an example where a model makes a few mutations in a wild-type sequence is fine.
Totally agree that all the work showing some activity in de novo sequences is super impressive.

1 year ago

I intentionally didn't want it to be too high a bar. A single substitution is totally fine as long as the model isn't constrained to mutate a few clearly impactful residues in the active site, for example.

1 year ago

I haven't been able to find a paper that:
1. uses ML to propose an enzyme sequence
2. measures the kcat of the designed enzyme and of a highly similar sequence in the training set
3. shows that the designed enzyme is faster than every enzyme in the training set with a known rate that catalyzes the same reaction

1 year ago

Can't be ruled out 100%, but some designed sequences are going to be more plausibly out-of-distribution than others.

1 year ago

Depends on how big your dataset is. I'm fine with a training set that includes all naturally occurring sequences where most don't have associated kcats, for example. I just want to avoid cases where you can pick a wild-type protein from the same family and get an improved enzyme.

1 year ago

Has a machine learning model ever successfully designed an enzyme that's 5x faster than the sequences in its training set?
Specifically, I'm looking for an experimentally verified example where the model is the decision maker rather than a human assistant.
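To make the bar concrete, here's a toy formalization of the criterion; every kcat value is made up for illustration (units s^-1), not taken from any real study.

```python
# Hypothetical turnover numbers for enzymes in the training set.
train_kcats = {"wild_type_A": 120.0, "wild_type_B": 340.0}
designed_kcat = 2000.0  # hypothetical measured kcat of the ML-designed enzyme

# The bar: designed enzyme must beat the best training-set enzyme by 5x.
speedup = designed_kcat / max(train_kcats.values())
meets_bar = speedup >= 5.0
print(f"{speedup:.2f}x vs best training-set enzyme; 5x bar met: {meets_bar}")
```

The point of comparing against the maximum over the training set is to rule out cases where the "improvement" is just retrieving an already-fast wild-type sequence.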

1 year ago

Would love to be added to this. :)

1 year ago

I'd love to be added. I create microfluidic devices and large-scale datasets, which I use to train protein language models.

1 year ago