I took biochem in 2001, and for nearly 20 years read amino acid sequences daily… and I never knew Dayhoff created the single-letter codes, or even the logic behind choices like Q, until last Friday (h/t Mike Janech). Also, this is another big Dayhoff moment for me. She was incredible!
#proteomics #bioinformatics
Posts by Karl Krauth
Always so impressed by how good this intro to graph neural nets is. They did such a good job of broadly covering the field without diving into a million papers. I love that they build intuition for why designing GNN architectures is tricky; wish more ML posts did that.
You could run nvidia-smi in the terminal while your model is running to see if your VRAM is full.
Ah, I wasn't saying that your PCIe bandwidth would be abnormally low, but rather that you might not be able to fit everything into the RTX 4090's VRAM, so you'd have to make a lot of transfers between CPU RAM and VRAM, which is slow and would leave the GPU waiting for data most of the time. :)
This feels like it's just due to the RTX 4090 being PCIe bandwidth limited for some reason. Peak FP16 compute for an M4 Max should be about 34 TFLOPS versus 82 TFLOPS for the RTX 4090, and the RTX's VRAM bandwidth is roughly twice as fast.
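A quick back-of-the-envelope on the spec numbers quoted above (34 and 82 TFLOPS FP16, and the rough "twice as fast" bandwidth figure are taken from the post; this is a sanity check, not a benchmark):

```python
# Spec numbers from the post (peak FP16 throughput).
m4_max_tflops = 34.0    # Apple M4 Max, peak FP16
rtx4090_tflops = 82.0   # NVIDIA RTX 4090, peak FP16
bandwidth_ratio = 2.0   # RTX VRAM bandwidth vs M4 Max, rough figure from the post

compute_ratio = rtx4090_tflops / m4_max_tflops
print(f"RTX 4090 vs M4 Max compute ratio:   {compute_ratio:.1f}x")
print(f"RTX 4090 vs M4 Max bandwidth ratio: {bandwidth_ratio:.1f}x")

# On paper the RTX should be ~2.4x faster on compute and ~2x on memory
# bandwidth, so if it benchmarks slower the bottleneck is likely elsewhere,
# e.g. PCIe transfers when the model spills out of VRAM.
```

So neither compute nor VRAM bandwidth explains the RTX losing; that's what points at the PCIe/host-transfer path.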
What happens if you run the model with a smaller-resolution image, or in FP8?
Not restricting it to the fully de novo case, even an example where a model makes a few mutations in a wild-type sequence is fine.
Totally agree that all the work showing some activity in de novo sequences is super impressive.
Intentionally didn't want it to be too high a bar. A single substitution is totally fine, as long as the model isn't constrained to mutating a few clearly impactful residues in the active site, for example.
I haven't been able to find a paper that:
1. uses ML to propose an enzyme sequence
2. measures the kcat of the designed enzyme and of highly similar sequences in the training set
3. shows that the designed enzyme is faster than every enzyme in the training set that catalyzes the same reaction and has a known rate
Can't be ruled out 100%, but some designed sequences are going to be more plausibly out-of-distribution than others.
Depends on how big your dataset is. I'm fine with a training set that includes all naturally occurring sequences, where most don't have associated kcats, for example. I just want to avoid cases where you can simply pick a wild-type protein from the same family and get an improved enzyme.
Has a machine learning model ever successfully designed an enzyme that's 5x faster than the sequences in its training set?
Specifically looking for an experimentally verified example where the model is the decision maker rather than a human assistant.
Would love to be added to this. :)
I'd love to be added. I create microfluidic devices and large-scale datasets which I use to train protein language models.