The link I posted is NovoGlyco and the other is SugarBase? I haven’t had a chance to do more than glance at ’em!
Posts by Hiren Joshi
Did you post this one already? www.biorxiv.org/content/10.6...
It’s a cool technique. I wonder if you took a meta approach to this (mix of microbes), would you ever hit saturation for identified nucleotide sugars? For sure in animals it saturates? But is my hunch right that prokaryotic space is infinite (but glycan complexity is low)?
It all makes chemical sense, and one might say: well, why not? Given that the amino-acid residues in a protein can form highly specific H-bond arrangements - that's often what they use for their selective binding and catalysis - why shouldn't those be arranged to mimic a nucleotide base? /14
That’s a cool question. How little protein material (or few PSMs) will you need based on current mass spec sensitivity to uniquely identify a kingdom, class, species or individual? Does this vary depending on diversity in each grouping? How does this compare to genetic sequencing for identifying?
An April Fools joke about a "discovery" of fully glycosylated molecules.
We are proud to announce the discovery of Glyco-Carbohydrates, a paradigm shift in glycobiology! Traditional glycoconjugates (proteins, lipids & RNA) max out at 50% carbohydrate.
Ours are 100% carbohydrate AND glycans, unlocking truly homogenous properties.
TBA April 1st, 2027! 🍬 #glycotime
I say this as a guy who put a TUI on a wasm app otherwise delivered in a browser this week because hey, why not?
Look, I haven’t seen the code, maybe Claude helped out implementing it, maybe it was hand crafted. It doesn’t matter to me. The fact that software is fun and quirky again makes me happy.
The three enzymes ALG3, ALG9, and ALG12 catalyze the synthesis of the branched mannose core in N-glycans in four distinct steps. Now, their structures reveal the molecular logic of oligomannose core assembly #glycotime
www.nature.com/articles/s41...
I kinda am attached to a system with preprint/open publishing, and then doing private/semi closed society reviews. If you wanna go crazy, societies can sell subscriptions to the reviews if they are high quality enough? Dangerous? Who knows what weird behaviour these incentives will drive..
I often find myself thinking that for a lot of papers the thinking is poor anyway, and I wish they spent more time making the data usable. While their conclusions might be right/wrong, finding ways to check the data itself is perhaps more valuable in the long run.
The problem is that it is not the job of the journals to maintain standards, their goal is to publish the highest “value” stuff. That there may be a disconnect between publisher value and scientific value is the issue.
Maybe, in the future there is a guild that reviews entirely by LLM? The point is that you want a mechanism to obtain diverse reviews of work (identified by URI) from different angles, which you can take/leave rather than this artificial bottleneck that serves only value signalling.
When I say “society”, I am thinking more on the level of a guild: a closed group with a known set of standards/biases. It then becomes an output for the guild to tell the world what it thinks about things. It is up to everyone else to decide, based upon track record, if the guild is useful.
This will be great if it comes to fruition - I have this dream that we can eventually ditch journals, and move to collections of communities (society review) for peer review. A system of closed review, that then opens up when finished will be amazing, and could be built with this.
To think, instead of wasting my life trying to understand normalisation of quantification between samples, I could have instead accelerated us towards dystopia or human apocalypse, while getting rich (natch).
I am always excited to try out whatever latest stuff his lab produces - tools that are not only built to solve tricky problems, but also just well built tools that work nicely.
Finally, a detailed review about C-mannosylation: the mechanism of C-mannosyltransferases and the influence of C-mannosylation on the function of several proteins. Highly recommended to everyone interested in this type of post-translational modification!
tinyurl.com/3ywjmfk3
#glycotime
Any chance you can tell me what the major class of N-glycan you find on these guys is?
Very cool, we had a discussion in the lab once about whether you could get sialic acids directly on proteins, and knew of these structurally specific examples. Fascinating to see it is more widespread. How about clusters of pseudaminic acids on a peptide, do they exist/can you detect them?
That’s super insightful, thanks for taking the time to reply!
And then if we want new drugs to our new molecular targets, how would those trials even work with so few or disparately assembled patient groups?
On a similar note, I wonder about precision medicine — we can get to great molecular detail on diagnosis, but do we even have 10% of the capacity when it comes to tailoring treatments? We’ll have to be really lucky if drug repurposing gets us over the line?
We run a “medical data understanding” course for med students, and I am trying to understand the positive revolutionary case for data in medicine. However, while reading up, the intervention gap really stuck out, largely unaddressed. The putative benefits in diagnosis/efficiency I can buy though.
I am heartened by the recent Waymo incident (with a child in a school area) that showed how good the safety of these cars can be. Hopefully there are also people working on these hard problems in science, rather than a race to the bottom that rejects the enlightenment, embracing the word of AI gods.
It perhaps helps to think of this period like the transition to self-driving cars. It is full of danger as people have false confidence in the technology, the failure of which has serious externalities in the impact on other people. But there is a golden path to get this right…
So far I haven’t seen the pushback against the other more “serious” whitepapers/ads from the people making scientific reasoning/experimental planning models. It is clear it is a rephrasing of existing literature, but no-one seems to mention that and it only encourages the generation of more BS.
Prototyping ideas, iterating fast and exploring (I mean seriously, who wants to try keeping up with the shifting sands of Rlang third party library methods) are great for vibe-coding - but you’re gonna need to do the hard validation at some point, so you better understand everything.
However, the danger is that people already treat biostatistics as a black box, and just want a p-value out. The LLM will not help people develop the mental model for the data, and then you end up introducing two problems to solve: is the method appropriate, and is the code correct?
I am a fan of judicious vibe-coding, but it requires training in a methodology to evaluate results. E.g., I pointed a wet lab PhD student to an LLM to code an Excel formula for decoding mass spec composition strings. I think this is OK where you have an orthogonal method to validate your results.
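For a flavour of what that kind of decoding looks like, here is a minimal Python sketch (not the student’s actual Excel formula). The composition format, residue names, and monoisotopic masses below are my assumptions for illustration - check them against your own reference before trusting any output.

```python
import re

# Assumed monoisotopic residue masses in Da (verify against your own tables).
RESIDUE_MASS = {
    "HexNAc": 203.0794,
    "Hex": 162.0528,
    "Fuc": 146.0579,
    "NeuAc": 291.0954,
}

def parse_composition(s):
    """Decode a composition string like 'HexNAc2Hex5' into {residue: count}."""
    counts = {}
    for residue, count in re.findall(r"([A-Za-z]+?)(\d+)", s):
        if residue not in RESIDUE_MASS:
            raise ValueError(f"unknown residue: {residue}")
        counts[residue] = counts.get(residue, 0) + int(count)
    return counts

def composition_mass(s, water=18.0106):
    """Neutral monoisotopic mass: sum of residue masses plus one water."""
    counts = parse_composition(s)
    return water + sum(RESIDUE_MASS[r] * n for r, n in counts.items())
```

The point of the post stands: a calculation like this is easy to validate orthogonally (e.g. against a known glycan mass), which is exactly what makes it a safe target for LLM-assisted coding.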