

Post image

Not sure what you're looking for exactly, but the prompt, the LLM, and luck of the draw (regenerating from the same prompt) matter a lot. I took a swing; is this more along the lines of what you wanted/were expecting?

1 week ago

Don't we all 🥲
Assuming ddMS2 id and MS1 quant, you could trim down the topN for the quant runs and use a poor man's AcquireX inclusion for higher-topN id runs. MS-DIAL or DIY MBR id mapping. +2-3x scan cycle rate if you're willing to take the added complexity and the MBR assumption.
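Rough arithmetic behind the "+2-3x scan cycle rate" claim; a minimal sketch assuming one MS1 survey scan plus topN MS2 scans per cycle. The millisecond timings below are illustrative assumptions, not vendor specs.

```python
# Back-of-the-envelope DDA duty cycle: one MS1 survey scan + topN MS2 scans.
# The 256 ms / 64 ms scan times below are illustrative assumptions only.

def cycle_time_s(ms1_ms=256.0, ms2_ms=64.0, top_n=20):
    """Approximate duty-cycle length in seconds for one DDA cycle."""
    return (ms1_ms + top_n * ms2_ms) / 1000.0

id_run = cycle_time_s(top_n=20)     # high-topN identification run
quant_run = cycle_time_s(top_n=5)   # trimmed topN for the MS1-quant runs

speedup = id_run / quant_run        # lands in the quoted +2-3x range here
```

With these assumed timings, trimming topN from 20 to 5 gives roughly a 2.7x faster cycle.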

1 week ago

Amazing. Mind if I ask: omics or targeted? Flow rate and gradient length?

1 week ago
ALT: a cartoon character named ralph from the simpsons is taking a pill

There's no going back

2 weeks ago

I have a couple of unsolicited tips here: 1) use rasterized rather than vectorized graphics, because LLM image recognition is better on pixels than vectors, e.g. PNGs = rasterized PDFs > vector PDFs; 2) spectrograms are nice for viewing rt-mass ladders, i.e. an mz-vs-rt scatterplot with dots colored by intensity.
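Tip 2 can be sketched like this; a hedged example assuming matplotlib and purely synthetic data (all values and figure settings are placeholders):

```python
# Sketch of an rt-mass ladder: mz vs RT scatterplot, dots colored by
# intensity, saved as a rasterized PNG per tip 1. All data is synthetic.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, renders straight to pixels
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
rt = rng.uniform(0, 60, 500)        # retention times, min (synthetic)
mz = rng.uniform(300, 1200, 500)    # precursor m/z (synthetic)
inten = rng.lognormal(10, 2, 500)   # intensities (synthetic)

fig, ax = plt.subplots(figsize=(6, 4), dpi=150)
sc = ax.scatter(rt, mz, c=np.log10(inten), s=4, cmap="viridis")
fig.colorbar(sc, ax=ax, label="log10 intensity")
ax.set_xlabel("RT (min)")
ax.set_ylabel("m/z")
fig.savefig("rt_mz_ladder.png")     # PNG = pixels, friendlier for LLM vision
```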

2 weeks ago

just sprinkle in some grammatic, spellling erros for good measure !?!

3 weeks ago

1. Raising my children in a chaotic world
2. Impending global war
3. Isomeric analytes

1 month ago
Andrej Karpathy on X: "I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one file version of ~630 lines of code, then: - the human iterates on the https://t.co/3tyOq2P9c6"

Similarly, fed Claude Code Thermo .meth files, and it seems it was able to understand/modify them. Would be amazing to see it set up a script to automatically update .meth files based on results from acquired data in a loop, e.g. gradient/parameter optimization across runs. Analogous: x.com/i/status/203...

1 month ago

Potential hopium here, but I imagine that identifying the proper normalizing signatures could correct most of the regular artifactual effects, e.g. oxidation, hydrolysis, and hematocrit as major protein/PTM quant principal components. Lots of untapped bioinfo potential.
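One hedged sketch of what "correcting via principal components" could look like, assuming the artifactual signatures really do dominate the top PCs of a samples x proteins quant matrix; `remove_top_pcs` and the choice of k are illustrative, not an established pipeline:

```python
# Illustrative sketch: treat dominant artifactual signatures (oxidation,
# hemolysis/hematocrit, etc.) as the top principal components of a
# samples x features quant matrix and project them out. Choosing k and
# validating the removed components are artifactual (not biology) is
# the hard, untapped-bioinfo part.
import numpy as np

def remove_top_pcs(X, k=1):
    """Return X with its first k principal components projected out.

    X: samples x features matrix (e.g. log protein intensities)."""
    mean = X.mean(axis=0)
    Xc = X - mean                            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    S_kept = S.copy()
    S_kept[:k] = 0.0                         # zero out the top-k components
    return U @ np.diag(S_kept) @ Vt + mean   # reconstruct without them
```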

1 month ago

IMHO (dried) whole blood is underrated. As the review mentioned, it has tons of bioinformation, but there are also oft-neglected noncatalytic adduct PTMs such as the mycotoxin aflatoxin-albumin and possibly methylmercury-hemoglobin.

1 month ago

You had it work directly with the method files? I suppose it should be able to, but I've been paranoid about accidental bricking.

1 month ago
Post image

Not to be an Anthropic fanboy, but Claude is in the substantial lead with respect to recognizing nonsense: github.com/petergpt/bul...

1 month ago

Oops, I misnamed that part: actually the outer mz's are trimmed (mz's outside the cutoff percentiles are excluded), not winsorized (mz's outside the percentiles are replaced with the cutoff value).

1 month ago

Basically it takes all precursor rt-mz's, deduplicates and winsorizes them, and then breaks them into evenly sized quantiles based on the number of desired windows in mz (variable) and rt (segments). The boundaries are optimized for z=2&3. Windows are staggered by default, which helps with MS2 deconvolution.
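A minimal sketch of the quantile step just described, in Python for illustration (the actual webapp is a Shiny app); function and parameter names are assumptions, and RT segmentation, z=2&3 optimization, and staggering are omitted:

```python
# Sketch: place variable-width DIA isolation windows at evenly populated
# quantiles of the precursor m/z distribution, after trimming (per the
# later correction: outer percentiles excluded, not winsorized).
import numpy as np

def variable_windows(mz, n_windows=8, trim_pct=1.0):
    """Return n_windows+1 boundaries so each window holds ~equal counts."""
    mz = np.asarray(mz, dtype=float)
    lo, hi = np.percentile(mz, [trim_pct, 100.0 - trim_pct])
    kept = mz[(mz >= lo) & (mz <= hi)]        # trim outer percentiles
    qs = np.linspace(0, 100, n_windows + 1)   # evenly populated quantiles
    return np.percentile(kept, qs)
```

Dense m/z regions get narrow windows and sparse regions get wide ones, which is the point of variable windows at a fixed scan rate.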

1 month ago
DIA Window Optimizer

Want better DIA #proteomics coverage at the same scan rate with almost zero effort? Get the variable and/or RT-segmented window mz's by uploading your precursor rt-mz table to this free & easy webapp! unitsaq.shinyapps.io/diavariablew...

Idea to webapp in 2 days with #ClaudeCode!

1 month ago
Post image

Haha, true re: the vendor poster, but it seems others have reproduced decent results at least down to ~5 min: www.nature.com/articles/s41... 2 min runs seem like pushing the boundary unnecessarily unless you had to run thousands of samples...

2 months ago
Post image

A 4 min gradient (300 SPD) doesn't look too bad either: www.biorxiv.org/content/10.1...

2 months ago
Post image

Thermo poster: not 2 min, but 8 min showed no apparent decline in quant accuracy vs 24 min.

If 15 min gives about 6 sec peaks, a 2 min gradient gives ~1 sec-wide peaks, lol. Can't imagine that's good, but maybe the power of averaging multiple peptides per protein partially rescues the peptide-level degradation.
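The peak-width arithmetic above as a one-liner, assuming peak width scales roughly linearly with gradient length (a rule of thumb, not exact chromatography):

```python
# Rough linear-scaling assumption: peak width proportional to gradient length.

def scaled_peak_width_s(grad_min, ref_grad_min=15.0, ref_width_s=6.0):
    """Peak width (s) for a gradient, scaled from a 15 min / 6 s reference."""
    return ref_width_s * grad_min / ref_grad_min

w2 = scaled_peak_width_s(2.0)   # a 2 min gradient -> sub-second peaks
```

Under this assumption a 2 min gradient gives ~0.8 s peaks, consistent with the "~1 sec wide" eyeball estimate.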

2 months ago

Yes, but it's agentic engineering now

2 months ago

Yes, though my feeling is that PDF vectorized graphics are processed as code, so it's probably better to render as PNG for image recognition.

2 months ago

The unlock with CC is the iterative, self-regulating feedback loop. It does much better when it can run and check its own performance and self-adjust, rather than flying blind as with pure LLM code generation.

2 months ago

CC runs R itself from the CLI:
terminal
cd to project folder
claude
/plan mode
Please write an R script to extract and plot MS1 EICs for the peptides in the .csv from the .raw files. Use /path/to/R and msconvert. Please test in small chunks and review the results/plots before progressing and scaling.
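For illustration, the core EIC step that prompt asks for might look like this (shown here in Python over already-parsed MS1 scans; the actual request is for an R script, and the real version would first run msconvert to turn .raw into .mzML and parse that):

```python
# Toy EIC extraction: for a target m/z, sum intensities within a ppm
# tolerance in each MS1 scan. Input layout is a stand-in for parsed mzML.

def extract_eic(spectra, target_mz, ppm=10.0):
    """spectra: iterable of (rt, mz_list, intensity_list) MS1 scans.
    Returns [(rt, summed_intensity_within_tolerance), ...]."""
    tol = target_mz * ppm / 1e6            # absolute tolerance from ppm
    eic = []
    for rt, mzs, intens in spectra:
        s = sum(i for m, i in zip(mzs, intens) if abs(m - target_mz) <= tol)
        eic.append((rt, s))
    return eic
```

The plotting and the loop over peptides from the .csv would sit on top of this.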

2 months ago

Still misses, so it needs careful guidance, but less so if the scope is narrow and the task objective is well-defined.

2 months ago

Make sure to /plan well (the agentic analogue of prompting), have it output relevant metrics and PNG visualizations for feedback, and keep track of % context fill. Eats tokens like a firehose, so a subscription is probably necessary to minimize costs.

2 months ago

Imo the workflow for CC is a bit different: e.g., initialize it in a project folder with the starting inputs, and it writes, runs, and troubleshoots the analysis itself iteratively. Once it's done to your satisfaction, you can take the handoff code. More like an assistant developer. No copy-pasting needed.

2 months ago

A mini success story: cropping RT (e.g. 1-60 vs 0-60 min) for a DIA mzML breaks DIA-NN. It was a re-indexing issue, which is simple but would have taken me several manual iterations. With CC in its own local write-run-check iteration loop, it was solved overnight from a one-shot prompt. Needs tokens though.
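A toy illustration of the kind of re-indexing fix described (the dict layout is a stand-in for real mzML records, not the actual schema): after dropping scans outside the RT window, spectrum indices must be renumbered contiguously from zero or downstream tools can choke.

```python
# Sketch: crop scans by RT, then renumber spectrum indices contiguously.

def crop_and_reindex(spectra, rt_min, rt_max):
    """spectra: list of dicts with 'rt' and 'index' keys (stand-in layout)."""
    kept = [s for s in spectra if rt_min <= s["rt"] <= rt_max]
    for new_idx, s in enumerate(kept):
        s["index"] = new_idx          # contiguous 0-based indices after crop
    return kept
```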

2 months ago

Still learning/playing, but it seems like at least with CC there's a step increase in "intelligence". Importantly, having CC write and run the code locally, review the outputs, and self-correct removes a ton of friction. It still needs careful high-level planning and design direction though.

3 months ago

Have you tested Antigravity vs Claude Code? From my use in the last week, Claude Code with Opus 4.5 is way more capable than in the past: able to one-shot simple data-processing ideas consistently, but complex ideas are hit or miss, especially when the context fills up.

3 months ago

I think just adjusting the specificity of the language to match the scope of what they are describing would suffice, e.g. iTDP refers to a range of approaches, whereas 2DGE-TDP or an analogous term refers to specific approaches. If this was the original intention, then the wiki language does not convey it.

4 months ago

A passerby's perspective: it seems iTDP is used specifically to refer to 2DGE, but semantically it could refer to all combinations of techniques, e.g. 1/2/n-D GE/LC + native/denatured/digested MS. Appreciate the increased article depth, but the visceral reaction is likely because of the specific use of a nonspecific new term.

4 months ago