
Posts by Leonardo Cotta

Learning Curves: Asymptotic Values and Rate of Convergence

I'm trying to write about the history of scaling laws, and my go-to reference in the ML community is [1]. If anyone has good suggestions from asymptotic statistics, I'm curious to read them and help make the connections.
[1] proceedings.neurips.cc/paper_files/...
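For concreteness, a minimal sketch (my own toy, not from [1]) of the two quantities in the title: fitting the three-parameter learning curve L(n) = L∞ + c·n^(−α), where L∞ is the asymptotic value and α the rate of convergence.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical example: recover the asymptote and rate exponent from
# synthetic error measurements at increasing dataset sizes.
def learning_curve(n, l_inf, c, alpha):
    return l_inf + c * n ** (-alpha)

rng = np.random.default_rng(0)
n = np.logspace(2, 6, 20)                      # dataset sizes
true = dict(l_inf=0.05, c=3.0, alpha=0.5)      # assumed ground truth
y = learning_curve(n, **true) + rng.normal(0, 1e-3, n.size)

params, _ = curve_fit(learning_curve, n, y, p0=[0.1, 1.0, 0.3])
l_inf_hat, c_hat, alpha_hat = params
print(f"asymptote~{l_inf_hat:.3f}, rate exponent~{alpha_hat:.2f}")
```

With clean power-law data this recovers both quantities; the interesting asymptotic-stats question is when real learning curves admit such a parametric form at all.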

1 week ago 7 1 0 0

I've finally deleted my Twitter account, but as much as I love Bluesky's idea, it doesn't seem to be a good replacement. In terms of keeping up with science, LinkedIn and Reddit have, unbelievably, proven more effective for me. Is there anything I'm missing here? A feed to follow, a better way to use it?

1 week ago 1 0 0 0

I can only imagine how crazy it must be to be a PhD student submitting to ML conferences now. The process has always been noisy, but at this point it's selecting for either obfuscation or shallow ideas. You either intimidate the reviewer, or you write a blog post in LaTeX.

5 months ago 2 0 0 0

We're excited to present our latest article in Nature Machine Intelligence: Boosting the predictive power of protein representations with a corpus of text annotations.

Link: www.nature.com/articles/s42...
[1/4]

7 months ago 12 5 1 0

I’d add data/task understanding as a separate mid layer. Most papers I know break in the transition from high to mid.

8 months ago 1 0 1 0
Milton Nascimento & esperanza spalding: Tiny Desk (Home) Concert (YouTube video by NPR Music)

the GOAT of Brazilian music w/ the best of (current) American music
www.youtube.com/watch?v=jFUh...

8 months ago 2 0 0 0

This is why I personally love TMLR. If it's correct and well-written, let's publish. The interesting papers are the ones the community actively recognizes in its own work, e.g. by citing them, extending them, turning them into products, etc. (a process independent of publication).

8 months ago 1 0 0 0

I agree with most of your thread, but classifying "uninteresting work" is quite hard nowadays. Papers have become a hype-seeking game, where out of the 10 hyped papers of the month, at most 1 survives further investigation of its results. And even if we think we're immune to this, what counts as interesting?

8 months ago 2 0 1 0
Scaling Laws Are Unreliable for Downstream Tasks: A Reality Check Downstream scaling laws aim to predict task performance at larger scales from pretraining losses at smaller scales. Whether this prediction should be possible is unclear: some works demonstrate that t...

I loved this new preprint by Lourie/Hu/ @kyunghyuncho.bsky.social . If you really want to convince someone you're training a foundation model, or proposing better methodology, loss scaling laws aren't enough. They have to be tied to downstream performance; it shouldn't be vibes.
arxiv.org/abs/2507.00885
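A toy illustration of why loss-only extrapolation can mislead (my construction, not the paper's setup): if a downstream metric is a sharp function of a smoothly scaling pretraining loss, it looks flat at small scale and "emerges" later, so the small-scale downstream trend extrapolates badly even when the loss fit is perfect.

```python
import numpy as np

# Pretraining loss follows a clean power law in compute...
compute = np.logspace(0, 6, 13)
loss = 2.0 + 5.0 * compute ** (-0.3)

# ...but a downstream metric that thresholds sharply on loss (hypothetical
# link function) stays near zero until the loss crosses ~2.5.
def downstream_acc(l):
    return 1.0 / (1.0 + np.exp(12.0 * (l - 2.5)))

acc = downstream_acc(loss)
small, large = acc[:6], acc[6:]
print("small-scale accs:", np.round(small, 3))
print("large-scale accs:", np.round(large, 3))
```

Fitting a scaling law to the small-scale accuracies here would predict "stuck near zero forever", while the loss curve alone says nothing about where the jump lands.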

8 months ago 5 1 0 0

We're at ICML, drop us a line if you're excited about this direction.

📄 Paper: arxiv.org/abs/2507.02083
💻 Code: github.com/h4duan/SciGym
🌍 Website: h4duan.github.io/scigym-bench...
🗂️ Dataset: huggingface.co/datasets/h4d...

9 months ago 1 0 0 0

I'm very excited about our new work: SciGym. How can we scale scientific agents' evaluation?
TL;DR: Systems biologists have spent decades encoding biochemical networks (metabolic pathways, gene regulation, etc.) into machine-runnable systems. We can use these as "dry labs" to test AI agents!
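A hypothetical mini "dry lab" in this spirit (my sketch, not SciGym's actual API): a machine-runnable ODE model of a tiny gene-expression network that an agent can perturb and observe, just like a wet-lab experiment.

```python
import numpy as np
from scipy.integrate import odeint

# Toy two-species network: mRNA is transcribed and decays; protein is
# translated from mRNA and decays. All rate constants are made up.
def network(state, t, k_tx, k_deg):
    mrna, protein = state
    dm = k_tx - 0.5 * mrna             # transcription minus mRNA decay
    dp = 2.0 * mrna - k_deg * protein  # translation minus protein decay
    return [dm, dp]

def run_experiment(k_tx=1.0, k_deg=0.1, t_end=100.0):
    t = np.linspace(0, t_end, 200)
    traj = odeint(network, [0.0, 0.0], t, args=(k_tx, k_deg))
    return traj[-1]                    # steady-state readout the agent sees

baseline = run_experiment()
knockdown = run_experiment(k_tx=0.2)   # simulated transcription knockdown
print("baseline protein:", round(baseline[1], 1))
print("knockdown protein:", round(knockdown[1], 1))
```

The agent only sees experiment outputs, never the equations, so the task of recovering the network structure is a faithful miniature of the real discovery loop.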

9 months ago 2 0 1 0

Also, I see ITCS more as a venue for "out of the box" or "bold" ideas, or even new areas; I don't see the papers having simplicity as a goal. But that's just my experience.

9 months ago 0 0 1 0

Mhm, I agree with the idealistic part; I've certainly seen the same. But I know quite a few papers that are aligned with the call; tbh, this happens in any venue. I think the message and the openness to this kind of paper are important, though.

9 months ago 0 0 2 0

I wish we had an ML equivalent of SOSA (Symposium On Simplicity in Algorithms). "simpler algorithms manifest a better understanding of the problem at hand; they are more likely to be implemented and trusted by practitioners; they are more easily taught" www.siam.org/conferences-....

9 months ago 3 0 1 0

This is not my area, but if you think of it in terms of a randomized algorithm (BPP, PP), the hard part is usually the generation, at least for the algorithms we tend to design, e.g. the Schwartz-Zippel lemma. (Although in theory you can have the "hard part" in verification for any problem.)
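A sketch of that generation-vs-verification split via the Schwartz-Zippel lemma (assuming the polynomials are given as black boxes): to test p ≡ q with degree ≤ d, generating a random point from a set S is trivial, and one evaluation verifies it, with error probability ≤ d/|S| per trial if p ≠ q.

```python
import random

# Randomized polynomial identity testing: evaluate both sides at random
# points modulo a large prime; any disagreement is a definite witness.
def identical(p, q, trials=20, field_size=10**9 + 7):
    for _ in range(trials):
        x = random.randrange(field_size)
        if p(x) % field_size != q(x) % field_size:
            return False   # witness found: definitely different
    return True            # identical with high probability

# (x+1)^2 vs x^2 + 2x + 1 are the same polynomial in disguise.
same = identical(lambda x: (x + 1) ** 2, lambda x: x * x + 2 * x + 1)
diff = identical(lambda x: (x + 1) ** 2, lambda x: x * x + 1)
print(same, diff)
```

Note the asymmetry: the randomness does all the work at generation time, and verification is a single cheap equality check.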

10 months ago 2 0 1 0

It takes one terrible paper for knowledgeable people to stop reading all your papers; this risk is often not accounted for.

10 months ago 1 0 1 0

Maybe check the Cat S22; it gives you the basics, e.g. WhatsApp + GPS and nothing else.

10 months ago 2 0 0 0
Damage and Misrepair Signatures: Compact Representations of Pan-cancer Mutational Processes Mutational signatures of single-base substitutions (SBSs) characterize somatic mutation processes which contribute to cancer development and progression. However, current mutational signatures do not ...

Please check out our new approach to modeling somatic mutation signatures.

DAMUTA has independent Damage and Misrepair signatures whose activities are more interpretable and more predictive of DNA repair defects than COSMIC SBS signatures 🧬🖥️🧪

www.biorxiv.org/content/10.1...
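For background (this is the classic setup that COSMIC-style signatures come from, not DAMUTA's model): signature analysis factorizes a samples × mutation-types count matrix V ≈ W·H with non-negative matrix factorization, where rows of H are signatures and W holds per-sample activities.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in data: 3 made-up signatures over the 96 SBS trinucleotide
# contexts, mixed with random activities and Poisson sampling noise.
rng = np.random.default_rng(0)
true_sigs = rng.dirichlet(np.ones(96), size=3)     # 3 signatures x 96 types
activities = rng.gamma(2.0, 50.0, size=(40, 3))    # 40 samples x 3 activities
counts = rng.poisson(activities @ true_sigs)

model = NMF(n_components=3, init="nndsvda", max_iter=500)
W = model.fit_transform(counts)                    # recovered activities
H = model.components_                              # recovered signatures
print("reconstruction error:", round(model.reconstruction_err_, 1))
```

The interpretability question the paper targets lives in H: whether the learned factors correspond to separable biological processes (damage vs. misrepair) rather than statistical mixtures of both.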

10 months ago 41 17 0 0

It just sounds like "see you three times" ;) It's like how some people named "Sinho" are often assumed to be Portuguese/Brazilian, but from what I heard it's a variation of Singh (not sure though).

10 months ago 1 0 1 0

One simple way to reason about this: treatment assignment guarantees you have the right P(T|X). Self-selection changes P(X), a different quantity. Looking at your IPW estimator you can see that changing P(X) will bias regardless of P(T|X).
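A toy simulation of that argument (my construction, hypothetical numbers): IPW with the correct P(T|X) is unbiased for the ATE under whatever P(X) the sample came from, so self-selection that shifts P(X) shifts the estimate no matter how good the propensities are.

```python
import numpy as np

rng = np.random.default_rng(0)

def ipw_ate(p_x1, n=200_000):
    x = rng.binomial(1, p_x1, n)            # covariate distribution P(X)
    e = np.where(x == 1, 0.7, 0.3)          # known, correct P(T=1|X)
    t = rng.binomial(1, e)
    effect = np.where(x == 1, 3.0, 1.0)     # treatment effect depends on X
    y = effect * t + rng.normal(0, 0.1, n)
    # Horvitz-Thompson IPW estimator of the ATE
    return np.mean(t * y / e - (1 - t) * y / (1 - e))

ate_pop = ipw_ate(0.5)  # target population: ATE = 0.5*1 + 0.5*3 = 2.0
ate_sel = ipw_ate(0.9)  # self-selected sample: ATE = 0.1*1 + 0.9*3 = 2.8
print("P(X=1)=0.5:", round(ate_pop, 2))
print("P(X=1)=0.9:", round(ate_sel, 2))
```

Both runs use the same correct propensities; only P(X) changed, and the estimand moved with it.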

11 months ago 3 2 0 0

I haven't been up to date with the model collapse literature, but it's crazy how many papers consider the case where people only reuse data from the model distribution. This never happens in practice: there's always some human curation or conditioning that yields some type of real-world, new data.
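A minimal illustration of the gap (my toy, not from that literature): refitting a Gaussian on its own samples collapses its variance over rounds, while mixing in even a small fraction of fresh real data each round keeps it anchored.

```python
import numpy as np

rng = np.random.default_rng(0)
real = lambda n: rng.normal(0.0, 1.0, n)   # the "real world" distribution

def iterate(fresh_frac, rounds=2000, n=30):
    mu, sigma = 0.0, 1.0
    for _ in range(rounds):
        synth = rng.normal(mu, sigma, n)              # model-generated data
        k = int(fresh_frac * n)
        data = np.concatenate([synth[k:], real(k)])   # optionally mix real data
        mu, sigma = data.mean(), data.std()           # refit the "model"
    return sigma

collapsed = iterate(0.0)   # pure self-training
anchored = iterate(0.2)    # 20% fresh real data per round
print("pure self-training sigma:", collapsed)
print("20% fresh data sigma:   ", anchored)
```

The pure loop is the regime most collapse papers study; the anchored loop is closer to what actually happens once curation or conditioning injects new signal.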

1 year ago 2 0 0 0

This general idea of using an external world/causal model given by a human and using the LM only for inference is really cool; it's also the insight behind our work in NATURAL. Do you guys think it's possible to write more general software for the interface DAG -> LLM_inference -> estimate?

1 year ago 2 0 1 0
Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks While machine learning on graphs has demonstrated promise in drug design and molecular property prediction, significant benchmarking challenges hinder its further progress and relevance. Current bench...

This is my favourite "graph paper" of the last year or two. We also need to start including non-NN baselines, e.g. fingerprints + CatBoost, if the goal is real-world impact and not getting published asap. I also recommend following @wpwalters.bsky.social's blog.
arxiv.org/abs/2502.14546

1 year ago 6 0 0 0

Unbelievable news.

Pancreatic is one of the deadliest cancers.

New paper shows personalized mRNA vaccines can induce durable T cells that attack pancreatic cancer, with 75% of patients cancer-free at three years, far, far better than standard of care.

www.nature.com/articles/s41...

1 year ago 7251 1919 139 315

Oh gotcha. I think it’s just super cheesy to quote feynman at this point haha but it’s a good philosophy to embrace

1 year ago 0 0 0 0

In what contexts do you think it’s misused? Just curious, I’m a big fan and might be overusing it 😅

1 year ago 0 0 1 0
The Ultra-Scale Playbook - a Hugging Face Space by nanotron The ultimate guide to training LLM on large GPU Clusters

After 6+ months in the making and over a year of GPU compute, we're excited to release the "Ultra-Scale Playbook": hf.co/spaces/nanot...

A book to learn all about 5D parallelism, ZeRO, CUDA kernels, and how/why to overlap compute & comms, with theory, motivation, interactive plots, and 4000+ experiments!

1 year ago 179 52 2 5

If you're feeling uninspired and getting NaNs everywhere, you can give it your codebase, describe the problem, and ask for suggestions to try or debug. I think of it more as a debugging assistant than a code generator.

1 year ago 2 0 0 0

I've always hated the "reasoning models" for code assistance since I think the most useful application of LLMs is really writing the boring helper functions and letting us focus on the hard work. However, I found o3 to be particularly useful when debugging ML code, e.g., 1/2

1 year ago 1 0 1 0
Reconstruction for Powerful Graph Representations

If you remove one node at a time, you get reconstruction GNNs 🙃 proceedings.neurips.cc/paper/2021/h...
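A toy sketch of the reconstruction idea (my illustration, not the paper's GNN): represent a graph by the multiset of "cards" obtained by deleting one node at a time, embedding each card with a cheap graph invariant in place of a learned GNN encoder.

```python
from collections import defaultdict

def card_invariant(nodes, edges):
    # Cheap stand-in for a card embedding: (degree sequence, #components).
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), 0
    for u in nodes:                       # count components by DFS
        if u in seen:
            continue
        comps += 1
        stack = [u]
        while stack:
            w = stack.pop()
            if w in seen:
                continue
            seen.add(w)
            stack.extend(adj[w])
    degs = tuple(sorted(len(adj[u]) for u in nodes))
    return (degs, comps)

def reconstruction_repr(n, edges):
    cards = []
    for v in range(n):                    # delete one node at a time
        sub = [(a, b) for a, b in edges if v not in (a, b)]
        cards.append(card_invariant([u for u in range(n) if u != v], sub))
    return tuple(sorted(cards))           # multiset of card embeddings

# C6 and two disjoint triangles share a degree sequence, but their decks
# differ: deleting a node leaves P5 (connected) vs K3 + K2 (disconnected).
c6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
two_k3 = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(reconstruction_repr(6, c6) != reconstruction_repr(6, two_k3))
```

Swapping the hand-coded invariant for a GNN over each card recovers the flavor of the paper's construction.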

1 year ago 1 0 0 0