I really wish there was one for tech-bio. If you don't want to run one yourself, would you mind sharing more details to help others do it? 🙏
Posts by Artur Szałata
Ladies in the background don't have ducks 😢
Oh wow, Anthropic accidentally leaked Claude Code and it’s been cloned and made public github.com/instructkr/c...
I love this. And right now atproto is the only place you could really scale up these changes
Warsaw University Library. Has an awesome rooftop garden that's perfect for a walk while taking a break. Good napping spots and food options nearby - what else can a student dream of?
For a recent lab meeting, I wrote up a grab bag of ways to think about your development as a researcher during a PhD: emerge-lab.github.io/papers/an-un...
Sharing in case folks find it useful or have feedback!
Great collaboration with Tianyu Liu, Wenxin Long, @fabiantheis.bsky.social, Lingzhou Xue, and Hongyu Zhao
We embed chemical and genetic perturbation readouts and use the model for gene-drug interaction prediction, among other tasks.
Paper alert 🚨 @ MLGenX @iclr-conf.bsky.social 2026!
PertOmni - a CLIP-style multimodal representation-learning framework for contrastive alignment of perturbation readouts and textual embeddings.
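For anyone who hasn't seen the CLIP setup before, here is a generic numpy sketch of contrastive alignment via a symmetric InfoNCE loss over paired embeddings. The dimensions, temperature, and function names are illustrative assumptions, not PertOmni's actual implementation:

```python
import numpy as np

def log_softmax(x):
    # numerically stable log-softmax over the last axis
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def clip_loss(z_readout, z_text, temperature=0.07):
    """Symmetric InfoNCE: matched (readout, text) pairs share a batch index,
    so the targets are the diagonal of the similarity matrix."""
    z_readout = z_readout / np.linalg.norm(z_readout, axis=1, keepdims=True)
    z_text = z_text / np.linalg.norm(z_text, axis=1, keepdims=True)
    logits = z_readout @ z_text.T / temperature   # cosine similarities / T
    idx = np.arange(len(logits))
    loss_rt = -log_softmax(logits)[idx, idx].mean()    # readout -> text
    loss_tr = -log_softmax(logits.T)[idx, idx].mean()  # text -> readout
    return (loss_rt + loss_tr) / 2
```

Minimizing this pulls each perturbation readout toward its own text embedding and pushes it away from the other pairs in the batch.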
On top of the paper, I had the pleasure of working with the incredible team at BRAID: Jan-Christian Hütter, Vladimir Ermakov, Zoe Piran, and Russell Littman on the Virtual Cell Challenge virtualcellchallenge.org/leaderboard, where our team ranked 13th on the most significant metric, PDS, and 32nd overall
This work was done during my internship at BRAID @genentech.bsky.social, together with Alexander Wu, Jan-Christian Hütter, Zoe Piran, Russell Littman, David Richmond, and @fabiantheis.bsky.social
PerturBERT, instead, is a transformer that learns how genes co-vary under perturbation by modeling perturbation signatures. Its learned gene embeddings match or outperform scGPT and other baselines, despite PerturBERT being trained on a 30x smaller dataset with 65x fewer genes.
Paper alert 🚨 @ MLGenX @iclr-conf.bsky.social 2026!
PerturBERT openreview.net/forum?id=ZsD... - Most transformers in single-cell omics (e.g. scGPT) learn gene co-expression through pre-training on masked gene expression levels.
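For context on what "pre-training on masked gene expression levels" means mechanically, here is a minimal numpy sketch of the input-corruption step; the masking fraction and sentinel value are illustrative assumptions, not the exact scheme used by scGPT or PerturBERT:

```python
import numpy as np

def mask_expression(expr, mask_frac=0.15, mask_value=-1.0, seed=0):
    """Hide a random fraction of gene expression values.

    Returns the corrupted cells-by-genes matrix and a boolean mask of the
    hidden positions; the model is trained to reconstruct the original
    values at exactly those positions, which forces it to pick up
    gene co-expression structure.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(expr.shape) < mask_frac
    corrupted = np.where(mask, mask_value, expr)
    return corrupted, mask
```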
Caris is launching a WGS+ML blood test for early cancer detection. Impressive sensitivity: 56% at stage 1 and 70% at stage 2 - if it holds up, it would blow competitors out of the water
Nano Banana 2 test. Humans are still safe 😌
Inspired by
"Constitution" is a bad name for a model preference guidelines doc. It suggests, among other things, public legitimacy where there is none.
Surprisingly, all of the models give me the right answer
Interesting how opus 4.6 extended and gemini 3 pro give me the correct answer here, but not ChatGPT 5.2 thinking, which I'd trust more with most of my work
Gemini Deep Think 3 is the world's most capable model by many measures, with huge progress on reasoning benchmarks and more.
Available right now via the Gemini App for Ultra subscribers and in the API soon : ) x.com/OfficialLoga...
How to fake a robotics result: a short blog post listing many sins which annoy me (many of which I am guilty of from time to time, to be fair)
open.substack.com/pub/itcanthi...
Haha @cpaxton.bsky.social is on fire:
open.substack.com/pub/itcanthi...
Some useful tips in there, even for non-roboticists looking to make their paper look artificially good
e-ink is a lifesaver for anyone with cybersickness
And for a lot of Gen X journalists and academics, the answer to (a) — assuming existing skills and plans — is legit "no." AI can be useful, for sure, but the paths it differentially advantages are not the paths where they have accumulated momentum, expertise, and social capital. +
Big thanks to Olga Novitskaia for spotting the issue!
Link to the broken history mapper: useast.ensembl.org/Homo_sapiens...
A quick story on how we matched genes across two datasets with different Ensembl versions.
1. There must be a tool out there. Ensembl ID History converter ofc!
2. Its results don't match Ensembl's own search outcomes due to a bug
3. Lesson: use this client instead github.com/Ensembl/ense... !
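The client in step 3 is the robust route. For the easy case, a minimal sketch of the usual first normalization step when joining gene tables across Ensembl releases: drop the release-specific version suffix before matching. The IDs below are illustrative, and this won't catch IDs retired or merged between releases, which is what the client handles:

```python
def strip_version(ensembl_id: str) -> str:
    # ENSG00000157764.13 -> ENSG00000157764 (stable ID without the version suffix)
    return ensembl_id.split(".", 1)[0]

def match_genes(ids_a, ids_b):
    """Stable IDs present in both datasets after dropping version suffixes."""
    shared = {strip_version(i) for i in ids_a} & {strip_version(i) for i in ids_b}
    return sorted(shared)
```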
Also awesome for finding posters at a conference!
Predicting cell state in previously unseen conditions has typically required retraining for each new biological context. Today, Arc is releasing Stack, a foundation model that learns to simulate cell state under novel conditions directly at inference time, no fine-tuning required.
This. You can add a link in a reply to the post and avoid the penalty