Posts by Felipe Vecchietti

Already home in Rio for ICLR!

Spending my first birthday at home after a decade living abroad.

After a year in which it has been very challenging to find motivation for research, this makes me value the simple things so much.

Ready to see you all in Rio!

2 days ago 2 0 0 0
GenBio ICML Workshop 2026: The 2026 workshop on Generative and Agentic AI for Biology at ICML 2026, Seoul, South Korea.

Consider submitting your work to the GenBio workshop at ICML 2026!

We have an amazing set of speakers whose work spans generative and agentic AI.

Deadline: May 1, 2026

genbio-workshop.github.io/2026/

5 days ago 4 1 0 0

I will be attending ICLR in Rio in 2 weeks! If you are interested in reinforcement learning, multi-agent systems, robotics, and generative AI for protein/antibody design, I would be happy to have a chat!

Also, I am from Rio, so if you need help planning your trip, feel free to reach out!

1 week ago 4 1 1 0

In the paper we define these methods as interpretable methods rather than explainable methods per se, because some are related to confidence metrics rather than explanations. But for our decision tree and geometric deep learning examples, I think they can also be classified as XAI.

2 weeks ago 1 0 0 0

Hi Conor, thanks for the interest in our work!

For proteins, we can validate the faithfulness of these interpretable metrics that we mention in the paper with experimental lab data. Still, generalization to new data is very tricky because it requires more lab experiments.

2 weeks ago 1 0 1 0

A special thanks to all my co-authors!

Minji Lee, Begench Hangeldiyev, Bryan Wijaya, Hyunkyu Jung, Dr. Hahnbeom Park, Prof. Tae-Kyun Kim, Prof. Mia Cha, and Prof. Ho Min Kim

1 month ago 0 0 0 0

In the paper, we emphasize the significance of interpreting and visualizing ML-based inference on structure-based protein representations to enhance knowledge discovery.

We hope the manuscript helps newcomers to the field and leads to novel interpretable approaches!

1 month ago 0 0 1 0

Another big source of inspiration was ColabFold by @sokrypton.org and all the advancements that were possible by the amazing community built around it!

1 month ago 0 0 1 0

This paper started from an essay that I wrote 4 years ago when I was starting to work with Protein + AI. At that moment, I was fascinated by the prediction heads of the AF2 architecture and how confidence metrics like pLDDT and pAE could be applied to filter designs.
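
To make that concrete, here is a minimal, hypothetical sketch of filtering designs by AF2-style confidence metrics. The threshold values and the design records are illustrative only, not taken from AF2 or the paper:

```python
# Hypothetical sketch: filter candidate designs by AF2-style confidence metrics.
# pLDDT (0-100, higher is better) measures per-residue confidence;
# PAE (in angstroms, lower is better) measures predicted aligned error.
designs = [
    {"name": "design_A", "plddt": 92.1, "pae": 4.8},
    {"name": "design_B", "plddt": 71.5, "pae": 12.3},
    {"name": "design_C", "plddt": 88.0, "pae": 6.1},
]

PLDDT_MIN = 80.0  # illustrative cutoff: keep confidently predicted structures
PAE_MAX = 8.0     # illustrative cutoff: keep designs with low predicted error

passed = [d for d in designs if d["plddt"] >= PLDDT_MIN and d["pae"] <= PAE_MAX]
print([d["name"] for d in passed])
```

In practice these cutoffs are chosen per task; the point is only that the prediction-head outputs give a cheap first-pass filter before any lab work.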

1 month ago 0 0 1 0
Interpretable Machine Learning for Protein Science: Structure, Function, and Interactions | ACM Computing Surveys. Recent advancements in machine learning (ML) are transforming the field of structural biology. For example, AlphaFold, a groundbreaking neural network for protein structure prediction, has been widely adopted by researchers. The availability of easy-to-...

I am happy to share that our paper “Interpretable Machine Learning for Protein Science: Structure, Function, and Interactions” is out at ACM Computing Surveys!

dl.acm.org/doi/10.1145/...

1 month ago 3 1 2 0
Softly, effectively, in the age of AI: On dolphins and disobedience

I wrote about what it means to keep writing when more language feels like the last thing we need--as a computer scientist, but also as a writer.

2 months ago 39 6 3 2

This reddit post made me laugh

2 months ago 19 4 0 0

It is a great book! My favorite that I read last year.

2 months ago 1 0 0 0

This was fun work and a remarkable effort across the computational and wet-lab teams!

Strategies for in-silico filtering and ranking of antibody designs have been under-discussed in the literature, e.g. in most technical reports on antibody design that I've seen. Let's talk about these here! [1/n]
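
As one concrete, purely illustrative example of such a strategy, here is a sketch that ranks candidates by a weighted composite of confidence and structure-quality metrics. The metric names, weights, and values are hypothetical, not from any specific report:

```python
# Hypothetical sketch of one simple in-silico ranking strategy:
# combine several per-design metrics into a single score and sort on it.
candidates = [
    {"id": "ab_001", "plddt": 90.0, "interface_pae": 5.0, "clashes": 0},
    {"id": "ab_002", "plddt": 84.0, "interface_pae": 9.0, "clashes": 2},
    {"id": "ab_003", "plddt": 93.0, "interface_pae": 4.0, "clashes": 0},
]

def score(d):
    # Higher pLDDT is better; lower interface PAE and fewer clashes are better.
    # The weights (2.0 and 5.0) are arbitrary placeholders.
    return d["plddt"] - 2.0 * d["interface_pae"] - 5.0 * d["clashes"]

ranked = sorted(candidates, key=score, reverse=True)
print([d["id"] for d in ranked])
```

Real pipelines use more metrics and task-specific weightings; the open question in the thread is exactly which metrics and combinations are worth trusting.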

3 months ago 18 8 1 0
Table 1: Typology of traps, what goes wrong if not avoided, and how the traps can be avoided. Note that all traps in a sense constitute category errors (Ryle & Tanney, 2009) and the success-to-truth inference (Guest & Martin, 2023) is an important driver in most, if not all, of the traps.

We present a typology of traps to avoid:

1. Believing that AI systems are minds

2. Believing that AI systems are theories

3. Believing that cognitive science can be automated

Learn to recognise and avoid these traps. Failing to avoid them leads to numerous problems.

3/🧵

3 months ago 235 70 9 2

So 2025 turned out to be a big year for de novo antibody design! Here are thoughts and predictions on the state of de novo antibody design heading into 2026 🧵

3 months ago 43 16 1 1

I don't know the exact protocol, but they will likely guide you on how to do it. I remember my old institution helped guide updates to PIs' pages and other PR matters.

3 months ago 0 0 0 0

I think you should contact your institution's PR office. They should have guidelines for that.

3 months ago 0 0 1 0
RLJ | RLC Call for Papers

Hi RL Enthusiasts!

RLC is coming to Montreal, Quebec, in the summer: Aug 16–19, 2026!

Call for Papers is up now:
Abstract: Mar 1 (AOE)
Submission: Mar 5 (AOE)

Excited to see what you’ve been up to - Submit your best work!
rl-conference.cc/callforpaper...

Please share widely!

3 months ago 62 29 1 9

As a bonus, here's a video of ProteinEBM folding up the fast-folder NTL9, rendered in stunning 2D by py2Dmol from @sokrypton.org! We hope models like ProteinEBM can serve as a step toward solving the "real" protein folding problem.

4 months ago 36 5 1 0

The US social media vetting for visas will be devastating for scientific and journalistic conferences, fellowships etc. No global organisation can seriously consider holding an international conference in the US while this policy exists.

4 months ago 62 20 2 1

How it started (quote post)
How it's going (linked article)

www.theguardian.com/technology/2...

4 months ago 306 93 4 6
Comment by Tom Dietterich on a LinkedIn post reading:

"You can't "test-in quality" in engineering; you can't "review-in quality" in research. We need incentives for people to do better research. Our system today assumes that 75% of submitted papers are low quality, and it is probably right (I'll bet it is higher). If this were a manufacturing organization, an 75% defect rate would result in bankruptcy. 

Imagine a world in which you could have an AI system check the correctness/quality of your paper. If your paper passed that bar, then it could be published (say, on arXiv). Subsequent human review could assess its importance to the field. 

In such a system, authors would be incentivized to satisfy the AI system. This will lead to searching for exploits in the AI system. A possible solution is to select the AI evaluator at random from a large pool and limit the number of permitted submissions. I imagine our colleagues in mechanism design can improve on this idea."

Original:
https://www.linkedin.com/feed/update/urn:li:activity:7381685800549257216/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7381685800549257216%2C7382628060044599296)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7382628060044599296%2Curn%3Ali%3Aactivity%3A7381685800549257216)

Here's a rule of thumb: If "AI" seems like a good solution, you are probably both misjudging what the "AI" can do and misframing the problem.

>>

6 months ago 495 105 20 16

🎤 Announcing the 3rd workshop on Reinforcement Learning in Mannheim 🎤

We have an amazing lineup of speakers: @Mathieugeist, @gio_ramponi, Theresa Eimer, @SarahKeren_, @araffin2, @c_rothkopf, and @AdrienBolland

⏰ Friday 6th February
📍University of Mannheim

4 months ago 22 10 1 1

Fun! TL;DR:
AI researchers are pissed bc some AI research papers submitted to an AI conference by AI researcher colleagues are AI-written & many are AI-reviewed, as found by an AI company's AI model, described in a paper for said AI conference. Said paper was also AI-reviewed (but deffo not AI-written)

4 months ago 6 2 1 0
Page 10-11 of the linked PDF

On AI’s ‘mediocrity trap’: experiments indicate that while AI helps the less skilled make something passable, the highly skilled don’t use it to produce something better than they could have; they produce something OK but lose motivation to make it great. www.jin-li.org/uploads/1/1/...

4 months ago 189 69 2 6

everyone here is extremely confident they’ve never used AI but an enormous fraction of their existence is mediated, observed, or recommended by AI models over the last decade+. it’s just not advertised as a Product Name, so it doesn’t exist

5 months ago 135 18 6 1

This is a very good clip of Guillermo del Toro on the value of art. The value of human-made art is, in part, based on what the artist experienced that made them want to make the art. That is what we pay for. That is what AI does not have.

5 months ago 69 15 3 6
Search Jobs | Microsoft Careers

Are you a PhD student interested in ML and biology or health? Come do an internship with me, @avapamini.bsky.social, Alex Lu, @lcrawford.bsky.social, or Kristen Severson at MSRNE!

Applications are due Dec 1: make sure you include a research statement!

jobs.careers.microsoft.com/global/en/jo...

6 months ago 18 9 0 2