Funding agencies should impose application funnels that maintain ~25% success rates (very low rates, with the same pool of money, are only seemingly kind to the applicants and are detrimental to collective productivity)
Posts by Amir Mitchell
IMO, the real cost of low success rates is not the time wasted on reviews but the time researchers collectively waste developing grant proposals that will not be funded. 1/2
It's an important piece (provocative, yes, even triggering). It surfaces one of the biggest challenges research universities will soon face. The current incentive model in (US) academia rewards progress more than anything else, which only compounds the problem
Absolutely, what can be more fitting 😉
Our latest paper was just published. Will prepare a full thread very soon 🦠🧪🤖
🎯
A screenshot of our first conversation with an AI bot over slack
I'm developing an #AI bot named HAL to assist with our research. Here's our first conversation with it over Slack (didn't tell the lab before deploying it) 😂
I tip my hat to the authors for putting in all the effort for these follow-up experiments
Just concluded a 2-week coding spree with a student using VS+codex. The x5-10 gain in productivity blew my mind 🤯. BUT, I'm also sure that for inexperienced coders AI-coding makes the garbage-in-garbage-out pitfall inescapable.
Beautiful “full arc” story on abx response with impressive level of mechanistic detail (could be a class example for undergrad microbiology lesson)
Congrats on publishing this beautiful work! I'm still mind-blown that: (1) there's a 7th RND pump in E. coli and (2) that it's somewhat unclear what it actually pumps out IRL
It's a beautiful approach! But it's very disappointing that there is actually NO phenotypic characterization of the rearranged strains in the manuscript ... hoping it's an evolving manuscript that they'll keep updating
Totally agree. Fail early, fail often (and aim high) is something more scientists need to live by.
Huge credit to @carmenli.bsky.social for her persistence in chasing a moonlight project to its beautiful completion. Credit also goes to Ethan Chang, the rotation student who contributed to this work 9/9
Btw, note that “inactivation” can mean more than enzymatic degradation and includes any process that reduces effective drug activity over time (chemical modification, sequestration, etc) 8/9
We then tested if this association holds across our entire dataset. We used a functional assay for drug inactivation on all drugs and found that the association holds up. A long-lag inhibition phenotype is a strong indicator of drug inactivation 7/9
That pushed us to ask whether cellular defenses might impact curve profiles. We cloned different resistance cassettes and measured how they altered the potency-matched growth curves. This strongly hinted that active drug inactivation underlies a long-lag inhibition profile 6/9
Overlaying the known mechanisms of action on the barycentric landscape ruled out that this effect stems exclusively from how drugs target bacteria (since drugs with the same mechanism can land in very different regions of the landscape) 5/9
Clustering drugs by their impact on lag/rate/yield clearly revealed that they vary hugely in how they inhibit growth. In extreme cases, a drug affected only a single parameter 4/9
To compare drugs fairly, we didn’t use an arbitrary concentration. Instead, we interpolated each drug to a potency-matched condition (the concentration expected to produce the same overall level of inhibition) 3/9
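The potency-matching idea above can be sketched with a simple interpolation (hypothetical helper and parameter names, not code from the paper; assumes inhibition increases monotonically with concentration):

```python
import numpy as np

def potency_matched_conc(concs, inhibitions, target=0.5):
    """Interpolate the concentration expected to give a target inhibition level.

    concs: tested sub-inhibitory concentrations (ascending)
    inhibitions: measured overall growth inhibition (0-1) at each concentration
    Assumes inhibition rises monotonically with concentration.
    """
    return np.interp(target, inhibitions, concs)
```

For example, with inhibitions [0.1, 0.3, 0.5, 0.9] measured at concentrations [1, 2, 4, 8], a target inhibition of 0.5 maps back to a concentration of 4.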
So we assembled a new, carefully curated dataset with growth curves across almost forty drugs, measured across multiple sub-inhibitory concentrations. For each curve, we quantified its key intuitive features: lag, growth rate, and yield 2/9
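Extracting those three features from a growth curve can be done in a few lines; here is a minimal sketch (my own function and variable names, assuming OD readings on a time grid, not the paper's actual pipeline):

```python
import numpy as np

def growth_features(t, od):
    """Extract lag, max growth rate, and yield from a growth curve.

    t: time points (hours); od: optical density readings.
    Sketch only: rate = steepest slope of log(OD); lag = where the tangent
    at that point crosses the initial log(OD); yield = max OD reached.
    """
    log_od = np.log(np.clip(od, 1e-6, None))
    rates = np.gradient(log_od, t)          # per-point specific growth rate
    i = int(np.argmax(rates))               # steepest exponential phase
    mu_max = rates[i]
    lag = t[i] - (log_od[i] - log_od[0]) / mu_max
    return lag, mu_max, od.max()
```

A drug that only lengthens lag, only flattens mu_max, or only lowers yield would then show up as movement along a single axis of this three-parameter space.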
Our recent paper in npj Antimicrobials and Resistance is a great example of scientific serendipity: after staring at thousands of bacterial growth curves over many studies, we started wondering whether the curve shapes themselves carry mechanistic information 1/9 🦠🧪
www.nature.com/articles/s44...
This was a true collaboration between physicists (Andrew Mugler and @motasemelgamel.bsky.social), an immunologist (Michael Brehm from @umasschan.bsky.social ), and systems biologists (with @serkansayin.bsky.social and Brittany Rosener from my lab also at @umasschan.bsky.social ) (7/7)
Beyond providing (to our knowledge) the first dynamical model for tumor colonization, our study matters given the fierce debate on the tumor microbiome. These statistical “fingerprints” may help distinguish genuine colonizers from technical artifacts/contamination (possibly even by microscopy) (6/7)
The surprise: lineage sizes formed a scale-free power law that matches Zipf’s law (rank–frequency slope ~−1). This signature was robust across dozens of tumors and multiple collection days after intratumoral bacterial injection (5/7)
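The rank–frequency check behind that slope can be sketched as follows (hypothetical helper, assuming lineage sizes come from barcode read counts; a log-log least-squares fit, not necessarily the estimator used in the paper):

```python
import numpy as np

def zipf_slope(sizes):
    """Fit the rank-frequency slope on log-log axes.

    sizes: lineage sizes (e.g., barcode read counts).
    Returns the fitted slope; ~ -1 indicates Zipf's law.
    """
    ranked = np.sort(np.asarray(sizes, dtype=float))[::-1]  # largest first
    ranks = np.arange(1, len(ranked) + 1)
    slope, intercept = np.polyfit(np.log(ranks), np.log(ranked), 1)
    return slope
```

On a perfect Zipf distribution (size proportional to 1/rank) this returns exactly −1; real barcode data would scatter around that value.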
When we injected bacteria directly into the tumor (circumventing the bottleneck), we detected thousands of colonizing lineages, yet their sizes were still highly uneven (ruling out that the earliest arrivals simply dominate) (4/7)
Since we used genetically barcoded bacteria, we could also monitor growth of individual colonizers. We found that growth was extremely uneven with a handful of lineages becoming dominant (“winner-takes-most”) (3/7)
Main takeaways: Post systemic infection, there's a tight colonization bottleneck (per-cell colonization probability ~0.005%). Yet, once colonization happens, growth is remarkably fast (~50 min generation time) and bacterial load in tumors approaches saturation within a day (2/7)
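A quick back-of-the-envelope check of why a ~50 min generation time reaches saturation within a day (my own arithmetic sketch, assuming unchecked exponential growth from a small founder population):

```python
# Doublings in one day at a 50 min generation time:
doublings = 24 * 60 / 50           # = 28.8 doublings
# Resulting fold-expansion, assuming no growth limitation:
fold_expansion = 2 ** doublings    # ~5e8-fold, ample to saturate a tumor
```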
We just published in @molsystbiol.org with the Mugler lab (UPitt) on bacterial population dynamics during tumor colonization (mouse model). Our study was guided by a Luria–Delbrück-style idea: infer mechanism from statistics (1/7) 🧪🦠
doi.org/10.1038/s443...