Old, but it's still amazing how many tokens you can save with this technique. medium.com/@parasmadan....
It's worth keeping these strategies in mind even as new concepts appear every day. I like combining them with #TextGrad for some crazy performance gains #AI #LLM #performance
Here's the non-paywall version of our #TextGrad paper rdcu.be/efRp4! 📜
I had a lot of fun discussing #textgrad on the @nature.com podcast with @climateadam.bsky.social! It starts at around 12 minutes here www.nature.com/articles/d41...
💡The key idea of #textgrad is to optimize by backpropagating textual gradients produced by an #LLM
Paper: www.nature.com/articles/s41...
Code: github.com/zou-group/te...
Amazing job by Mert Yuksekgonul leading this project w/ Fede Bianchi, Joseph Boen, Sheng Liu, Pan Lu, Carlos Guestrin, Zhi Huang
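To make the key idea concrete, here is a minimal toy sketch of the textual-gradient loop. The class and function names are hypothetical stand-ins, and the two stub functions replace the LLM calls that the real library (github.com/zou-group/textgrad) would make; this is an illustration of the pattern, not the actual TextGrad API.

```python
# Toy sketch of backpropagating textual gradients.
# critic_llm and optimizer_llm are hypothetical stand-ins for LLM calls.

class Variable:
    """A piece of text to optimize, plus the feedback ('gradient') it receives."""
    def __init__(self, value, role):
        self.value = value
        self.role = role
        self.grad = None

def critic_llm(output):
    # Stand-in for an LLM critique; the real system prompts an LLM here.
    return "Be more specific." if "vague" in output else "Looks good."

def optimizer_llm(value, feedback):
    # Stand-in for an LLM edit guided by the textual gradient.
    return value.replace("vague", "precise") if "specific" in feedback else value

def backward(var):
    # The textual 'gradient' is natural-language feedback on the variable.
    var.grad = critic_llm(var.value)

def step(var):
    # The optimizer rewrites the text in the direction of the feedback.
    var.value = optimizer_llm(var.value, var.grad)

prompt = Variable("Give a vague answer.", role="system prompt")
backward(prompt)     # compute the textual gradient
step(prompt)         # update the text using it
print(prompt.value)  # -> "Give a precise answer."
```

The analogy to numeric autodiff is that `backward` attaches feedback instead of a numeric gradient, and `step` applies it with an LLM instead of a learning-rate update; chaining this through several variables gives the "calculus of text" mentioned below.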
⚡️Really thrilled that #textgrad is published in @nature.com today!⚡️
We present a general method for genAI to self-improve via our new *calculus of text*.
We show how this optimizes agents🤖, molecules🧬, code🖥️, treatments💊, non-differentiable systems🤯 + more!