
Posts by Anja Surina

Many thanks to all the amazing collaborators who contributed to this project: Amin Mansouri, @lars-quaedvlieg.bsky.social, Amal Seddas, Maryna Viazovska, Emmanuel Abbe, @caglarai.bsky.social

12/12

11 months ago 4 2 1 0

In conclusion, EvoTune is a step toward self‑improving LLMs for algorithm design.

Full paper: arxiv.org/pdf/2504.05108
Website and code: claire-labo.github.io/EvoTune/

11/12


☑️ Robust gains: EvoTune also outperforms FunSearch on test sets probing generalization at budgets of up to 22K sampled algorithms; more details in the paper. 10/12


🌐 Maintaining diversity: RL fine-tuning risks narrowing output diversity; however, diversity is crucial for evolutionary search. We incentivize it through forward KL regularization, an island-based program database, and more. 9/12
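To see why the forward direction of the KL helps here, a toy numeric sketch (illustrative only; `forward_kl` and the example distributions are mine, not the paper's):

```python
import math

def forward_kl(p_ref, p_theta, eps=1e-12):
    """Forward KL divergence KL(p_ref || p_theta) between a reference
    model's token distribution and the fine-tuned policy's.

    The forward direction is 'mass-covering': p_theta is penalized
    heavily wherever it drops tokens the reference still uses, which
    discourages the policy from collapsing onto a few outputs.
    """
    return sum(p * math.log((p + eps) / (q + eps)) for p, q in zip(p_ref, p_theta))

# A policy that collapses onto one token pays a much larger forward-KL
# penalty than one that keeps the reference's spread:
ref       = [0.25, 0.25, 0.25, 0.25]   # uniform reference distribution
collapsed = [0.97, 0.01, 0.01, 0.01]   # diversity lost
spread    = [0.40, 0.20, 0.20, 0.20]   # diversity mostly kept
kl_collapsed = forward_kl(ref, collapsed)   # ≈ 2.08
kl_spread    = forward_kl(ref, spread)      # ≈ 0.05
```

The reverse KL, by contrast, is mode-seeking and would tolerate the collapsed policy far more readily.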


📊 Initially, the distributions of program scores of EvoTune and the baseline are similar. However, as the search progresses, EvoTune generates a larger number of high-quality solutions, as reflected by the larger increase in the low-optimality-gap region. 8/12


We evaluate EvoTune on three combinatorial optimization problems:
📦 Bin packing
🗺️ Traveling salesman
🧩 Flatpack

📈 Key result: Across all problems and three LLMs, EvoTune discovers higher-reward programs faster than the FunSearch (no fine-tuning) baseline. 7/12
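For concreteness, the kind of program such a search evolves for online bin packing is a bin-scoring heuristic. A best-fit-style sketch and a reward function for it (`priority` and `evaluate` are hypothetical illustrations, not programs discovered by EvoTune):

```python
def priority(item, capacities):
    """Candidate heuristic: score each open bin for the incoming item.
    Best-fit style — prefer the bin the item fills most tightly;
    infeasible bins get -inf."""
    return [-(cap - item) if cap >= item else float("-inf") for cap in capacities]

def evaluate(items, bin_capacity, heuristic):
    """Reward for a candidate heuristic: pack items online using its
    scores and return the negated bin count (fewer bins = higher reward)."""
    bins = []  # remaining capacity of each open bin
    for item in items:
        scores = heuristic(item, bins)
        feasible = [i for i, s in enumerate(scores) if s != float("-inf")]
        if feasible:
            best = max(feasible, key=lambda i: scores[i])
            bins[best] -= item
        else:
            bins.append(bin_capacity - item)  # open a new bin
    return -len(bins)

score = evaluate([5, 7, 5, 3], bin_capacity=10, heuristic=priority)  # -2 (two bins)
```

The search mutates the body of `priority` while `evaluate` stays fixed, so the reward signal is cheap and automatic.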


⚙️ EvoTune loop:

🧬 Evolve: Sample high-performing algorithms → prompt the LLM to propose better candidates → evaluate and store them.
🧠 Learn: Fine-tune the LLM based on performance rewards from discovered algorithms.

↪️ Repeat.

6/12
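The loop above as a runnable toy sketch, where "programs" are just numbers and `ToyLLM`, `reward`, and `evotune_loop` are illustrative stand-ins, not the authors' implementation:

```python
import random

class ToyLLM:
    """Stand-in for the fine-tunable LLM: proposals are Gaussian samples,
    and 'fine-tuning' shifts the sampling mean toward what scored well."""
    def __init__(self):
        self.mean = 0.0

    def propose(self):
        return random.gauss(self.mean, 1.0)

    def finetune(self, batch):
        best, _ = max(batch, key=lambda pair: pair[1])
        self.mean += 0.5 * (best - self.mean)  # move toward the best candidate

def reward(candidate):
    # Toy objective: the best "program" is the number 10.
    return -abs(candidate - 10.0)

def evotune_loop(llm, rounds=50, samples_per_round=8):
    database = []  # (candidate, reward) pairs discovered so far
    for _ in range(rounds):
        # Evolve: sample candidates, evaluate, store.
        batch = [(c, reward(c)) for c in (llm.propose() for _ in range(samples_per_round))]
        database.extend(batch)
        # Learn: fine-tune the proposer on its own discoveries.
        llm.finetune(batch)
    return max(r for _, r in database)

random.seed(0)
best_reward = evotune_loop(ToyLLM())  # approaches 0 as the proposer improves
```

The point of the toy: because the generator itself is updated between rounds, later rounds sample from a distribution already biased toward high-reward regions, which is what a frozen generator cannot do.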


In this way, the LLM learns directly from the signal generated by evolutionary search: how to identify promising regions of the search space and promote successful strategies. As a result, the method accelerates the discovery of high-performing algorithms. 5/12


✨ Our innovation: We propose to augment LLM-based evolutionary search by continuously refining the search operator – the LLM – through RL fine-tuning. 4/12
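One hedged sketch of how search rewards can become a fine-tuning signal — a reward-weighted log-likelihood with a mean baseline (REINFORCE-style); the paper's exact RL objective may differ, and the names here are illustrative:

```python
def rl_finetune_loss(log_probs, rewards):
    """REINFORCE-style objective for fine-tuning the search operator.

    log_probs[i]: policy log-probability of sampled program i
    rewards[i]:   that program's evaluation score
    """
    baseline = sum(rewards) / len(rewards)       # variance-reducing baseline
    advantages = [r - baseline for r in rewards]
    # Minimizing this pushes probability mass toward above-average programs.
    return -sum(lp * a for lp, a in zip(log_probs, advantages)) / len(rewards)

loss = rl_finetune_loss(log_probs=[-1.0, -2.0], rewards=[1.0, 0.0])  # -0.25
```

Subtracting the batch-mean baseline means only programs that beat their peers pull probability mass toward themselves.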


However, current methods treat the LLM as a static generator, overlooking its potential to more directly learn from the signal generated by the search process. 3/12


🔎 The challenge: Crafting algorithms traditionally demands extensive human expertise and time. LLM-guided evolutionary search methods like FunSearch have shown impressive results in problems ranging from mathematical discovery to robotics and competitive programming. 2/12


Excited to share our latest work on EvoTune, a novel method integrating LLM-guided evolutionary search and reinforcement learning to accelerate the discovery of algorithms! 1/12🧵
