
Posts by Ellen Vitercik

The main conceptual contribution is a way to sidestep the Ω(log n) barrier introduced by standard probabilistic metric embeddings. Instead, Yingxi and Mingwei found a clever way to bound our algorithm’s cost directly on a deterministic embedding and compare it to OPT, which is bounded via majorization arguments.

2 months ago

We:
• Move 𝗯𝗲𝘆𝗼𝗻𝗱 the standard 𝗶.𝗶.𝗱. model: each request comes from its own distribution with a mild smoothness condition.
• Require 𝗻𝗼 𝗱𝗶𝘀𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻𝗮𝗹 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲: we use only one sample from each request distribution.
• Achieve an 𝗢(𝟭) competitive ratio for d-dimensional Euclidean metrics for d > 2.
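To make the input model concrete, here is a minimal sketch (my own illustration; the Gaussian perturbation and all parameter names are assumptions, not the paper's exact smoothness model): each request has its own distribution, and the algorithm receives exactly one independent sample per distribution, never the distributions themselves.

```python
import random

def make_instance(base_points, sigma, rng):
    """Illustrative single-sample input model (hypothetical details):
    request i is drawn from its own smooth distribution, modeled here
    as a Gaussian perturbation of a base point. The algorithm also
    gets one extra independent draw from each distribution."""
    requests = [tuple(x + rng.gauss(0.0, sigma) for x in p) for p in base_points]
    samples  = [tuple(x + rng.gauss(0.0, sigma) for x in p) for p in base_points]
    return requests, samples

rng = random.Random(0)
requests, samples = make_instance([(0.0, 0.0), (1.0, 2.0)], 0.1, rng)
```

The point of the sketch is only the information structure: one sample per request distribution, no distributional knowledge beyond that.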

2 months ago

We study a classic online metric matching problem in which n servers (e.g., rideshare drivers) are available in advance and n requests (e.g., riders) arrive one by one. Each request must be immediately matched to an available server, paying the distance between the two in an underlying metric.
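As a concrete baseline (a minimal sketch of the problem setting, not the paper's algorithm), the natural greedy strategy matches each arriving request to the nearest still-available server and pays the distance:

```python
import math

def greedy_online_matching(servers, requests):
    """Greedy baseline for online metric matching (illustrative only):
    match each arriving request to the nearest free server, paying the
    Euclidean distance between them."""
    available = list(range(len(servers)))
    matching, total_cost = [], 0.0
    for r in requests:
        # Pick the closest free server (ties broken by lowest index).
        j = min(available, key=lambda i: math.dist(servers[i], r))
        available.remove(j)
        matching.append(j)
        total_cost += math.dist(servers[j], r)
    return matching, total_cost

# Three servers on a line; requests arrive one at a time.
servers = [(0.0,), (1.0,), (5.0,)]
requests = [(0.9,), (0.2,), (4.0,)]
m, cost = greedy_online_matching(servers, requests)
# m == [1, 0, 2], cost ~ 1.3
```

Greedy is known to be far from optimal in the worst case, which is exactly why beyond-worst-case analyses like the smoothed, single-sample setting above are interesting.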

2 months ago
Smoothed Analysis of Online Metric Matching with a Single Sample: Beyond Metric Distortion
In the online metric matching problem, $n$ servers and $n$ requests lie in a metric space. Servers are available upfront, and requests arrive sequentially. An arriving request must be matched immediat...

arXiv: arxiv.org/abs/2510.20288

2 months ago
ITCS 2026 - Smoothed Analysis of Online Metric Matching with a Single Sample (YouTube video by Mingwei Yang)

This week at the Innovations in Theoretical Computer Science (ITCS) conference, Mingwei Yang is presenting our paper:
𝗦𝗺𝗼𝗼𝘁𝗵𝗲𝗱 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝗼𝗳 𝗢𝗻𝗹𝗶𝗻𝗲 𝗠𝗲𝘁𝗿𝗶𝗰 𝗠𝗮𝘁𝗰𝗵𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗮 𝗦𝗶𝗻𝗴𝗹𝗲 𝗦𝗮𝗺𝗽𝗹𝗲: 𝗕𝗲𝘆𝗼𝗻𝗱 𝗠𝗲𝘁𝗿𝗶𝗰 𝗗𝗶𝘀𝘁𝗼𝗿𝘁𝗶𝗼𝗻
by Yingxi Li, myself, and Mingwei Yang
See Mingwei's talk here: youtu.be/yEBPI9c7OE8?...

2 months ago
LLMs for Optimization Tutorial

Tutorial page (agenda + reading list): conlaw.github.io/llm_opt_tuto...

Thanks to Léonard Boussioux and Madeleine Udell for helping put the proposal together.

3 months ago

Optimization is central to planning, scheduling, and decision-making, but deploying solvers requires deep expertise. Our tutorial covers how LLMs can support the end-to-end optimization pipeline (model formulation, solver configuration, and model validation) and highlights open research directions.

3 months ago

@lawlessopt.bsky.social and I are excited to present our #AAAI2026 tutorial on “LLMs for Optimization: Modeling, Solving, and Validating with Generative AI.”

When: Tuesday, Jan 20, 2026, 8:30am–12:30pm SGT
Where: Garnet 216 (Singapore EXPO)

(Connor’s intro slides are shown here.)
CC @aaai.org

3 months ago

Topic 4: Theoretical Guarantees

- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods (Caramanis et al., NeurIPS’23)
- Approximation Algorithms for Combinatorial Optimization with Predictions (Antoniadis et al., ICLR’25)

4 months ago

Topic 3: Math Optimization

- OptiMUS-0.3: Using LLMs to Model and Solve Optimization Problems at Scale (AhmadiTeshnizi et al., arXiv’25)
- Contrastive Predict-and-Search for Mixed Integer Linear Programs (Huang et al., ICML’24)
- Differentiable Integer Linear Programming (Geng et al., ICLR’25)

4 months ago

Topic 2: Graph Neural Networks

- One Model, Any CSP: GNNs as Fast Global Search Heuristics for Constraint Satisfaction (Tönshoff et al., IJCAI’23)
- Dual Algorithmic Reasoning (Numeroso et al., ICLR’23)
- DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization (Sun & Yang, NeurIPS’23)

4 months ago

Topic 1: Transformers & LLMs

- What Learning Algorithm is In-Context Learning? (Akyürek et al., ICLR’23)
- Transformers as Statisticians (Bai et al., NeurIPS’23)
- We Need An Algorithmic Understanding of Generative AI (Eberle et al., ICML’25)
- Evolution of Heuristics (Liu et al., ICML’24)

4 months ago

I’m excited to share the materials from my Stanford seminar course, “AI for Algorithmic Reasoning and Optimization”: vitercik.github.io/ai4algs_25/. It covered formal algorithmic frameworks for analyzing LLM reasoning, GNNs for combinatorial/mathematical optimization, and theoretical guarantees.

4 months ago

On top of his research, my PhD students and I can attest that he’s a thoughtful, generous collaborator and mentor.

Please don’t hesitate to reach out if you’d like me to share my very strong recommendation letter.

(Photo credit: @cpaior.bsky.social.)

5 months ago
OptiMUS-0.3: Using Large Language Models to Model and Solve Optimization Problems at Scale
Optimization problems are pervasive in sectors from manufacturing and distribution to healthcare. However, most such problems are still solved heuristically by hand rather than optimally by state-of-t...

Connor has done exciting work on leveraging LLMs to model and solve large-scale optimization problems (arxiv.org/abs/2407.19633, arxiv.org/abs/2412.12038), developing mathematical optimization tools to make ML models more interpretable (arxiv.org/abs/2502.16380), among many other contributions.

5 months ago

Please keep an eye out for Connor Lawless (@lawlessopt.bsky.social) on the faculty job market! Connor is a Stanford Human-Centered AI Postdoc, co-hosted by myself and Madeleine Udell. His research combines ML, computational optimization, and HCI, with the goal of building human-centered AI systems.

5 months ago
Understanding Fixed Predictions via Confined Regions
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Existing approaches to audit fixed predictions do so on a pointwise basis, which requires ac...

Excited to be chatting about our new paper "Understanding Fixed Predictions via Confined Regions" (joint work with @berkustun.bsky.social, Lily Weng, and Madeleine Udell) at #ICML2025!

🕐 Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT
📍East Exhibition Hall A-B #E-1104
🔗 arxiv.org/abs/2502.16380

9 months ago

Our ✨spotlight paper✨ "Primal-Dual Neural Algorithmic Reasoning" is coming to #ICML2025!

We bring Neural Algorithmic Reasoning (NAR) to the NP-hard frontier 💥

🗓 Poster session: Tuesday 11:00–13:30
📍 East Exhibition Hall A-B, # E-3003
🔗 openreview.net/pdf?id=iBpkz...

🧵

9 months ago

Join us for a Wikipedia edit-a-thon at #ACMEC25!
When: July 8th, 8PM-10PM
Where: Stanford Econ Landau 139
Website: sites.google.com/view/econcs-...

Come hang out, grab snacks, and edit/create Wikipedia pages for EC topics.

Suggest topics/articles that need attention: docs.google.com/spreadsheets...

9 months ago

Congrats Kira!!

1 year ago
LLMs for Cold-Start Cutting Plane Separator Configuration
Mixed integer linear programming (MILP) solvers ship with a staggering number of parameters that are challenging to select a priori for all but expert optimization users, but can have an outsized impa...

Super excited about this new work with Yingxi Li, Anders Wikun, @ellen-v.bsky.social, and Madeleine Udell, forthcoming at CPAIOR 2025:

LLMs for Cold-Start Cutting Plane Separator Configuration

🔗: arxiv.org/abs/2412.12038

1 year ago

Pulled a shoulder muscle trying to stay cool on the golf course in front of my PhD students and postdoc 😅 🏌‍♀️

1 year ago

📢 Join us at #NeurIPS2024 for an in-person Learning Theory Alliance mentorship event!
📅 When: Thurs, Dec 12 | 7:30-9:30 PM PST
🔥 What: Fireside chat w/ Misha Belkin (UCSD) on Learning Theory Research in the Era of LLMs, + mentoring tables w/ amazing mentors.
Don’t miss it if you’re at NeurIPS!

1 year ago

Hi Emily, could you please add me? Thanks for making it!

1 year ago

Can you add me? 😀

1 year ago