@lawlessopt.bsky.social and I are excited to present our #AAAI2026 tutorial on "LLMs for Optimization: Modeling, Solving, and Validating with Generative AI."
When: Tuesday, Jan 20, 2026, 8:30am–12:30pm SGT
Where: Garnet 216 (Singapore EXPO)
(Connor's intro slides are shown here.)
CC @aaai.org
Posts by Connor Lawless
Thank you!!
It's been an absolute pleasure working with Ellen, Madeleine, and their amazing PhD students for the past year on making optimization more accessible with generative AI!
I am on the job market this year - check out my website (conlaw.github.io) for more details on what I've been up to.
In our final session for the day, we're focused on a hot topic: machine learning and mixed integer programming. Connor Lawless will start the session, telling us how to use LLMs for cold-start cutting plane separator configuration.
doi.org/10.1007/978-...
New workshop at @neuripsconf.bsky.social!
DiffCoALG bridges the gap between classic algorithms & differentiable learning.
Think: LLM reasoning, routing, SAT, MIP, neurally optimized.
Submit by Aug 22!
Site: sites.google.com/view/diffcoa...
#NeurIPS2025
Paper: EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of Optimization Formulations
(Joint work with @ellen-v.bsky.social @hzhai.bsky.social and @leqiliu.bsky.social )
Link: arxiv.org/abs/2502.14760
There's been a lot of work using LLMs to formulate MILPs, but how do we know that the formulations are correct?
Come chat with Haotian at poster W-515 to learn about our work on automatic equivalence checking for optimization models!
Our empirical results highlight that existing pointwise approaches for recourse can fail to catch potential fixed predictions, whereas our approach (provably) succeeds!
We model the problem as a mixed-integer quadratically constrained program that runs in seconds on real-world datasets.
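The paper's formulation is a mixed-integer quadratically constrained program; as a toy illustration of the underlying idea (certifying a prediction over a whole region rather than point by point), here is a brute-force sketch for a linear classifier over integer feature boxes. The model, feature names, and enumeration approach are all illustrative assumptions, not the paper's method.

```python
from itertools import product

def is_region_fixed(weights, bias, bounds, step=1):
    """Return True if no integer point in the box `bounds` (one (lo, hi)
    pair per feature) attains a positive score, i.e. the negative
    prediction is fixed over the entire region."""
    grids = [range(lo, hi + 1, step) for lo, hi in bounds]
    for x in product(*grids):
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        if score > 0:            # this individual has recourse
            return False
    return True                  # fixed prediction across the whole region

# Toy model (hypothetical): approve iff 2*income + 1*credit_history - 20 > 0
w, b = [2, 1], -20
print(is_region_fixed(w, b, [(0, 5), (0, 5)]))    # True: no one here can be approved
print(is_region_fixed(w, b, [(0, 12), (0, 5)]))   # False: higher income flips the outcome
```

Enumeration scales exponentially in the number of features, which is exactly why the paper encodes the same question as an MIQCP and hands it to a solver.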
This paradigm lets us spot fixed predictions before deploying a model, lets us audit public models for recourse (even if we don't have any available data!), and gives interpretable summaries of regions with fixed predictions to help with debugging.
In this paper, we introduce a new paradigm for algorithmic recourse that aims to certify recourse over an entire region of the feature space!
Existing approaches to algorithmic recourse focus on verifying recourse on an individual-by-individual basis, which can cause model developers to miss potential fixed predictions, requires a lot of data, and makes it difficult to debug recourse issues!
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Think credit applicants that can never get a loan approved, or young patients that can never get an organ transplant - no matter how sick they are!
Excited to be chatting about our new paper "Understanding Fixed Predictions via Confined Regions" (joint work with @berkustun.bsky.social, Lily Weng, and Madeleine Udell) at #ICML2025!
When: Wed 16 Jul, 4:30 p.m.–7 p.m. PDT
Where: East Exhibition Hall A-B #E-1104
Paper: arxiv.org/abs/2502.16380
Our spotlight paper "Primal-Dual Neural Algorithmic Reasoning" is coming to #ICML2025!
We bring Neural Algorithmic Reasoning (NAR) to the NP-hard frontier!
Poster session: Tuesday 11:00–13:30
Where: East Exhibition Hall A-B, #E-3003
Paper: openreview.net/pdf?id=iBpkz...
This is my first time at an HCI conference - come say hi if you're around!
In addition to a bunch of quantitative experiments, we ran a user study with a prototype system to inform design recommendations for future interactive optimization systems. Check out the paper for more details!
We built a hybrid LLM and CP system that uses LLMs to translate user requests in chat into operations on an underlying CP optimization model to schedule a new meeting. This gets the best of both worlds - the flexibility of LLMs with the decision making power of optimization!
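A minimal sketch of that hybrid loop: one component (standing in for the LLM) maps a chat message to an operation on the model, and a solver (here just brute force over hourly slots, standing in for CP) does the actual scheduling. All names and the keyword-matching "parser" are illustrative, not the paper's system.

```python
def parse_request(text):
    """Stand-in for the LLM call: map a chat message to a model operation."""
    if "morning" in text:
        return ("add_constraint", lambda h: h < 12)
    if "after lunch" in text:
        return ("add_constraint", lambda h: h >= 13)
    return ("noop", None)

class MeetingModel:
    """Toy scheduling model: busy hours plus user-stated constraints."""
    def __init__(self, busy_hours):
        self.busy = set(busy_hours)      # existing calendar
        self.constraints = []            # preferences gathered from chat

    def apply(self, op):
        kind, payload = op
        if kind == "add_constraint":
            self.constraints.append(payload)

    def solve(self):
        """Brute-force 'CP solve': first free hour satisfying all constraints."""
        for h in range(9, 18):
            if h not in self.busy and all(c(h) for c in self.constraints):
                return h
        return None                      # infeasible

model = MeetingModel(busy_hours=[9, 10, 13])
model.apply(parse_request("Can we do it after lunch?"))
print(model.solve())   # 14: first free hour at or after 13:00
```

The division of labor is the point: the language model only edits the model, and the solver retains responsibility for producing a feasible, optimal schedule.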
Building optimization models in practice involves a ton of back and forth between optimization and domain experts to understand a decision making problem. Can we enable domain experts to craft their own optimization models instead? We study this through the lens of scheduling.
Excited to be chatting about our ACM TIIS paper at IUI today:
"I Want it That Way": Enabling Interactive Decision Support via Large Language Models and Constraint Programming
Paper: arxiv.org/abs/2312.06908
In case you're wondering why this thread looks suspiciously like a bunch of screenshots from a presentation...
I'll be chatting about this project at the INFORMS Computing Society Conference in the debate room at 3. Come say hi!
More broadly, this is a first step towards a new paradigm where we can exploit natural language information to do better algorithm configuration and design! There's tons of exciting open problems towards this goal (reach out if you're interested!).
Surprisingly, we can get high-performing configurations from our framework, outperforming solver defaults on a number of real-world problems, without solving a single MILP!
We introduce an LLM-based framework with some algorithmic bells and whistles (ensembling, solver-specific context...) to capitalize on LLM strengths while addressing these challenges.
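One generic way to use an ensemble of LLM-suggested configurations (a plausible reading, not necessarily the paper's exact mechanism): score each candidate on a few validation instances with the shifted geometric mean of solve times, a standard MILP benchmarking metric, and keep the best. The candidate configs and solve times below are mocked stand-ins for LLM output and solver runs.

```python
import math

def shifted_geomean(times, shift=1.0):
    """Shifted geometric mean of solve times, a standard MILP metric."""
    return math.exp(sum(math.log(t + shift) for t in times) / len(times)) - shift

def pick_config(candidates, evaluate):
    """Ensembling step: score each suggested config and keep the best one."""
    return min(candidates, key=lambda cfg: shifted_geomean(evaluate(cfg)))

# Mocked stand-ins for LLM suggestions and per-instance solve times.
candidates = [{"cuts": 0}, {"cuts": 2}]
fake_times = {0: [3.0, 4.0, 5.0], 2: [1.0, 2.0, 6.0]}
best = pick_config(candidates, lambda cfg: fake_times[cfg["cuts"]])
print(best)   # {'cuts': 2}
```

The shift damps the influence of very easy instances, so a config isn't chosen just because it wins on trivial problems.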
Unfortunately, LLMs aren't a natural fit for configuration. Parameters are problem-specific, LLMs have stochastic outputs, and frankly, it's a tough problem!
Can we get better problem-specific solver configurations without the big computational price tag?
In this paper we show that we can, thanks to Large Language Models! Why LLMs? They can identify useful optimization structure and have a lot of built-in math programming knowledge!
MILP solvers ship with a ton of parameters that can have a massive impact on solver performance (over 70% for separator configuration alone!), but are notoriously difficult to set.
Existing approaches for algorithm configuration require solving a ton of MILPs, leading to days of compute.
Super excited about this new work with Yingxi Li, Anders Wikum, @ellen-v.bsky.social, and Madeleine Udell, forthcoming at CPAIOR 2025:
LLMs for Cold-Start Cutting Plane Separator Configuration
Link: arxiv.org/abs/2412.12038
For decades, the US government has painstakingly kept American science #1 globally, and every facet of American life has improved because of it. The internet? Flu shot? Ozempic? All grew out of federally-funded research. Now all that's being dismantled. 1/ www.technologyreview.com/2025/02/21/1...
"Not a step back"
Possibly --- even a step _forward_?
/s