Posts by Thiago Serra
Tomorrow I will be presenting at the EURO Online Seminar Series on Operational Research and Machine Learning.
This live webinar will be at 9:30 Iowa / 10:30 New York / 11:30 São Paulo / 15:30 Lisbon / 16:30 Central European time.
Likewise. Also, thanks for helping me finally meet @senthilv.bsky.social in person!
Had a lovely time back in Pittsburgh at the invitation of Vrishabh Patil.
I meant to take a selfie with Vrishabh, Macarena, and Sebastián, but I forgot! So here is a selfie of my first encounter with Scotty’s statue, which didn’t exist when I was a student at Carnegie Mellon.
The recording of my talk at University of Wisconsin-Madison is now online: m.youtube.com/watch?v=zL3n...
Thanks again @optimizer.bsky.social, @jefflinderoth.bsky.social, @lauraalbertphd.bsky.social, @madsjw.bsky.social, and all other Badgers!
13 PhD positions in applied #maths, #orms, engineering, or CS within the ALMOA consortium
-> study applications of European and international relevance in areas such as sustainable energy systems, green logistics, etc.
almoa.aau.at?page_id=53
This year’s event is being coordinated by Thiago Serra (thiago-serra@uiowa.edu), Sam Burer (samuel-burer@uiowa.edu), and Ann Campbell (ann-campbell@uiowa.edu). Email us if you have any questions.
Apply now!
Oops!
The workshop will be held in Iowa City from August 16 (opening reception) to August 18 (departure day). Participants will have opportunities to learn more about research & teaching in business analytics departments, practice networking with business analytics faculty & get expert job search advice.
Students can now apply online via the website:
tippie.uiowa.edu/news-events/...
All accepted applicants will attend for FREE, including air travel. The only restriction is that students must be at a U.S. institution. Applications are due April 30.
We are now accepting applications for the 5th FutureBAProf workshop at the University of Iowa for advanced PhD students and postdocs. This is the one and only workshop that demystifies academic careers in business analytics and helps participants prepare for the job market and careers in this field.
[…] seeing some familiar faces again as well as meeting new ones, and having some fun times with my son in and out of the conference!
Heading back home after an amazing INFORMS Optimization Society Conference, seeing Jiaxiao Fang give her first talk, talking about Madeline Colbert’s work, catching up with my former Bucknell students Changkun Guan and Tsugunobu Miyake, […]
This problem is formulated as a three-stage stochastic optimization model involving network design, routing, and allocation, which is tackled through decision-based scenario clustering.
2/2
Rosemarie Santa González talked at the INFORMS Optimization Society Conference about optimizing the cold food supply chain of an Indigenous nation in the Northern United States.
1/2
Jeff Decary talked at the INFORMS Optimization Society Conference about solving portfolio optimization problems with binary options and combinatorial options using logic-based Benders decomposition.
Federico Bobbio talked at the INFORMS Optimization Society Conference about an incentive-compatible mechanism without monetary transfers for coordinating how companies and scientists can share spectrum for satellite communications, which corresponds to solving a continuous knapsack problem.
As an alternative to using multiple alternates to reduce the dimension of the feasible set of the inverse problem, Ghobadi proposes working with a noisy alternate that is projected onto a face of the feasible set of the direct LP problem. She calls this optimization approach inverse learning.
2/2
Kimia Ghobadi talked about practical aspects of using inverse optimization at the INFORMS Optimization Society Conference.
Ghobadi started by discussing the non-uniqueness of inverse solutions due to, for example, using a single vertex as the alternate in linear programming problems.
1/2
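A minimal sketch of the non-uniqueness Ghobadi mentioned, on a toy instance I made up (not from the talk): when the observed alternate is a single vertex of an LP's feasible set, many different cost vectors rationalize it, so the inverse problem has no unique answer.

```python
import numpy as np
from scipy.optimize import linprog

# Toy illustration: over the unit square, the vertex x* = (1, 1) is
# optimal for min c^T x under *every* cost vector c <= 0, so the
# inverse problem "which c explains x*?" has infinitely many solutions.
x_star = np.array([1.0, 1.0])

for c in ([-1.0, -1.0], [-3.0, -0.5]):      # two different candidate costs
    res = linprog(c, bounds=[(0, 1), (0, 1)])
    assert np.allclose(res.x, x_star)        # both make x* optimal

print("both costs rationalize x* = (1, 1)")
```

Any regularization that picks one of these costs (e.g. the one closest to a prior estimate) is a modeling choice, which is part of why the single-vertex case is delicate.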
It is this kind of counterintuitive finding that makes inverse optimization so interesting! 😎
4/4
However, the parallel with direct optimization stops there: a perfect/ideal formulation of the feasible set of the forward problem does not imply inverse integrality, and there are cases in which we get inverse integrality even though the forward problem does not have that property.
3/4
Ley showed that we can get integrality for free if the constraint matrix is totally unimodular.
2/4
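A quick numerical illustration of the classical "integrality for free" phenomenon behind Ley's result, on a toy bipartite matching instance of my own (not from the talk): the vertex-edge incidence matrix of a bipartite graph is totally unimodular, so the LP relaxation already has an integral optimal vertex.

```python
import numpy as np
from scipy.optimize import linprog

# Bipartite graph: left nodes {0, 1}, right nodes {2, 3}.
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
A = np.zeros((4, len(edges)))          # one row per node, one column per edge
for j, (u, v) in enumerate(edges):
    A[u, j] = 1.0                      # incidence matrix: totally unimodular
    A[v, j] = 1.0

# Maximize matched weight (minimize the negative) with each node
# covered at most once and 0 <= x_e <= 1 -- the LP relaxation only.
w = np.array([1.0, 2.0, 2.0, 1.0])
res = linprog(-w, A_ub=A, b_ub=np.ones(4), bounds=[(0, 1)] * len(edges))

print(np.round(res.x, 6))              # integral without any branching
```

Here the LP optimum picks edges (0, 3) and (1, 2) with value 4, and every coordinate of the solution is 0 or 1 even though integrality was never imposed.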
Eva Ley talked at the INFORMS Optimization Society Conference about solving integer inverse optimization problems, meaning that we want to find integer objective coefficients that would make the alternate solution optimal.
1/4
Very interesting talk by Sam Garvin at the INFORMS Optimization Society Conference about solving inverse mixed-integer optimization problems by alternating between (1) generating local Chvátal cuts with the constraints that are active at the alternate solution & (2) generating the Chvátal closure.
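A minimal sketch of a single Chvátal-Gomory rounding step, the building block behind the cuts above (toy data of my own, not Garvin's algorithm): combine rows of Ax <= b (for x >= 0 integer) with nonnegative multipliers, then round the coefficients and right-hand side down; the result stays valid for all integer feasible points.

```python
import numpy as np

# One valid inequality for integer x >= 0: 2*x1 + x2 <= 3.
A = np.array([[2.0, 1.0]])
b = np.array([3.0])

lam = np.array([0.5])        # nonnegative multiplier
a_new = np.floor(lam @ A)    # floor of the combined row: [1., 0.]
b_new = np.floor(lam @ b)    # floor of the combined rhs: 1.0

print(a_new, b_new)          # the cut x1 <= 1
```

The derived cut x1 <= 1 chops off the fractional point (1.5, 0) that satisfies the original inequality, which is exactly the kind of tightening that iterating over active constraints exploits.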
Today at the INFORMS Optimization Society Conference there will be a memetic tribute to the late Chuck Norris at the Constraint Learning session, which starts at 4:30 in the Cabinet Room.
PS: I am not telling in which talk this is happening. You will have to stay for the whole session to see it. 🙈
These eigenvectors are used to construct a sparse weighted average over a subset of vertices that matches the global average on all low-frequency components. Such a weighted subset of vertices is called a graphical design.
The rest of the talk presented theoretical results on graphical design.
3/3
For a given graph, we can use the main eigenvectors of the random-walk transition matrix (the inverse of the diagonal degree matrix times the adjacency matrix) to extract a low-frequency approximation of the graph structure.
2/3
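A minimal numpy sketch of that construction, on an assumed example graph (a 6-cycle, not from the talk): build the random-walk transition matrix P = D^{-1} A and take the eigenvectors whose eigenvalues are closest to 1, which carry the smooth, low-frequency structure.

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                     # adjacency matrix of the cycle C_6
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

D_inv = np.diag(1.0 / A.sum(axis=1))   # inverse of the diagonal degree matrix
P = D_inv @ A                          # random-walk transition matrix

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)         # eigenvalues sorted from largest down
low_freq = vecs[:, order[:3]].real     # top "main" eigenvectors

print(np.round(vals.real[order], 4))
```

For the 6-cycle the eigenvalues are 1, 0.5, 0.5, -0.5, -0.5, -1; the leading eigenvector is constant, and the next ones vary slowly around the cycle, mirroring low frequencies on a line segment or sphere.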
Rekha Thomas talked about sampling to compute the average value of a function over a graph at the first keynote talk of the INFORMS Optimization Society Conference.
She started by exemplifying the same problem in simpler settings, such as line segments and spheres.
1/3
There is an interesting progression of training time improvements: from 4587s on CPU at first, using the Implicit Function Theorem for the backward gradient pushed it down to 640s, switching to a GPU pushed it to 158s, and exploiting GPU parallelism by increasing the batch size pushed it to 18.5s!
3/3