
Posts by David Bindel

A math degree is also great!

23 hours ago
[Image: A bush covered with snow in the foreground. In the background, a cherry tree in bloom, with snow dusting the top. Ithaca campus, April 20, 2026.]

Spring in Ithaca. Mid 70s on Saturday, snow flurries yesterday and today.

1 day ago
[YouTube video: "Boots on the Ground" by Massive Attack - Topic]

Massive Attack. And Tom Waits.

I mean just holy shit y'all.
www.youtube.com/watch?v=kVTV...

5 days ago

Good to know the "say Chebyshev three times" proof technique is alive and well outside numerical analysis, too.

3 days ago

Dear GOP: Please send Vance out to campaign for your candidates in the midterms.

1 week ago

I never worked with Joe on anything technical. But I respected him enormously, and I miss him as a senior colleague and a fundamental part of the department. It's not quite the same without him.

1 week ago

He became chair one year after I joined Cornell. I remember the students' skit at a holiday party parodying Game of Thrones, which he happily led. And I remember his advice at various times while I was an assistant professor.

1 week ago

Joe worked in logic, and distributed systems, and game theory, and so many other areas. He spoke fluently with mathematicians, computer scientists, philosophers, and economists.

1 week ago

I first met Joe when I came to visit Cornell in 2008. I remember having a great conversation with him about what he was working on around reasoning about knowledge.

1 week ago

This afternoon, we had a memorial seminar on the scientific legacy of Joe Halpern.

1 week ago

Beyond the overestimation of AI capabilities, this type of projection reflects a view of science as a set of new widgets rather than as a way of advancing human understanding.

1 week ago

Yes, don't build the Torment Nexus. But I am also thinking about the 2008 article "The End of Theory" by Chris Anderson of Wired, which felt like the same type of sci-fi extrapolation that reflects social values.

1 week ago

Lots of science fiction writing is as much (or more) about some aspect of a cultural moment as it is about realistic projection. What's disturbing is when folks don't see the distinction.

1 week ago

But even if it's not the oldest thing that I've cited, it's a paper that is older than I am. And in the middle of dealing with ICML and the occasional reviewer who thinks anything older than 6 months must be ancient history ("not SOTA!"), I find this type of digging very satisfying.

2 weeks ago

I expect a reference to Linnainmaa will probably show up in an upcoming paper at some point, and this makes me happy. It's not the oldest reference that I'll have cited -- that honor might go to a paper by G.H. Bryan from 1890 (when I was working on gyroscopes, a fascinating story for another day).

2 weeks ago

The work of Linnainmaa (arising from his 1970 MS thesis) is the earliest that I was able to find that connected adjoint mode differentiation to rounding error analysis, though Linnainmaa points out that the idea of Taylor expansion in rounding errors does go back further (citing Henrici's textbook).

2 weeks ago

Chasing forward from there, I stumbled into a wonderful review article by Andreas Griewank, "Who Invented the Reverse Mode of Differentiation?" In the second paragraph of that paper, he comments that Linnainmaa "says the idea came to him on a sunny afternoon in a Copenhagen park in 1970."

2 weeks ago

And then... I found the 1976 paper by Seppo Linnainmaa, and it all clicked. I read several of Linnainmaa's other papers when I took a floating point course from Velvel Kahan -- this one was not in my bibliography, but I remembered it from that time.

2 weeks ago

So, I searched. There's an entertaining pattern in the literature of people who write "this is obvious if you think about it, and I didn't invent it, but I'm going to use it for X." There are phrases to this effect in the work of Iri, for example, and of Langlois.

2 weeks ago

Weirdly, though, the idea does not seem as well known in the numerical analysis community as I thought it was. So where did I learn it? I figured that it might be from Jim Demmel, Nick Higham, or Zhaojun Bai, but found no references (and Jim professed not knowing when I spoke with him).

2 weeks ago

This idea has shown up repeatedly in the literature in the past ten years or so in the context of PL-style tools for analyzing roundoff (Gopalakrishnan at Utah has nice recent work on this, for example), and also in systems like ADAPT for choosing intermediate precisions.

2 weeks ago

The idea is obvious if you know all the pieces. Adjoint variables measure sensitivity of an output to intermediates. Rounding errors perturb the computations of intermediates. Hence, we can estimate the effect of roundoff via the adjoint variables.
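
Here's what that looks like on a toy example (a sketch of my own; the function and the intermediate names are made up for illustration, and u is the IEEE double precision unit roundoff):

```python
# Model each floating point op as computing v*(1 + delta) with
# |delta| <= u (unit roundoff). To first order, the output error is
# sum_i vbar_i * v_i * delta_i, which gives the bound
# u * sum_i |vbar_i * v_i|.
u = 2.0 ** -53  # unit roundoff for IEEE double precision

x, y = 2.0, 3.0

# Forward pass for the made-up f(x, y) = (x*y + x)**2, recording
# every intermediate value.
v1 = x * y
v2 = v1 + x
f = v2 * v2

# Adjoints df/dv_i, as computed by a reverse-mode sweep.
fbar = 1.0
v2bar = fbar * 2.0 * v2
v1bar = v2bar

# Each rounded op contributes |vbar * v| * u to the first-order bound.
bound = u * (abs(v1bar * v1) + abs(v2bar * v2) + abs(fbar * f))
print(f"f = {f}, linearized roundoff bound = {bound:.2e}")
```

The gradient sweep already produces the adjoints; the error estimate is just one extra weighted sum over the rounded operations.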

2 weeks ago

Anyhow: One of the early uses of automatic differentiation, which did stick with me from graduate school, was that it's easy to do the linearized rounding error analysis that we teach in intro numerics classes using adjoint variables.

2 weeks ago

(One of my takeaways from graduate school was that you can differentiate almost anything with respect to almost anything. I've made good use of this over the years. But I usually did the work by hand -- there were tools, but they weren't that easy to use yet.)

2 weeks ago

As a graduate student, I was aware of the literature in automatic differentiation, though I didn't pay it much attention. Adjoint mode was part of the picture, but a lot of attention was paid to how to mix methods for best efficiency.

2 weeks ago

The idea of adjoint mode differentiation is essentially that one can compute dual variables that reflect the sensitivity of a final output with respect to every intermediate computation. The flow of information in the computation is "backward" from the usual flow from inputs to outputs.
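
A minimal sketch of that backward flow, on a made-up function (the names v1, v2, vbar, and so on are just for illustration):

```python
import math

# Toy function f(x, y) = (x*y + x)**2.
x, y = 2.0, 3.0

# Forward pass: compute and record every intermediate value.
v1 = x * y        # v1 = x*y
v2 = v1 + x       # v2 = x*y + x
f = v2 * v2       # f = v2^2

# Backward pass: adjoints vbar = df/dv, flowing from output to inputs.
fbar = 1.0                  # df/df
v2bar = fbar * 2.0 * v2     # f = v2^2     =>  df/dv2 = 2*v2
v1bar = v2bar               # v2 = v1 + x  =>  dv2/dv1 = 1
xbar = v2bar + v1bar * y    # x feeds both v2 and v1
ybar = v1bar * x            # y feeds v1 only

# Sanity check against the analytic gradient of (x*y + x)^2.
assert math.isclose(xbar, 2.0 * (x * y + x) * (y + 1.0))
assert math.isclose(ybar, 2.0 * (x * y + x) * x)
print(xbar, ybar)  # 64.0 32.0
```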

2 weeks ago

The idea of adjoint mode differentiation (aka back-propagation) goes back at least to some time in the 1960s, and probably before. The integration into all things ML has brought the idea back into the spotlight, but it is not a new idea.

2 weeks ago

I spent some of the spring break week catching up with various research collaborations and some of it dealing with a backlog of email and administrative tasks after two weeks of a lot of travel. In between, I did some of my favorite type of sleuthing, which is too good not to share.

2 weeks ago

So, yesterday we had two consecutive meetings scheduled over the lunch break, with food provided. Which made me wonder if that was the worst possible schedule.

Turns out, yes: it is provably impossible to schedule more meetings in that time slot! This is called the No-Three-Lunch Theorem.

2 weeks ago

Right now, I am happy to have anything in the news that makes me at all happy and excited.

2 weeks ago