
Posts by Noah Stephens-Davidowitz

AlphaGo and Artificial Intelligence: On Friday, March 11th, the world's best Go player, Lee Sedol, lost the third game in a row of a five-game match to Google DeepMind's AlphaGo program. Far from being just games, AlphaGo…

10 years ago today: my blog post on AlphaGo and Artificial Intelligence, which I wrote in grad school: hdbennett.wordpress.com/2016/03/18/a....

1 month ago 4 3 0 0

March is pi month!

1 month ago 2 0 0 0

I don't think this is relevant cryptographically. The DEL25 result that they're comparing to is far from the state of the art. (DEL25 were mostly interested in a different parameter regime.) And the claimed running time of this new attack is way higher than the claimed security of Dilithium.

1 month ago 3 0 1 1

Me, writing a proof of a "the following are equivalent" theorem:

"(2) implies (3) is trivial"

No, that's not supercilious enough

"Only a nincompoop would fail to see that (2) implies (3)"

No, that one would never get past the Editor

"(2) implies (3) is, of course, trivial"

Perfect

1 month ago 14 2 1 0
A picture of Joe Halpern smiling in a green shirt in front of a blue background.


Today arXiv remembers our colleague Joe Halpern, who was instrumental in founding arXiv's CS section.

Joe's passions ranged far & wide and we're lucky that arXiv was one of them. Joe, thank you for giving so much to arXiv - you are missed.

blog.arxiv.org/2026/02/27/remembering-joe-halpern

1 month ago 53 14 2 2

I wrote a short expository note about a beautiful result of Carmosino, Gao, Impagliazzo, Mihajlin, Paturi, and Schneider for certifying NO instances of 3-SUM in roughly n^{3/2} time, beating the fastest known deterministic algorithm, which runs in roughly n^2 time: home.cs.colorado.edu/~hbennett/no.... 1/

2 months ago 21 3 2 0
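For readers who want the baseline the note is beating: the certification result above is for NO instances, but the folklore quadratic-time *decision* algorithm (sort plus two pointers) is the natural reference point. A minimal sketch, my own illustration rather than anything from the note:

```python
def three_sum_exists(nums):
    """Decide 3-SUM: are there indices i < j < k with nums[i] + nums[j] + nums[k] == 0?

    Folklore O(n^2)-time algorithm: sort, then for each smallest element
    scan the rest of the array with two pointers.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1  # sum too small: move the lower pointer up
            else:
                hi -= 1  # sum too big: move the upper pointer down
    return False
```

Note that this only *decides* the problem; it doesn't produce a short certificate for NO instances, which is the point of the n^{3/2}-time result.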

I think what I'm proposing just achieves the best of all worlds here? You can write f(n) <= g(n) + O(h(n)) and maintain the flexibility of big-O notation while achieving the clarity of an inequality rather than a "one-way equality."

3 months ago 0 0 0 0

But Knuth prefers big-O because << is cumbersome for writing things like f(n) = g(n) + O(h(n)). One would have to write f(n) - g(n) << h(n), which gets annoying in more complicated settings, e.g., if one wants to write f(n) = g(n) + h_1(n) = g(n) + O(h_2(n)).

3 months ago 0 0 1 0

He then says that the notation f(n) << g(n) (or similar) has the huge benefit that it is clear and doesn't yield the weird "one-way equalities" that one gets from f(n) = O(g(n)).

3 months ago 0 0 1 0

Thanks for the link!

It looks like Knuth says that they're best thought of as sets, but we should still write equalities and not set inclusion because that's what people are used to.

3 months ago 1 0 1 0

Much butter, and also much better :).

3 months ago 0 0 0 0

I feel like that's technically incorrect, but not in a way that is likely to be misunderstood. So, it's much butter than saying "if m is O(n^2)" to mean "if m >= C n^2", which is both incorrect and likely to be misunderstood.

3 months ago 0 0 2 0

I know that this is not an original idea. It seems that many authors use this notation already, at least in some contexts. I have not seen anyone advocate for its widespread adoption, so I thought I would.

3 months ago 1 0 0 0

A more subtle example is something like f(n) = n^2 + O(n). This sometimes means f(n) <= n^2 + O(n), but it often means |f(n) - n^2| <= O(n). Note that using inequalities here again removes this ambiguity.

3 months ago 0 0 1 0

This sometimes removes ambiguity. E.g., f(n) = poly(n) is ambiguous. This can mean that f(n) is upper bounded by a polynomial in n, that it's lower bounded, or both. I think one should write f(n) <= poly(n), f(n) >= poly(n), and f(n) = poly(n) to distinguish these cases.

3 months ago 0 0 1 0

For more complicated expressions like f(n) <= 1/(1-O(1/x)) or whatever, the inequality is useful to help the reader know whether an upper bound or a lower bound is implied by the notation.

3 months ago 2 0 1 0

I think this notation makes it much clearer what we mean. It also solves the annoying problem that something like "f(n) = O(n^2)" is an abuse of the equals sign, which pedants (like me) often complain about.

3 months ago 0 0 1 0
A simple and modest proposal for improving asymptotic notation | Solipsist's Log

I wrote up a little blog post proposing a slightly different way to write asymptotic notation. www.solipsistslog.com/a-simple-and...

In short, I think asymptotic notation should usually be written with an INequality. E.g., f(n) <= O(n^2), f(n) < o(log n), f(n) > 2^{-o(n)}, f(n) >= n^{-O(1)}, etc.

3 months ago 13 5 4 0
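To make the proposal above concrete, here is a small LaTeX sketch (with made-up functions f, g, h) contrasting the usual "one-way equality" with the proposed inequality style:

```latex
% Traditional: a "one-way equality" that only reads left to right.
f(n) = O(n^2)

% Proposed: the (in)equality sign tells the reader which direction
% the bound goes, with no abuse of "=".
f(n) \le O(n^2), \qquad
f(n) <   o(\log n), \qquad
f(n) >   2^{-o(n)}, \qquad
f(n) \ge n^{-O(1)}
```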

On brand!

3 months ago 2 0 0 0

To kick off 2026, here's a quick thread on a few mountain ascents from 2025, starting at home in Boulder, Colorado with Mount Sanitas (6,798') at night. 1/4

3 months ago 10 1 2 0

This isn't exactly what you want, but the composition series seems closely related.

4 months ago 0 0 1 0

Let me finish with a fun technical tool: We show an AM protocol that *upper* bounds the image size of a circuit C. This is kind of dual to the celebrated Goldwasser-Sipser protocol, which gives an AM protocol that *lower* bounds this (or any NP set). This seems likely to have other applications!

4 months ago 1 0 0 0

Avoid could be such a problem itself. However, our work doesn't rule out a reduction from a hard decision problem that lies in AM ∩ co-AM---even a well-studied problem like, say, factoring.

4 months ago 0 0 1 0

A key subtlety here is the distinction between search and decision. As a trippy example, there exist search problems A such that (1) every *decision* problem that reduces to A is efficiently solvable; but (2) A itself is not efficiently solvable. (It's a fun exercise to come up with such a problem.)

4 months ago 0 0 1 0

For example, this immediately implies that the (co-NP-complete) problem of verifying solutions to Avoid does *not* reduce to Avoid unless PH collapses, which is just trippy! (Lots of things about Avoid are quite trippy!)

4 months ago 0 0 1 0

We show that this is unlikely. For example, we show that NP-hardness of Avoid would collapse the polynomial hierarchy (to AM). More generally, we show that any decision problem that reduces to Avoid is in AM ∩ co-AM. (This works even for adaptive, randomized reductions.)

4 months ago 1 0 1 0

Basically, Korten showed that an algorithm for Avoid would allow us to derandomize *many* randomized constructions.

However, this left open the question of whether we can simply reduce a well-studied, presumed-hard problem to Avoid to show that it is hard. Ideally, we'd like to show NP-hardness.

4 months ago 0 0 1 0

Avoid is not only weird, but also important. E.g., Korten and Jeřábek showed that it is in FP^{NP} if and only if E^{NP} requires 2^{\Omega(n)}-size circuits!! And, Korten showed that an algorithm for Avoid would yield nearly optimal two-source extractors, rigid matrices, hard truth tables, etc.

4 months ago 0 0 1 0

I find this problem to be endlessly fascinating because of how weird it is.

For example, Avoid is clearly hard in some sense, since just *verifying* a solution is coNP-complete. On the other hand, Avoid is clearly easy in some sense, since a random string is a solution with probability 1/2.

4 months ago 0 0 1 0
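The "easy in some sense" half of the post above is a pure counting argument: the image of C : {0,1}^n -> {0,1}^{n+1} has at most 2^n elements, so at least half of {0,1}^{n+1} lies outside it. A tiny sanity check, using a random lookup table (a hypothetical stand-in for a circuit):

```python
from itertools import product
import random


def random_map(n, seed=0):
    """A random map {0,1}^n -> {0,1}^(n+1), standing in for a circuit C."""
    rng = random.Random(seed)
    table = {x: tuple(rng.randint(0, 1) for _ in range(n + 1))
             for x in product((0, 1), repeat=n)}
    return table.__getitem__


n = 4
C = random_map(n)
image = {C(x) for x in product((0, 1), repeat=n)}

# |image| <= 2^n, so at least half of {0,1}^(n+1) is outside the image,
# i.e., a uniformly random y is a solution with probability >= 1/2.
non_image_fraction = 1 - len(image) / 2 ** (n + 1)
assert non_image_fraction >= 0.5
```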

The input to Avoid is a circuit C : {0,1}^n -> {0,1}^{n+1}. Since C is expanding, there must be some string (in fact, many strings) y \in {0,1}^{n+1} outside the image of C. The goal is to find such a y.

4 months ago 0 0 1 0
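The definition above can be sketched in a few lines. This brute-force solver (with a made-up toy map standing in for the input circuit) enumerates the image and returns a string outside it; the whole point of the problem is that nothing remotely this slow should be necessary, yet no efficient algorithm is known:

```python
from itertools import product


def toy_circuit(bits):
    """A stand-in for a circuit C : {0,1}^3 -> {0,1}^4 (hypothetical example):
    interpret the input as an integer x and output (5x + 3) mod 16 in binary."""
    x = int("".join(map(str, bits)), 2)
    y = (x * 5 + 3) % 16
    return tuple((y >> i) & 1 for i in reversed(range(4)))


def brute_force_avoid(C, n):
    """Find some y in {0,1}^(n+1) outside the image of C, by enumeration.

    Exponential time: we evaluate C on all 2^n inputs and then scan
    {0,1}^(n+1) for a non-image string, which must exist since
    |image| <= 2^n < 2^(n+1) (pigeonhole).
    """
    image = {C(x) for x in product((0, 1), repeat=n)}
    for y in product((0, 1), repeat=n + 1):
        if y not in image:
            return y
```

Verifying a claimed solution y, by contrast, means checking that *no* input maps to y, which is exactly the coNP-complete flavor mentioned earlier in the thread.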