I saw this, wanted to dunk on it, but was too lazy to read it. So thank you for doing it.
Posts by Thomas Steinke
In my mind, "millennials" still means young people, but in reality the youngest millennials are turning 30 this year.
bsky.app/profile/radi...
This is a challenging legal problem for NeurIPS (and other conference participants)! You might be wondering: how is this possible given the First Amendment?
I wrote a quick explainer on the current status quo of relevant First Amendment cases & law to get you up to speed.
🔗👇
My parents are the opposite: They check in, drop bags, & then go wait at home.
"Boarding is in 10min."
"OK, let's eat dessert."
"No, we need to drive to the airport now!"
"Relax, they have your bags; they won't leave without you."
*30 min later*
"Final call for passenger Steinke."
Stressmaxxing...
The injected prompts asked the LLM to include certain phrases in the review. These phrases seemed quite generic to me and they could appear organically. So I'm not entirely sure how they can be confident that these desk rejections aren't false positives.
When I say this is a mess, I mean I have no idea how this situation is going to evolve. Conference organizers are facing a deluge of papers and of poor-quality reviews. I really don't know how they should handle this. They're clearly experimenting here (which is good, but also risky).
There's a weird caveat: Reviewers are allowed to use LLMs if they accept a policy that allows their own papers to be LLM-reviewed. Basically, ICML was running an experiment with two tracks -- one forbidding LLM reviews and the other permitting them.
ICML requires authors to serve as reviewers. Reviewers aren't supposed to use LLMs (with caveats 🧵). ICML used prompt injection to catch reviewers using LLMs. And now they have desk rejected the papers submitted by those reviewers.
What a mess...
Schools have ruined St. Patrick's Day and Valentine's Day for parents.
E.g., this morning I had to prepare lunch for a random other kid in my kid's class. What does that have to do with St. Patrick?
Which task should I do first?
(i) Send reminder emails to late reviewers.
(ii) Submit my own late reviews.
https://truthsocial.com/@realDonaldTrump/posts/116189925042301817
https://truthsocial.com/@realDonaldTrump/posts/116227904143399817
https://www.bbc.com/news/articles/c9dn3j04lydo Trump accuses Starmer of seeking to 'join wars after we've already won'
https://www.bbc.com/news/live/ckg1w1jp8kjt Trump urges UK and other nations to send ships to help secure Strait of Hormuz after Iranian attacks
How it started / how it's going.
Google Translate has been using LLMs (or their precursors) for almost a decade, and I think it's hard to argue that this service isn't useful.
Screenshot of a bsky post with the author's identity not shown. [I'm not trying to pile on.] The post reads: "Sure, LLMs are useful for: 1. Fraud 2. Plagiarism 3. Cognitive off-loading. Which of those use-cases are you promoting?" Posted 3:59pm, March 10, 2026; 51 reposts, 14 quotes, 257 likes, 6 saves.
Some people are still debating whether LLMs are "useful,"
so let's stake out one clear use case: LLMs are useful for translating between languages.
That includes translating between natural languages (e.g., Spanish to English) and, more recently, from natural languages to formal languages (e.g., English to Python).
It's interesting how much you end up learning about your own papers when preparing a talk about them. Condensing things into slides really clarifies your own understanding.
When citing a conference publication, it's important to include the date and location of the conference it was presented at for the benefit of readers with a time machine.
I just discovered that someone blocked me almost certainly because of this post. Lol. 😂
🇺🇦
Owning a vineyard is just farming for rich people.
We want to evaluate $$ \sum_{\color{red}k=0}^\infty (\color{red}k+1) \color{blue}p^{\color{red}k}\,. $$

Introduce the function $f$, for $|\color{blue}x|<1$: $$ f(\color{blue}x) = \sum_{\color{red}k=0}^\infty \color{blue}x^{\color{red}k}\,. $$

That's a nice geometric series, and we easily get $f(\color{blue}x) = \frac{1}{1-\color{blue}x}$. So we can differentiate that: $$ f'(\color{blue}x) = \frac{1}{(1-\color{blue}x)^2} $$

But $f$ was defined as a power series, and we can also differentiate *that* termwise: $$ f'(\color{blue}x) = \sum_{\color{red}k=1}^\infty \color{red}k \color{blue}x^{\color{red}{k-1}} = \sum_{\color{red}k=0}^\infty {(\color{red}k+1)} \color{blue}x^{\color{red}{k}}\,. $$

Well, $f'(\color{blue}x)= f'(\color{blue}x)$ (!), so we can use both expressions and evaluate them at $\color{blue}p$: $$ \boxed{\sum_{\color{red}k=0}^\infty {(\color{red}k+1)} \color{blue}p^{\color{red}{k}} = \frac{1}{(1-\color{blue}p)^2}} $$
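Not part of the original thread, but a quick numerical sanity check of the boxed identity is easy to run; the helper names below are my own:

```python
from math import isclose

def partial_sum(p, terms=10_000):
    """Partial sum of sum_{k>=0} (k+1) * p^k, truncated after `terms` terms."""
    return sum((k + 1) * p**k for k in range(terms))

def closed_form(p):
    """The closed form 1 / (1 - p)^2 obtained by differentiating the geometric series."""
    return 1 / (1 - p) ** 2

# Check agreement for a few values of p in (0, 1); the tail decays
# geometrically, so 10,000 terms is far more than enough.
for p in (0.1, 0.5, 0.9):
    assert isclose(partial_sum(p), closed_form(p), rel_tol=1e-9)
```

For instance, at $p = 0.5$ both sides equal $1/(0.5)^2 = 4$.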
Let's say you want, e.g., to compute the expectation of a Geometric r.v. That'll involve, at some point, evaluating a series of the form "Σ (k+1) p^k" which looks like what Lovecraft may have done to a geometric series. How to do it?
One trick I enjoy: differentiate the same function, in two ways!
Picture of a car license plate with the number obscured. The plate says: CHAHTA SIA HOKE! In God We Trust. Choctaw Nation of OKLAHOMA.
I like spotting license plates from interesting states. This one was notably interesting.
The conclusion to this story is that I planned to return the empty package, but someone threw it in the recycling before I did so. $20 down the drain...
I asked Gemini to reformat my algorithm for {algorithm2e}. It looks ugly, but it works.
I confess that I think it's actually a good meme, but the people who post it tend to misunderstand who they are and where they are in the cycle.
Anthropic and OpenAI are both losing money and they know this can't go on forever, but they are locked in a race to develop better and better AI. OpenAI is trying to make some money with ads. Anthropic is basically looking at OpenAI, smiling, and asking "you feeling tired already?"
Proprietary models could effectively disappear. Not sure why that would happen, but some people want to heavily regulate them.
Your #NeurIPS2026 reviews ask you to compare to five papers on clawxiv.org
What do you do?
Thank you for your comments that add nuance and depth to the discussion, while also confirming my high-level understanding of the paper.
Thanks for confirming my beliefs about the paper. (I haven't read beyond the headline.)