
Posts by Alex Shtoff


A small library that packages code from my recent blog posts to simplify PyTorch experiments on small datasets and small models that fit into (GPU) memory. Just `pip install fitstream`

code: github.com/alexshtf/fit...
docs: fitstream.readthedocs.io/en/stable/

2 months ago
Robustness, interpretability, and scaling of eigenvalue models: We discuss mathematical properties that relate to the robustness and interpretability of eigenvalues as models, and demonstrate those by training on a tabular dataset. We obtain a family that improves...

I started exploring the idea of using matrix eigenvalues as the "nonlinearity" in models, and wrote a second post in the series where I explore the scaling, robustness, and interpretability properties of this kind of model. The key: spectral norms of the feature matrices.

alexshtf.github.io/2026/01/01/S...

3 months ago

Or distance from KKT conditions.

8 months ago

4) convergence of deviation from optimality conditions.

8 months ago

I heard that polynomials are the (complex) root of all evil.

9 months ago
Turnstile majority: A famous algorithm of Boyer and Moore for the majority problem finds a majority element in a stream of elements while storing only two values, a single tenta...

Nicely written blog post by David Eppstein on the Boyer–Moore (deterministic) streaming algorithm to find a majority element in a stream, and its extensions, first to the turnstile model, and then to frequency estimation (Misra–Gries).
11011110.github.io/blog/2025/05... via @theory.report
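For readers who haven't seen it, the classic (non-turnstile) Boyer–Moore vote is tiny. A sketch in Python (mine, not from the linked post):

```python
def boyer_moore_majority(stream):
    """Boyer-Moore majority vote: one tentative candidate plus a counter.

    If a strict majority element exists, this single pass finds it; the
    candidate must still be verified with a second pass.
    """
    candidate, count = None, 0
    for x in stream:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    return candidate

def majority(items):
    """Return the strict majority element of `items`, or None."""
    items = list(items)
    candidate = boyer_moore_majority(items)
    return candidate if items.count(candidate) > len(items) / 2 else None
```

The counter is the only state besides the candidate, which is what makes the turnstile and Misra–Gries extensions in the post interesting.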

11 months ago

The Matrix Mortality Problem asks whether some finite product of matrices from a given set of square matrices equals the zero matrix. It is undecidable for matrices of size 3x3 or larger. buff.ly/lLmvvlo
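A hypothetical brute-force illustration (my own sketch, not from the linked article): a bounded search can find a product that reaches zero, but undecidability means no such search can ever certify that none exists.

```python
import itertools
import numpy as np

def find_mortal_product(matrices, max_length=6):
    """Try all products of up to max_length factors from `matrices`.

    Returns the index tuple of a product equal to the zero matrix, or None.
    Undecidability of mortality (for 3x3 and up) means a bounded search
    like this one can only find witnesses, never prove their absence.
    """
    for length in range(1, max_length + 1):
        for idx in itertools.product(range(len(matrices)), repeat=length):
            prod = matrices[idx[0]]
            for i in idx[1:]:
                prod = prod @ matrices[i]
            if not prod.any():  # all entries zero
                return idx
    return None
```

For example, with the two projections `[[1,0],[0,0]]` and `[[0,0],[0,1]]`, the search finds the length-two product that annihilates everything.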

11 months ago

Attending #ICLR2025?
Visit our poster!
A stochastic approach to the subset selection problem via mirror descent.
Today, 3pm, poster #336.
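The poster's actual method is in the paper; as generic background only, here is a plain entropic mirror-descent (exponentiated-gradient) step on the probability simplex, a common continuous relaxation in subset-selection problems. All names and parameters below are illustrative, not the paper's:

```python
import numpy as np

def entropic_mirror_descent_step(p, grad, lr):
    """One exponentiated-gradient step: mirror descent with the entropy
    mirror map. The multiplicative update keeps the iterate on the
    probability simplex after renormalization."""
    w = p * np.exp(-lr * grad)
    return w / w.sum()

# Toy usage: minimize a linear loss <c, p> over the simplex.
# The mass concentrates on the coordinate with the smallest cost.
c = np.array([3.0, 1.0, 2.0])
p = np.full(3, 1.0 / 3.0)
for _ in range(200):
    p = entropic_mirror_descent_step(p, c, lr=0.1)
```

The entropy mirror map is what makes the simplex constraint free: the update never leaves the simplex, so no projection step is needed.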

11 months ago

Indistinguishable from magic*

11 months ago

Don't care who uses Meta services. Don't like it when people invent imaginary threats to their privacy and spread them. Don't want to use it - don't use it. Want to opt out and keep using it - don't be afraid they won't comply. Meta is afraid of the legal and reputational consequences. That's my opinion.

1 year ago

I used to work for Yahoo. Not a giant like Meta, but it also used plenty of user data to make money. Not complying with regulation was always a big no-no. These companies are very afraid of the legal and reputational consequences, so I wouldn't be afraid they won't comply.

1 year ago

The phenomenal paper "Epigraphical Analysis" by Attouch and Wets was the basis for my Ph.D. thesis. It was fun digging deep into epi-convergence.

1 year ago

Yes, I understand. They should cite and criticize it.

1 year ago

Why not simply cite the directly relevant prior work?

1 year ago

A question to the #math people here. For differential equations there are spectral methods that find approximate solutions in the span of orthogonal bases. Is there a variant for difference equations, and bases of sequences? A good tutorial maybe?
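To make the question concrete, here is the naive analogue I have in mind: expand the unknown sequence in an orthogonal polynomial basis and solve the difference equation for the coefficients by linear least squares. A sketch only; the basis, degree, and interval mapping are arbitrary choices, not an established method:

```python
import numpy as np
from numpy.polynomial import legendre

# Difference equation: u[n+1] - u[n] = 2n + 1 for n = 0..N-1, with u[0] = 0.
# Exact solution: u[n] = n**2. We expand u(n) = sum_k c_k P_k(t(n)) in
# Legendre polynomials, mapping n in [0, N] to t in [-1, 1].
N, deg = 20, 3
n = np.arange(N)

def basis(m, deg=deg):
    """Legendre basis matrix evaluated at the mapped points t(m)."""
    t = 2.0 * np.asarray(m, dtype=float) / N - 1.0
    return legendre.legvander(t, deg)

# One row per difference-equation constraint, plus the initial condition.
A = np.vstack([basis(n + 1) - basis(n), basis([0])])
b = np.concatenate([2 * n + 1.0, [0.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)
u = basis(n) @ c  # reconstructed sequence on the grid
```

Since the exact solution is a quadratic in n, it lies in the span of the basis and the least-squares residual is essentially zero, so the recovered sequence matches n² on the grid.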

1 year ago

The Tarski-Seidenberg theorem in logical form states that the set of first-order formulas over the real numbers is closed under quantifier elimination. This means any formula with quantifiers can be converted into an equivalent quantifier-free formula. perso.univ-rennes1.fr/michel.coste...
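A classic worked instance of quantifier elimination over the reals:

```latex
\exists x \in \mathbb{R}:\; x^2 + bx + c = 0
\quad\Longleftrightarrow\quad
b^2 - 4c \ge 0
```

The right-hand side is the promised quantifier-free formula, involving only the free variables b and c.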

1 year ago

🚨New post🚨

@beenwrekt.bsky.social recently stirred up a bit of noise with his post about the nonexistence of overfitting, but he has a point. In this post we explore it using simple polynomial curve fitting, *without regularization*, using another interesting basis.

alexshtf.github.io/2025/03/27/F...

1 year ago

What makes it a method for "fine-tuning LLMs" rather than a method for fine-tuning any neural network in general?

1 year ago

Is it true that log(1+exp(x)) is the infimum over a of the quadratic upper bound?
If so, it also has interesting consequences.
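Assuming the bound in question is the Jaakkola–Jordan quadratic bound on the softplus (a guess on my part; the thread doesn't say which bound), the claim can be checked numerically:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def jj_bound(x, a):
    """Jaakkola-Jordan quadratic upper bound on softplus, tight at x = +/-a.

    lam(a) = tanh(a/2) / (4a), with the limit lam(0) = 1/8 at a = 0.
    """
    lam = np.tanh(a / 2) / (4 * a)
    return lam * x**2 + x / 2 + a / 2 - lam * a**2 + softplus(-a)

x = np.linspace(-5, 5, 101)
a = np.linspace(0.01, 8, 2000)  # positive grid suffices: the bound is even in a
envelope = jj_bound(x[:, None], a[None, :]).min(axis=1)
```

Each fixed a gives a quadratic that lies above softplus everywhere and touches it at x = ±a, so minimizing over a recovers softplus itself, which is what the numerical envelope shows.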

1 year ago

Or maybe there's a cultural difference: black borrowers may be more afraid of not repaying a loan and may do extreme things, such as using the last of their savings, to repay it.

This paper seems to focus too much on estimation, and ignores the complexities of modeling.

1 year ago

As beautiful as I can remember it.

1 year ago

You can get away without - theory papers.

1 year ago

Models reflect training data. Training data reflects people.
Models are just fancy autocomplete.

People have free will, and are not a fancy autocomplete of what a model has shown them.

1 year ago

The numpy function there doesn't use SGD. To the best of my knowledge, it uses QR decomposition.
Anyway, things get interesting when the degree becomes 200 :)

1 year ago

Linear regression with Legendre polynomials:
colab.research.google.com/drive/1phA7N...

Inspired by Ben's post about nonexistent overfitting, to convince my coworkers.
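A minimal standalone version of the same experiment (an assumed setup; see the Colab for the actual notebook): fit a very high-degree Legendre expansion by plain least squares and observe that it stays tame on noisy data.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 300)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)

# Unregularized least squares in the Legendre basis, at a degree far
# beyond what conventional wisdom calls safe.
deg = 100
coef = legendre.legfit(x, y, deg)
fit = legendre.legval(x, coef)

# Error of the fit against the noiseless ground truth.
rmse = np.sqrt(np.mean((fit - np.sin(np.pi * x)) ** 2))
```

Despite 101 free coefficients and no regularization, the error against the clean signal stays small, which is the "nonexistent overfitting" point.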

1 year ago

Stochastic people.

1 year ago

But, somehow, there is a "uniform prior" over the integers, according to some people I met :)

1 year ago

People underappreciate work that "just works" with current software stacks.
Great paper!

1 year ago

For example, spectral methods are good at solving problems where the basis expansion yields a linear system in the coefficients.

1 year ago

Isn't it like this in, say, numerical differential equations?

1 year ago