A small library that packages code from my recent blog posts to simplify PyTorch experiments on small datasets and small models that fit into (GPU) memory. Just `pip install fitstream`
code: github.com/alexshtf/fit...
docs: fitstream.readthedocs.io/en/stable/
Posts by Alex Shtoff
I started exploring the idea of using matrix eigenvalues as the "nonlinearity" in models, and wrote a second post in the series exploring the scaling, robustness, and interpretability properties of this kind of model. The key ingredient: feature-matrix spectral norms.
alexshtf.github.io/2026/01/01/S...
Or distance from KKT conditions.
4) convergence of deviation from optimality conditions.
I heard that polynomials are the (complex) root of all evil.
Nicely written blog post by David Eppstein on the Boyer–Moore (deterministic) streaming algorithm to find a majority element in a stream, and its extensions, first to the turnstile model, and then to frequency estimation (Misra–Gries).
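A quick sketch of the two algorithms mentioned above (my own toy illustration, not code from the linked post — standard textbook formulations):

```python
def majority_candidate(stream):
    """Boyer-Moore: one pass, O(1) memory. Returns the majority element
    if one exists; the candidate must be verified with a second pass."""
    candidate, count = None, 0
    for x in stream:
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    return candidate


def misra_gries(stream, k):
    """Misra-Gries generalization: at most k-1 counters. Every element
    with frequency > n/k survives in the output (verify counts in a
    second pass)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement all counters; drop the ones that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters


print(majority_candidate([1, 2, 1, 1, 3, 1]))  # 1
print(misra_gries([1, 1, 2, 1, 3, 1], k=2))    # 1 survives as a candidate
```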
11011110.github.io/blog/2025/05... via @theory.report
The Matrix Mortality Problem asks whether some finite product of matrices from a given set of square matrices equals the zero matrix. It is undecidable for matrices of size 3x3 or larger. buff.ly/lLmvvlo
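A toy instance of the question (just to illustrate what "mortal" means; the undecidability is about the general 3x3 case, not this example):

```python
import numpy as np

# Is some finite product of matrices from the set equal to zero?
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]])

# This set {A, B} is mortal: the product B @ B is the zero matrix.
print(np.array_equal(B @ B, np.zeros((2, 2))))  # True
```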
Attending #ICLR2025?
Visit our poster!
A stochastic approach to the subset selection problem via mirror descent.
Today, 3pm, poster #336.
Indistinguishable from magic*
Don't care who uses Meta services. Don't like when people invent imaginary threats to their privacy and spread them. Don't want to use it - don't use it. Want to opt out and keep using it - don't be afraid they won't comply. Meta is afraid of the legal and reputational consequences. That's my opinion.
Used to work for Yahoo. Not a giant like Meta, but it also used plenty of user data to make money. Not complying with regulation was always a big no-no. These companies are very afraid of the legal and reputational consequences. So I wouldn't be afraid they won't comply.
The phenomenal paper "Epigraphical Analysis" by Attouch and Wets was the basis for my Ph.D. thesis. It was fun digging deep into epi-convergence.
Yes, I understand. They should cite and criticize it.
Why not simply cite the directly relevant prior work?
A question to the #math people here. For differential equations there are spectral methods that find approximate solutions in the span of orthogonal bases. Is there a variant for difference equations, and bases of sequences? A good tutorial maybe?
The Tarski–Seidenberg theorem, in its logical form, states that the first-order theory of the real numbers admits quantifier elimination: any formula with quantifiers can be converted into an equivalent quantifier-free formula. perso.univ-rennes1.fr/michel.coste...
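The classic textbook example of what quantifier elimination buys you: when does a real quadratic have a real root?

```latex
\exists x \in \mathbb{R}:\; x^2 + b x + c = 0
\quad\Longleftrightarrow\quad
b^2 - 4c \ge 0.
```

The left-hand side quantifies over $x$; the right-hand side is a quantifier-free condition on the coefficients $(b, c)$ alone, exactly as the theorem promises.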
🚨New post🚨
@beenwrekt.bsky.social recently stirred up a bit of noise with his post about the nonexistence of overfitting, and he has a point. In this post we explore it using simple polynomial curve fitting, *without regularization*, using another interesting basis.
alexshtf.github.io/2025/03/27/F...
What makes it a method for "fine-tuning LLMs" rather than a method for fine-tuning any neural network in general?
Is it true that log(1+exp(x)) is the infimum over a of the quadratic upper bound?
If so - it also has interesting consequences.
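A quick numerical sanity check of the claim, assuming the quadratic bound in question is the standard Bohning-type majorizer log(1+e^a) + sigma(a)(x-a) + (x-a)^2/8 (tight at x = a, since the second derivative of softplus is at most 1/4):

```python
import math

def softplus(x):
    return math.log1p(math.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def quad_bound(x, a):
    # Quadratic upper bound tangent at a; curvature 1/8 majorizes
    # softplus''(x) / 2 = sigmoid(x) * (1 - sigmoid(x)) / 2 <= 1/8.
    return softplus(a) + sigmoid(a) * (x - a) + 0.125 * (x - a) ** 2

# The bound is tight at a = x, so the pointwise infimum over a
# should recover softplus itself (up to grid resolution).
for x in [-3.0, -0.5, 0.0, 1.7, 4.0]:
    inf_over_a = min(quad_bound(x, a / 100) for a in range(-800, 801))
    assert abs(inf_over_a - softplus(x)) < 1e-3
```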
Or maybe there's a cultural difference: black people may be more afraid of not returning a loan, and may do extreme things, such as using the last of their savings, to return it.
This paper seems to focus too much on estimation, and ignores the complexities of modeling.
As beautiful as I can remember it.
You can get away without - theory papers.
Models reflect training data. Training data reflects people.
Models are just fancy autocomplete.
People have free will, and are not fancy autocomplete of what a model has shown them.
The numpy function there doesn't use SGD. To the best of my knowledge, it uses QR decomposition.
Anyway, things get interesting when the degree becomes 200 :)
Linear regression with Legendre polynomials:
colab.research.google.com/drive/1phA7N...
Inspired by Ben's post about nonexistent overfitting; put together to convince my coworkers.
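The gist of the experiment, as a standalone sketch (the data and degree here are made up, not taken from the linked notebook):

```python
import numpy as np

# Least-squares fit in the Legendre basis on [-1, 1]. numpy solves a
# (scaled) linear least-squares problem here, not SGD.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(200)

deg = 50  # high degree, no regularization
coeffs = np.polynomial.legendre.legfit(x, y, deg)
y_hat = np.polynomial.legendre.legval(x, coeffs)
print(np.mean((y - y_hat) ** 2))  # small training error despite the high degree
```

The orthogonal Legendre basis keeps the design matrix well-conditioned, which is what lets the degree climb high without the fit blowing up.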
Stochastic people.
But, somehow, there is a "uniform prior" over the integers, according to some people I met :)
People underappreciate work that "just works" with current software stacks.
Great paper!
For example, spectral methods are good at solving problems where they yield a linear system in the coefficients.
Isn't it like this in, say, numerical differential equations?