
Posts by Csaba Szepesvari


Stay tuned for updates by following us or the organizers:
Alberto Metelli, @antoine-mln.bsky.social, Dirk van der Hoeven, Felix Berkenkamp, Francesco Trovò, @gioramponi.bsky.social, Marco Mussi, @skiandsolve.bsky.social, and @tillfreihaut.bsky.social

8 months ago 5 1 0 0

Actually, not only standard notation, but also to be able to speak about the loss (= log-loss) used to train today's LLMs.

9 months ago 0 0 0 0

No, it is not information retrieval. It is deducing new things from old things. You can do this by running a blind, breadth-first (unintelligent) search producing all proofs of all possible statements. You just don't want errors. But this is not retrieval; it is computation.

9 months ago 1 0 0 0
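The blind breadth-first search mentioned above can be sketched in a few lines. This is my own toy illustration (the rewrite system and all names are made up, not from the post): starting from axioms, exhaustively apply inference rules, and everything you enumerate is derivable by construction.

```python
from collections import deque

def enumerate_theorems(axioms, rules, max_steps):
    """Blind breadth-first enumeration: derive new statements from old
    ones by exhaustively applying inference rules. No heuristics, no
    intelligence -- but every statement produced is derivable, so there
    are no errors by construction."""
    seen = set(axioms)
    frontier = deque(axioms)
    for _ in range(max_steps):
        if not frontier:
            break
        s = frontier.popleft()
        for rule in rules:
            for t in rule(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return seen

# Toy rewrite system: from "I" we may append "0" or double the string
# (length-capped so the enumeration stays finite).
rules = [lambda s: [s + "0"] if len(s) < 4 else [],
         lambda s: [s + s] if len(s) < 4 else []]
theorems = enumerate_theorems({"I"}, rules, max_steps=100)
```

This is computation, not retrieval: "I0I0" is in the output even though it was never stored anywhere, only derived.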

Of course approximations are useful. The paper is narrowly focused on deductive reasoning, which seems to require the exactness we talk about. The point is that regardless of whether you use quantum mechanics or Newtonian mechanics, you don't want your derivations to be mistake-ridden.

9 months ago 2 0 0 0

Worst-case vs. average case: yes!
But I would not necessarily connect these to minimax vs. Bayes.

9 months ago 0 0 0 0

Yeah, admittedly, not a focus point of the paper. How about this: if the model produces a single response, the loss is the zero-one loss. Then the model had better choose the label with the highest probability, which is OK. The point of having mu: not much, just matching standard notation.

9 months ago 0 0 1 0
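The zero-one-loss point above can be made concrete with a small sketch (my own example, not from the post): the expected 0-1 loss of predicting a label is just the probability of being wrong, so minimizing it picks the argmax of the conditional label distribution.

```python
import numpy as np

def expected_zero_one_loss(probs, predicted):
    """Expected 0-1 loss of predicting `predicted` when the true label
    is drawn from the distribution `probs`: probability of being wrong."""
    return 1.0 - probs[predicted]

probs = np.array([0.2, 0.5, 0.3])  # conditional label distribution
losses = [expected_zero_one_loss(probs, k) for k in range(len(probs))]
best = int(np.argmin(losses))      # same as the argmax of probs
```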

I am curious about these examples.. (and yes, I can construct a few, too, but I want to add more)

9 months ago 0 0 0 0

No, this is not correct: learning 1[A>B] interestingly has the same complexity (provably). This is because 1[A>B] is in the "orbit" of 1[A>=B]. So the symmetric learner that is being taught 1[A>B] needs to figure out that it is not being taught 1[A>=B].

9 months ago 0 0 1 0

Maybe. I am asking for much less here from the machines. I am asking for them just to be correct (or stay silent). No intelligence, just good old-fashioned computation.

9 months ago 0 0 1 0

the solution is found..

9 months ago 0 0 0 0

Yes, transformers do not have "working memory". Also, I don't believe that using them in AR mode is powerful enough for challenging problems. In a way, without "working memory" or an external "loop", we say the model should solve problems by free association ad infinitum, or at least until

9 months ago 1 0 1 0

On the paper: interesting, but indeed there is little in common. On the problem studied in the paper: would not a slightly more general statistical framework solve your problem? I.e., measure error differently than through the prediction loss (AR models: parameters, spectral measure, etc.).

9 months ago 0 0 0 0

Yeah, I don't see the exactness happening that much on its own through statistical learning. Neither experimentally, nor theoretically. We have an example illustrating this: use the uniform distribution for good coverage, and teach transformers to compare m-bit integers using GD. This needs 2^m examples.

9 months ago 0 0 3 0
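A quick way to build intuition for the m-bit comparison example (my own toy experiment, not the paper's construction): under uniform pairs, the comparison is decided by the first differing bit, and bit i from the top decides with probability about 2^-(i+1), so the low-order bits are almost never exercised by the training distribution.

```python
import random

def decisive_bit(a, b, m):
    """Index (0 = most significant) of the first bit where a and b differ,
    or None if a == b."""
    for i in range(m):
        bit = m - 1 - i
        if (a >> bit) & 1 != (b >> bit) & 1:
            return i
    return None

# Empirically: the top bit decides about half of all uniform pairs, and
# each lower bit decides exponentially more rarely -- one intuition for
# why pinning down exact behavior on all bits needs ~2^m examples.
m = 16
random.seed(0)
counts = [0] * m
trials = 100_000
for _ in range(trials):
    a, b = random.randrange(2**m), random.randrange(2**m)
    i = decisive_bit(a, b, m)
    if i is not None:
        counts[i] += 1
```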

Yeah, we cite this and this was a paper that got me started on this project!

9 months ago 1 0 0 0
Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence Sound deductive reasoning -- the ability to derive new knowledge from existing facts and rules -- is an indisputably desirable aspect of general intelligence. Despite the major advances of AI systems ...

First position paper I ever wrote. "Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence" arxiv.org/abs/2506.23908 Background: I'd like LLMs to help me do math, but statistical learning seems inadequate to make this happen. What do you all think?

9 months ago 52 9 4 1

Our seminars are back. If you missed Max's talk, it is on YouTube, and today I will host Jeongyeol from UWM, who will talk about the curious case of why latent MDPs, though scary at first sight, might be tractable! Link to the seminar homepage:
sites.google.com/view/rltheor...

11 months ago 23 3 0 0

Glad to see someone remembers these:)

1 year ago 7 0 0 0

should be distinguished. The reason they should not be is that they are indistinguishable. So at least those need to be collapsed. So yes, one can start with redundant models, where it will appear that you could have epistemic uncertainty, but this is easy to rule out. 2/2

1 year ago 0 0 0 0

I guess with a worst-case hat on, we just all die :) In other words, indeed, the distinction is useful inasmuch as the modelling assumptions are valid. And there the mixture of two Diracs over 0 and 1 actually is a bad example, because that says that two models that are identical as distributions 1/x

1 year ago 0 0 1 0

I guess I stop here:) 5/5

1 year ago 0 0 0 0

Well, yes, to the degree that the model you use correctly reflects what's going on. Example: drug trials with randomized patient allocation; the result is effectiveness. The meaning of aleatoric and epistemic uncertainty should be clear, and they help with explaining the outcomes of the trial. 4/x

1 year ago 0 0 1 0

If one observes 1, there is epistemic uncertainty (the model could be the first or the second). Of course, nothing is ever black and white like this. And we talk about models here. Models are.. made up.. The usual blurb about the usefulness of models applies. Should you care about this distinction? 3/x

1 year ago 0 0 1 0

Epistemic uncertainty refers to whether, given the data (and prior information), we can surely identify the data-generating model. Example: the model class has two distributions; one has support {0,1}, the other has support {1}. One observes 0. There is no epistemic uncertainty. 2/X

1 year ago 0 0 1 0
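The two-distribution example above fits in a few lines of code (a toy sketch of my own; the model names are made up): epistemic uncertainty is about which candidate models remain consistent with the observations.

```python
# Two candidate models over outcomes {0, 1}: P1 supports both outcomes,
# P2 only supports 1.
models = {
    "P1": {0: 0.5, 1: 0.5},  # support {0, 1}
    "P2": {0: 0.0, 1: 1.0},  # support {1}
}

def consistent(models, observations):
    """Models that assign positive probability to every observation."""
    return {name: p for name, p in models.items()
            if all(p[o] > 0 for o in observations)}

# Observing 0 rules out P2: the data-generating model is identified, so
# no epistemic uncertainty remains (only P1's aleatoric coin flip).
survivors_after_0 = consistent(models, [0])
# Observing 1 leaves both models in play: epistemic uncertainty remains.
survivors_after_1 = consistent(models, [1])
```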

I don't get this:
In the context of this terminology, data comes from a model. Aleatoric uncertainty refers to the case when this model is not a Dirac! In the second case, the model is a mixture of two Diracs, which is not a Dirac. Hence, there is aleatoric uncertainty. 1/X

1 year ago 0 0 1 0
NSERC - Latest News - Launch of the new Harmonized Tri-agency Scholarship and Fellowship programs As announced in Budget 2024, the scholarship and fellowship programs administered by the three federal research funding agencies – the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (NSERC), and the Social Sciences and Humanities Research Council (SSHRC) – have been streamlined into a new harmonized talent program called the Canada Research Training Awards Suite (CRTAS) that will open for applications in summer 2025.

This is a very significant development: more fellowships, harmonized and typically higher stipends, and international students can apply.

#CanPoli

www.nserc-crsng.gc.ca/NewsDetail-D...

1 year ago 38 16 2 3

Dylan J. Foster, Zakaria Mhammedi, Dhruv Rohatgi: Is a Good Foundation Necessary for Efficient Reinforcement Learning? The Computational Role of the Base Model in Exploration https://arxiv.org/abs/2503.07453 https://arxiv.org/pdf/2503.07453 https://arxiv.org/html/2503.07453

1 year ago 5 5 1 0

But also, we are how we act! So it's up to all of us to behave so as to make the statement true.

1 year ago 0 0 0 0
From the nonononoyes community on Reddit: He was there for a while

Who says mountain car is a toy problem? www.reddit.com/r/nonononoye...

1 year ago 7 0 0 0

Yes, another gem from Rich!

1 year ago 1 0 0 0
TURING AWARD WINNER Richard S. Sutton in Conversation with Cam Linke | No Authorities in Science (YouTube video by Amii)

www.youtube.com/watch?v=9_Pe... An interview with Rich. The humility of Rich is truly inspiring: "There are no authorities in science". I wish people would listen and live by this.

1 year ago 40 13 2 1