
Posts by Hidde Fokkema

I am not particularly bullish, but I do think that Google will eventually pull ahead: they are the only player that is not entirely dependent on large cash injections, thanks to their other revenue streams, and I think that eventually the other companies will not be able to keep catering to their investors

3 weeks ago 1 0 0 0

Go check out the package I wrote for the robust regression and estimation procedure that Mathieu Gerber and @pierrealquier.bsky.social developed based on the Maximum Mean Discrepancy principle!

1 month ago 3 1 0 0
Preview
2 PhD Positions on Learning Causally Grounded Concepts for Safe AI Are you interested in improving the interpretability, robustness and safety of AI by integrating causal reasoning? The Causality team in the AMLab group at the University of Amsterdam is looking for 2...

🚨2 PhD positions with me @amlab.bsky.social on learning causally grounded concepts 🚨

Are you interested in improving the #interpretability #robustness and #safety of AI by integrating #causal reasoning? Join us in beautiful Amsterdam 🇳🇱🌷🚲

Deadline: 20 April

www.academictransfer.com/en/jobs/3593...

1 month ago 20 14 0 1
Post image

Excited for Damien’s seminar talk this Thursday!🚀

Steering is an exciting area in interpretability—but how strongly should we steer?

Damien will present a theory of steering strength: choosing the right magnitude of representation change—not too weak, not too strong
tverven.github.io/tiai-seminar/

2 months ago 2 1 0 2
Post image

At #NeurIPS in San Diego this week? Interested in XAI, causality, or performative prediction? Come visit our poster!

💬 Performative Validity of Recourse Explanations
📆 Wednesday, 4.30 pm, Poster Session 2
w/ Hidde Fokkema, Timo Freiesleben, Celestine Mendler-Dünner, Ulrike von Luxburg

4 months ago 11 3 0 0
The Central Challenge in Explainable AI: Channel Capacity Explainable AI is about communication: we want to tell people how or why a machine learning model is making certain decisions. Why is this so difficult? In this post I take an information-theoretic pers...

When reading a large literature, it is really helpful to have opinionated views that help you categorize papers.

One of mine for explainable AI is that methods need to address a fundamental limit in how much information can be communicated.

Blog post: www.timvanerven.nl/blog/xai-com... (no math)

5 months ago 8 4 0 0
Ockham's Razor and Bayesian Analysis on JSTOR William H. Jefferys, James O. Berger, Ockham's Razor and Bayesian Analysis, American Scientist, Vol. 80, No. 1 (January-February 1992), pp. 64-72

I did some googling and this article has a surprisingly nice and pedagogical discussion on this, with a similar conclusion to your idea.

tinyurl.com/52a3whac

And I found that I missed the opportunity to make the joke that the posterior of the simpler model is "Sharper", keeping the razor theme.

5 months ago 1 0 1 0

(2/2) if we see the complicated model and simple model as 2 different hypothesis classes, with 2 separate priors, then the posterior for the more complicated class will be flatter than the posterior of the simple class, which is what you want, I think.

5 months ago 1 0 1 0

(1/2) Fair point. My point was that anything Bayesian is prior-related, so with the correct prior you could at least recover Ockham's razor, but not really derive it. But my thinking is a bit different, as in my points above the hypothesis classes are the same.

In your idea, if ..

5 months ago 2 0 1 0

(7/n=7) So, in the end, you can get Ockham's razor if your prior is that simple explanations (read: explanations with fewer parameters) are more likely than complicated ones. For binary parameters you could write the prior explicitly. For real-valued parameters this becomes impossible (I am guessing)

5 months ago 2 0 1 0

(6/n) Now if you really want to derive Ockham's razor, in the sense of minimum assumptions, or really the number of parameters, you would need a prior distribution that assigns more probability mass to simple models.

5 months ago 1 0 1 0

(5/n) Similarly, if β ~ Laplace(0, b), then you get the Lasso objective

min ||y - <β, x>||^2 + λ||β||_1

where we now have the 1-norm as regularization penalty. This one has the added benefit that irrelevant parameters are set exactly to 0, which resembles the original Ockham's razor principle more closely

5 months ago 1 0 1 0
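The sparsity claim above can be checked with a quick sketch (a made-up example; it assumes scikit-learn's Lasso, which minimises (1/(2n))||y - Xβ||² + α||β||₁, i.e. MAP estimation under a Laplace prior): the coefficients of irrelevant features come out exactly 0.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
# Only features 0 and 2 actually matter; the other three are irrelevant
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0]) + 0.05 * rng.normal(size=200)

# scikit-learn's Lasso minimises (1/(2n)) ||y - X b||^2 + alpha * ||b||_1,
# which is MAP estimation under a Laplace prior on the coefficients
beta = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_
print(beta)  # the irrelevant coefficients are exactly 0
```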

(4/n) writing out the posterior and maximising it over the parameters (maximum a posteriori inference). How much you regularise is determined by σ, which is related to λ.

5 months ago 2 0 1 0

(3/n) Let's say we consider as possible models all linear models, and as complexity measure the Euclidean norm of the parameters. (This is ridge regression.) Then we would retrieve the optimisation problem:

min ||y - <β, x>||^2 + λ||β||^2

By assuming that β ~ N(0, σ^2) and ...

5 months ago 2 0 1 0
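A minimal numerical sketch of this correspondence (an illustrative example with made-up data): the closed-form ridge solution coincides with the minimiser of the penalised objective.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
lam = 1.0

# Closed-form minimiser of the ridge objective ||y - X b||^2 + lam * ||b||^2
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# The same objective minimised numerically; this is MAP estimation under a
# N(0, sigma^2) prior on beta, with lam determined by sigma and the noise level
obj = lambda b: np.sum((y - X @ b) ** 2) + lam * np.sum(b ** 2)
beta_map = minimize(obj, np.zeros(3)).x
print(beta_closed, beta_map)  # the two solutions agree
```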

(2/n) In particular, if you consider 2 models that explain the data equally well, but one makes fewer assumptions, and the number of assumptions is the complexity measure you consider, then this would give you the model with the fewest assumptions.

5 months ago 2 0 1 0

Sure! Here are some thoughts

(1/n) I would see Ockham's razor as the following optimisation problem:

min Error(data, model) + Complexity(model)

Where you minimise over all models.

5 months ago 2 0 1 0
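A toy sketch of this optimisation view (a hypothetical example of my own, using polynomial degree as the complexity measure): with a small penalty per parameter, minimising over all candidate models selects the simplest one that fits.

```python
import numpy as np

# Noise-free data from a quadratic, so Error drops to ~0 once degree >= 2
x = np.linspace(-1.0, 1.0, 20)
y = 1.0 - 2.0 * x + 3.0 * x ** 2

def razor_score(degree, lam=1e-3):
    # Error(data, model) + Complexity(model), with Complexity = lam * (#parameters)
    coeffs = np.polyfit(x, y, degree)
    err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    return err + lam * (degree + 1)

# Minimise over all models: degrees 3..5 also fit perfectly but pay more penalty
best_degree = min(range(6), key=razor_score)
print(best_degree)  # 2
```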

If you see Ockham's razor as a regularization mechanism, where you optimize to fit the data while penalising the parameters, then there are explicit connections. For example, ridge regression follows from assuming a Gaussian prior on the parameters, and Lasso regression follows from a Laplace prior

5 months ago 2 0 1 0
Preview
Theory of XAI Workshop Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law ...

Interested in provable guarantees and fundamental limitations of XAI? Join us at the "Theory of Explainable AI" workshop Dec 2 in Copenhagen! @ellis.eu @euripsconf.bsky.social

Speakers: @jessicahullman.bsky.social @doloresromerom.bsky.social @tpimentel.bsky.social

Call for Contributions: Oct 15

6 months ago 8 5 0 2
Post image

4th AI & Mathematics in NL workshop in Tilburg.

Many cool presentations: aimath.nl/index.php/20...

And great people:

10 months ago 14 3 0 0

Deadlines for PhD and Postdoc vacancies coming up: applications open until Monday June 2!

10 months ago 5 5 0 1
Tim van Erven's website

Now open: vacancies for a PhD or Postdoc position to develop "Mathematical Foundations for Explainable AI" with me.

This is a new research direction that I am very excited about, and which will really start to take off over the next few years.

Come join my group: www.timvanerven.nl#open-phd-and...

1 year ago 16 5 0 3
Preview
Two PhD positions on Flexible and User-adaptive statistical inference - Looking for a job that matters? You will develop a mathematical framework for multiple testing, enabling flexibility in study design, analysis, and model choice while retaining strong error guarantees. This means that researchers ca...

Jelle Goeman (Leiden University Medical Center) and I (University of Twente) have two PhD positions on e-values and multiple testing. The students will be co-supervised by both of us. A strong theoretical mathematical background is required.

utwentecareers.nl/en/vacancies...

1 year ago 8 4 0 0
Preview
NWO Vici grants awarded to six UvA/AMC researchers Six UvA and AMC academics have been awarded Vici grants worth up to €1.5 million by the Dutch Research Council (NWO), to pursue research into topics ranging from black holes to combatting obesity. The...

Hooray, I received a Vici grant from the Dutch science foundation!

Heads up for current PhD students in learning theory: I will have two postdoc positions available in Amsterdam on "learning theory for interpretable/explainable AI" in the coming years.

www.uva.nl/shared-conte...

1 year ago 32 5 5 1

Reposting to see if we can get some input on what the community is eager to see in the seminar:

1 year ago 6 3 0 0

Great talk by Jeremias Sulam in the interpretable AI seminar today, connecting feature and concept interpretability to hypothesis testing via E-values!

Recording available on our YouTube channel: youtu.be/cx7wTtRdhnA

Check out the seminar website for upcoming talks: tverven.github.io/tiai-seminar/

1 year ago 5 2 0 0

Oh, and the timing of the Q2B conference being this week probably also factors in, so they can hype it a bit more there

1 year ago 3 0 0 0

My guess would be because the nature version of the article was just published?

1 year ago 3 0 1 0
Post image

📢Theory of Interpretable AI Seminar📢

On Thursday, Depen Morwani from Harvard will present work on how **margin maximization** can explain observed phenomena in mechanistic interpretability!

⏲️Thursday Dec 5th, 4pm CET / 10am EST
🌐https://tverven.github.io/tiai-seminar/

1 year ago 3 1 0 1

Aren't these dual numbers? I think Julia has some autodiff packages based on this idea

1 year ago 2 0 1 0
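A minimal dual-number sketch (my own illustration of the idea, which I believe underlies Julia packages such as ForwardDiff.jl): carrying (value, derivative) pairs through arithmetic gives forward-mode automatic differentiation.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # function value
    der: float  # derivative w.r.t. the input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # Product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def derivative(f, x):
    # Seed the dual part with 1 to differentiate with respect to x
    return f(Dual(x, 1.0)).der

print(derivative(lambda x: x * x + 3 * x + 1, 2.0))  # f'(x) = 2x + 3, so 7.0
```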