
Posts by Uri Shalit

Gelman has something to this effect, starting from the bottom of p. 6 here (“There are (almost) no true zeros”)

bayes.cs.ucla.edu/BOOK-2K/gelm...

6 months ago
Vacancies

We are hiring a *PhD student* in my group to work on machine learning generalization "out-of-table". Help build methods that learn from large volumes of tabular data to generate models for new tasks! Apply here: www.chalmers.se/en/about-cha...

11 months ago
AI as Normal Technology

In a new essay from our "Artificial Intelligence and Democratic Freedoms" series, @randomwalker.bsky.social & @sayash.bsky.social make the case for thinking of #AI as normal technology, instead of superintelligence. Read here: knightcolumbia.org/content/ai-a...

1 year ago

They are graded on giving full proofs. LLMs are quite bad at that.

Though I’m sure there’s also quite a bit of test-data leakage going around

1 year ago

indeed!

1 year ago
[REPOST] Epistemic Learned Helplessness [This is a slightly edited repost of an essay from my old LiveJournal] A friend recently complained about how many people lack the basic skill of believing arguments. That is, if you have a valid a…

Not a big SSC fan but I liked his idea of epistemic learned helplessness

slatestarcodex.com/2019/06/03/r...

1 year ago

Have you worked recently on a cool topic 📚 and now you think it is time to teach it to the whole #UAI2025 community 🗣️?

If so, submit a proposal to give a #Tutorial in #Rio 🇧🇷!

👉 www.auai.org/uai2025/call...

🕓 deadline: Apr 14, 2025

1 year ago

Not going for exhaustive!

1 year ago

Tangentially related: it’s interesting to think of the biblical law of jubilee in this context. It says that every 50 years debts are forgiven, indentured servants are released, and land is returned to “original” owners

1 year ago
New developments in the origin of life on Earth « Math Scholar

This piece from August seems like a good writeup:
mathscholar.org/2024/08/new-...

It mentions a new book from 2024 by Jack Szostak and Mario Livio called “Is Earth Exceptional? The Quest for Cosmic Life”.
Szostak is a leading scientist in the field (I haven’t read the book yet)

1 year ago
AI existential risk probabilities are too unreliable to inform policy How speculation gets laundered through pseudo-quantification

This is a good, longish essay on the subject
by @randomwalker.bsky.social and @sayash.bsky.social

www.aisnakeoil.com/p/ai-existen...

1 year ago

Hi everyone, my amazing student Marah Ghoummaid and I are presenting our work “When to Act and When to Ask: Policy Learning With Deferral Under Hidden Confounding” at #NeurIPS2024 today!

Come talk to us 11:00 - 2:00, West Ballroom, poster #5106!

Paper: openreview.net/forum?id=taI...

1 year ago
Detailed hand-drawn sketch of a Purkinje cell, showing its dendritic structure

I love this sketch of a Purkinje cell (a type of neuron found in the cerebellum) by Santiago Ramón y Cajal

1 year ago

to be fair, ICLR is a far cry from what Yann is suggesting in that piece. I'm not questioning the need for reform. My question is why didn't the push for reform succeed back then, and what can we learn from that? (in the spirit of "Everyone will not just")

1 year ago
Proposal for A New Publishing Model in Computer Science Yann LeCun's Home Page

I recall that Yann LeCun had some interesting suggestions back in 2013-2014. Despite his clout, the community didn't move much.* yann.lecun.com/ex/pamphlets...

*ICLR public reviews and TMLR are small steps which I think followed from the discussions going around back then

1 year ago

This is such a good point, and I love the connection you're making in the paper to resilience to hidden confounding. In many cases the treatments that would shift are exactly those that have more "exogenous randomness" in them, and for these units the effect might be more easily identified from data

1 year ago

super interesting!

1 year ago
A black and white photo of a very intimidating looking Yuri Knorozov scowling wearing a dark Soviet suit and holding a beautiful light-furred cat in his arms

Breaking the Maya Code, by Michael Coe, about the deciphering of Mayan script

One of the people involved in the story is Yuri Knorozov, pictured below

1 year ago

Instead some scientists just said “close schools!”, conflating their own priorities with science and hurting the credibility of scientists overall.

Their intentions were good, but I think the overall outcome was not

2/2

1 year ago

an example where I think some scientists stumbled: during COVID, after the first few months, imo a responsible scientist would have said “closing schools has these benefits and these harms (w/ uncertainty); the politicians and public should weigh them and decide” 1/2

1 year ago

NeurIPS Conference is now Live on Bluesky!

-NeurIPS2024 Communication Chairs

1 year ago
Normality and actual causal strength Existing research suggests that people’s judgments of actual causation can be influenced by the degree to which they regard certain events as normal. …

Related to this, been enjoying this paper by Icard, @jfkominsky.bsky.social & Knobe looking at how "normality" affects the way humans judge causes.
e.g. when you need two factors to cause an event (say oxygen + a match to cause a fire), humans will judge the less "normal" element to be more causal
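As a toy illustration (my own sketch, not from the paper; all names are made up), the standard "but-for" counterfactual test rates both conjuncts of the fire example as equally causal, so the asymmetry in human judgments has to come from somewhere else, such as normality:

```python
# Toy "but-for" counterfactual test for the conjunctive fire example.
# Both oxygen and the struck match are necessary, so the plain
# counterfactual test treats them symmetrically.

def fire(oxygen: bool, match: bool) -> bool:
    # the fire occurs only when both factors are present
    return oxygen and match

def but_for_cause(factor: str, world: dict) -> bool:
    """Would the fire disappear if this factor were flipped off?"""
    assert fire(**world), "only test factors in worlds where the fire occurs"
    counterfactual = dict(world, **{factor: False})
    return not fire(**counterfactual)

world = {"oxygen": True, "match": True}
print(but_for_cause("oxygen", world))  # True
print(but_for_cause("match", world))   # True
```

Both factors pass the but-for test, yet people tend to single out the match; that gap is what the normality account is meant to explain.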

1 year ago

I'm now getting a much better signal-to-noise ratio for ML discussions here than on Xitter, plus funnier and more profound shitposting, and much, much less rage-inducing screaming and general junk

1 year ago

Feeling much nicer here

1 year ago

I don’t know about other domains, but in healthcare I’ve seen the term used to basically mean “a model of how a patient would respond to a treatment other than the one they’ve actually received”. When used in that sense it’s just corpo ai brainwash as @natolambert.bsky.social said

1 year ago

While I think this is a great paper, I also think that the focus on causal features (which is only part of what the paper is about) is a bit of a red herring

bsky.app/profile/uris...

1 year ago

OTOH consider a severe headache. While the pain itself is probably not immediately causal, it’s a strong and stable symptom of underlying conditions, and thus a stable feature. Indeed, almost any classic diagnosis of disease by symptoms is anti-causal yet stable
(3/3)

1 year ago

E.g. consider the time of day someone goes into an ER. That might influence who sees them and how quickly, which will causally influence many downstream outcomes. But the specifics of this effect will vary wildly between different ERs, making this an unstable feature
(2/3)
(2/3)

1 year ago

To be fair, I think there’s no strong reason to think that causal features are a priori more stable than others
(1/3)

1 year ago

I’d like to hear your spicy ML takes

1 year ago