
Posts by Tim G. J. Rudner

The deadline to submit papers to ProbML's proceedings and workshop tracks is **March 20**!

📣Check out the CfP and submit here: probml.cc/call/!

1 month ago

++ Major News ++

AABI is now ProbML: the Symposium on Probabilistic Machine Learning! Very excited about this!

ProbML will be co-located with ICML in Seoul!

Check out our new website: probml.cc!

2 months ago

📢The Information Society Project (@yaleisp.bsky.social) at Yale Law School is recruiting a new batch of *Resident Fellows*!

It's a great community and a good opportunity for anyone interested in the intersection of *AI governance and the law*.

Deadline: Dec 31
Apply: law.yale.edu/isp/join-us#...

6 months ago
[Research or Senior] Fellow - Frontier AI | Center for Security and Emerging Technology The Center for Security and Emerging Technology (CSET) is currently seeking candidates to lead our Frontier AI research efforts, either as a Research Fellow or Senior Fellow (depending on experience)....

📢 Exciting opportunity:

@csetgeorgetown.bsky.social is hiring a Research or Senior Fellow to help lead their **frontier AI policy research efforts**.

I've been working with CSET since 2019 and continue to be impressed by the quality and impact of CSET's work!

cset.georgetown.edu/job/research...

6 months ago
The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference) · YouTube video by Lawfare

Today's Lawfare Daily is a @scalinglaws.bsky.social episode, produced with @utexaslaw.bsky.social, where @kevintfrazier.bsky.social spoke to @gushurwitz.bsky.social and @neilchilson.bsky.social about how academics can positively contribute to the work of AI governance.

6 months ago

Beautiful paper!

6 months ago

It was a pleasure speaking at @yaleisp.bsky.social yesterday!

6 months ago

Tomorrow’s ISP Ideas Lunch update:

We’re excited to host @timrudner.bsky.social (U. Toronto & Vector Institute). He’ll speak on “formal guarantees” in AI + key AI safety concepts!

6 months ago

I'm thrilled to join the Schwartz Reisman Institute for Technology and Society as a Faculty Affiliate!

8 months ago

Congrats! CDS PhD Student Vlad Sobal, Courant PhD Student Kevin Zhang, CDS Faculty Fellow @timrudner.bsky.social, CDS Profs @kyunghyuncho.bsky.social and @yann-lecun.bsky.social, and Brown's Randall Balestriero won the Best Paper Award at ICML's 'Building Physically Plausible World Models' Workshop!

8 months ago

CDS Faculty Fellow @timrudner.bsky.social served as general chair for the 7th Symposium on Advances in Approximate Bayesian Inference, held in April alongside ICLR 2025.

The symposium explored connections between probabilistic machine learning and AI safety, NLP, RL, and AI for science.

9 months ago

Congratulations again!

9 months ago

Congratulations Umang!

11 months ago
AI in Military Decision Support: Balancing Capabilities with Risk CDS Faculty Fellow Tim G. J. Rudner and colleagues at CSET outline responsible practices for deploying AI in military decision-making.

CDS Faculty Fellow Tim G. J. Rudner (@timrudner.bsky.social) and colleagues at CSET — @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell — examine responsible AI deployment in military decision-making.

Read our post on their policy brief: nyudatascience.medium.com/ai-in-milita...

11 months ago

The result in this paper I'm most excited about:

We showed that planning in world model latent space allows successful zero-shot generalization to *new* tasks!

Project website: latent-planning.github.io

Paper: arxiv.org/abs/2502.14819

11 months ago

#1: Can Transformers Learn Full Bayesian Inference In Context? with @arikreuter.bsky.social @timrudner.bsky.social @vincefort.bsky.social

11 months ago

Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠

11 months ago

Congratulations to the #AABI2025 Proceedings Track Best Paper Award recipients!

11 months ago

Congratulations to the #AABI2025 Workshop Track Outstanding Paper Award recipients!

11 months ago

We concluded #AABI2025 with a panel discussion on

**The Role of Probabilistic Machine Learning in the Age of Foundation Models and Agentic AI**

Thanks to Emtiyaz Khan, Luhuan Wu, and @jamesrequeima.bsky.social for participating!

11 months ago

.@jamesrequeima.bsky.social gave the third invited talk of the day at #AABI2025!

**LLM Processes**

11 months ago

Luhuan Wu is giving the second invited talk of the day at #AABI2025!

**Bayesian Inference for Invariant Feature Discovery from Multi-Environment Data**

Watch it on our livestream: timrudner.com/aabi2025!

11 months ago

Emtiyaz Khan is giving the first invited talk of the day at #AABI2025!

11 months ago

We just kicked off #AABI2025 at NTU in Singapore!

We're livestreaming the talks here: timrudner.com/aabi2025!

Schedule: approximateinference.org/schedule/

#ICLR2025 #ProbabilisticML

11 months ago
AABI 2025 · Luma 7th Symposium on Advances in Approximate Bayesian Inference (AABI) https://approximateinference.org/schedule

Make sure to get your tickets to #AABI2025 if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic ML, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#ProbabilisticML #Bayes #UQ #ICLR2025 #AABI2025

1 year ago

Make sure to get your tickets to AABI if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic modeling, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#Bayes #MachineLearning #ICLR2025 #AABI2025

1 year ago
Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...

CDS Faculty Fellow @timrudner.bsky.social, with @minanrn.bsky.social & Christian Schoeberl, analyzed AI explainability evals, finding a focus on system correctness over real-world effectiveness. They call for the creation of standards for AI safety evaluations.

cset.georgetown.edu/publication/...

1 year ago
How the U.S. Public and AI Experts View Artificial Intelligence These groups are far apart in their enthusiasm and predictions for AI, but both want more personal control and worry about too little regulation.

A great Pew Research survey:

"How the U.S. Public and AI Experts View Artificial Intelligence"

Everyone working in ML should read this and ask themselves why experts and non-experts have such divergent views about the potential of AI to have a positive impact.

www.pewresearch.org/internet/202...

1 year ago

This is an excellent article!

Steering foundation models towards trustworthy behaviors is one of the most important research directions today.

Helen is a deep and rigorous thinker, and you should definitely subscribe to her Substack!

1 year ago

I'm super excited to see our #CSET report on **AI-enabled military decision support systems** being released today!

Great work by @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell!

1 year ago