Job alert! Come and work with us at @oii.ox.ac.uk. We’re recruiting a Postdoctoral Researcher working with @bmittelstadt.bsky.social and @cruss.bsky.social. Full-time position, starts 1 October 2025. Closing date for applications: noon, 30 July. Apply today: bit.ly/3TuJlGc #hiring
Posts by Chris Russell
One week left to submit your application!
Apply to work with Prof Sandra Wachter at the Hasso Plattner Institute and collaborate with me and Chris Russell at the Oxford Internet Institute, University of Oxford.
@swachter.bsky.social
@hpi.bsky.social
@cruss.bsky.social
@oii.ox.ac.uk
Are you interested in the governance of emergent tech?
Come & work w/ me @bmittelstadt.bsky.social & @cruss.bsky.social
We are looking for three postdocs in
Law: tinyurl.com/4rbhcndp
Ethics: tinyurl.com/yc2e2km4
Computer Science/AI/ML: tinyurl.com/yr5bvnn5
Application deadline is June 15, 2025.
See our recent FAccT paper for an analysis of how many of these models are designed to generate nonconsensual sexual imagery: arxiv.org/pdf/2505.03859
Still time to apply to work with me and @bmittelstadt.bsky.social and @cruss.bsky.social @oii.ox.ac.uk
New! Latest study from @oii.ox.ac.uk reveals a concerning trend: easily accessible AI tools designed to create deepfake images, primarily targeting women, are rapidly proliferating. Read more: bit.ly/4kc1iVk 1/5
Come & work with me @hpi.bsky.social & @bmittelstadt.bsky.social & @cruss.bsky.social @oii.ox.ac.uk
I am looking for three postdocs on the governance of emergent tech.
CS: tinyurl.com/yr5bvnn5
Ethics: tinyurl.com/yc2e2km4
Law: tinyurl.com/4rbhcndp
Application deadline is 15 June 2025.
Out now in #AIRe, the Journal of AI Law and Regulation, my new editorial discussing the state of research on fairness in AI in an increasingly hostile geopolitical climate, and the need for European leadership going forward.
Open access link: doi.org/10.21552/air...
#AI #DEI @oii.ox.ac.uk
The 4th Monocular Depth Estimation Challenge (MDEC) is coming to #CVPR2025, and I’m excited to join the org team! After 2024’s breakthroughs in monodepth, driven by advances in generative models (transformers and diffusion), this year's focus is on OOD generalization and evaluation.
Anyone interested can talk to Eoin Delany at poster 5502, or check out the paper for more details: arxiv.org/abs/2407.13710. Great work by my co-authors, Eoin, Zihao Fu, @swachter.bsky.social and @bmittelstadt.bsky.social
Diagram showing the combination of two heads.
The trick is model surgery on a validation set. We train a multi-head model: the first head solves the original task, and the other heads predict group membership using a squared loss. A weighted sum of all these heads can enforce any fairness definition, and it has the same architecture as the original network.
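A minimal sketch of the idea above, with synthetic data standing in for the heads' outputs (the scores, group labels, and recall target here are all made up for illustration): the merged head is just the task head plus a weight times the group head, and the weight is swept on a validation set until the group's recall clears a target.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# hypothetical validation set: group membership and true labels
group = rng.integers(0, 2, n)          # 1 = disadvantaged group
y = rng.integers(0, 2, n)
# task-head score, artificially biased against the disadvantaged group
task = y + 0.5 * rng.normal(size=n) - 0.4 * group
# the group head (trained with squared loss) approximates P(group=1 | x);
# faked here as a noisy copy of the true membership
group_score = np.clip(group + 0.2 * rng.normal(size=n), 0.0, 1.0)

def recall(scores, thresh, mask):
    """Recall (true-positive rate) among examples where mask holds."""
    pred = scores > thresh
    pos = mask & (y == 1)
    return pred[pos].mean()

# model surgery: merged head = task head + w * group head -- a weighted
# sum of existing heads, so the architecture is unchanged. Sweep w on
# the validation set until the disadvantaged group's recall is high enough.
target = 0.8
for w in np.linspace(0, 2, 201):
    merged = task + w * group_score
    if recall(merged, 0.5, group == 1) >= target:
        break

print(w, recall(merged, 0.5, group == 1))
```

The same recipe works for any metric that is a function of per-group confusion matrices: only the validation-set criterion inside the loop changes.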
Cartoon logo of an ox and scales
An example showing how to enforce minimum group recall in computer vision.
New fairness toolkit at #NeurIPS today. This fixes most of the problems I've run into in the field.
It is robust to overfitting, works for #NLP and computer vision, and can enforce any definition of fairness that can be written as a function of a confusion matrix. t.ly/ZpRJ-
How do we do that?
Sorry, not this year. Maybe next time.
This is a common problem with LLMs if the temperature is set to zero. It might just be that these small models need a higher temperature.
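To make the temperature point concrete, here is a toy sampler over made-up logits (not any particular model's output): temperature zero degenerates to greedy argmax, which is deterministic and can get stuck repeating itself, while a higher temperature flattens the distribution and lets lower-scoring tokens through.

```python
import numpy as np

def sample(logits, temperature, rng):
    """Sample a token index from temperature-scaled logits."""
    if temperature == 0:
        # temperature 0 is greedy decoding: always the top token
        return int(np.argmax(logits))
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                        # for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))

logits = [2.0, 1.9, 0.1]                # token 1 is nearly as good as token 0
rng = np.random.default_rng(0)
greedy = {sample(logits, 0, rng) for _ in range(20)}
warm = {sample(logits, 1.0, rng) for _ in range(200)}
print(greedy, warm)   # greedy picks one token; warm sampling mixes them
```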