
Posts by Chris Russell


Job alert! Come and work with us at @oii.ox.ac.uk. We’re recruiting a Postdoctoral Researcher working with @bmittelstadt.bsky.social and @cruss.bsky.social. Full-time position, starts 1 October 2025. Closing date for applications: noon, 30 July. Apply today: bit.ly/3TuJlGc #hiring

9 months ago

One week left to submit your application!

Apply to work with Prof Sandra Wachter at the Hasso Plattner Institute and collaborate with me and Chris Russell at the Oxford Internet Institute, University of Oxford.

@swachter.bsky.social
@hpi.bsky.social
@cruss.bsky.social
@oii.ox.ac.uk

10 months ago
Postdoctoral Researcher (m/f/x) in Technology and Regulation

Are you interested in the governance of emergent tech?

Come & work w/ me @bmittelstadt.bsky.social & @cruss.bsky.social

We are looking for 3 postdocs in:
Law: tinyurl.com/4rbhcndp
Ethics: tinyurl.com/yc2e2km4
Computer Science/AI/ML: tinyurl.com/yr5bvnn5

Application deadline is June 15, 2025.

10 months ago

See our recent FAccT paper for an analysis of how many of these models are intended for generating nonconsensual sexual imagery: arxiv.org/pdf/2505.03859

11 months ago

Still time to apply to work with me and @bmittelstadt.bsky.social and @cruss.bsky.social @oii.ox.ac.uk

11 months ago
OII | Dramatic rise in publicly downloadable deepfake image generators
New Oxford study uncovers explosion of accessible deepfake AI image generation models intended for the creation of non-consensual, sexualised images of women.

New! Latest study from @oii.ox.ac.uk reveals a concerning trend: easily accessible AI tools designed to create deepfake images, primarily targeting women, are rapidly proliferating. Read more: bit.ly/4kc1iVk 1/5

11 months ago
Postdoctoral Researcher (m/f/x) in Machine Learning and Artificial Intelligence

Come & work with me @hpi.bsky.social & @bmittelstadt.bsky.social & @cruss.bsky.social @oii.ox.ac.uk

I am looking for 3 postdocs on the governance of emergent tech.

CS: tinyurl.com/yr5bvnn5
Ethics: tinyurl.com/yc2e2km4
Law: tinyurl.com/4rbhcndp

Application deadline is 15.06.2025.

11 months ago
Editorial

Out now in #AIRe, the Journal of AI Law and Regulation, my new editorial discussing the state of research on fairness in AI in an increasingly hostile geopolitical climate, and the need for European leadership going forward.

Open access link: doi.org/10.21552/air...

#AI #DEI @oii.ox.ac.uk

1 year ago

The 4th Monocular Depth Estimation Challenge (MDEC) is coming to #CVPR2025, and I’m excited to join the org team! After 2024’s breakthroughs in monodepth, driven by advances in generative models (transformers and diffusion), this year’s focus is on OOD generalization and evaluation.

1 year ago
OxonFair: A Flexible Toolkit for Algorithmic Fairness
We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard...

Anyone interested can talk to Eoin Delany at poster 5502, or check out the paper for more details arxiv.org/abs/2407.13710. Great work by my co-authors, Eoin, Zihao Fu, @swachter.bsky.social and @bmittelstadt.bsky.social

1 year ago
Diagram showing the combination of two heads.

The trick is model surgery on a validation set. We train a multi-head model: the first head solves the original task, and the other heads predict groups using a squared loss. A weighted sum of all these heads can enforce any fairness definition, and has the same architecture as the original net.
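To make the "weighted sum of heads" idea concrete, here is an illustrative sketch (not the authors' implementation; all names and weights are hypothetical). The key point is that the combined score is a fixed linear combination of per-head outputs, so after the surgery the network has the same architecture as before, just with a different final layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1 task head plus 2 group-predictor heads,
# each producing a score per example.
n_samples, n_heads = 8, 3
head_outputs = rng.normal(size=(n_samples, n_heads))

# Per-head weights, chosen (e.g. on a validation set) so that the
# resulting classifier satisfies the desired fairness constraint.
weights = np.array([1.0, 0.5, -0.5])

# Model surgery: replace the final layer with this fixed linear
# combination of heads; the architecture is otherwise unchanged.
combined = head_outputs @ weights
decisions = (combined > 0).astype(int)
```

In practice the weights would be fit to satisfy a chosen fairness definition on held-out data; here they are arbitrary placeholders.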

1 year ago
Cartoon logo of an ox and scales

An example showing how to enforce minimum group recall in computer vision.

New fairness toolkit at #NeurIPS today. This fixes most of the problems I've run into in the field.
It is robust to overfitting, works for #NLP and computer vision, and can enforce any definition of fairness that can be written as a function of a confusion matrix. t.ly/ZpRJ-
How do we do that....
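As an illustration of a constraint that is a function of the confusion matrix, such as minimum group recall, here is a minimal sketch. This is not the toolkit's actual API; the group names and counts are made up.

```python
def recall(tp, fn):
    """Recall from confusion-matrix counts: tp / (tp + fn)."""
    return tp / (tp + fn)

# Hypothetical per-group confusion-matrix counts (true positives, false negatives).
groups = {"A": (40, 10), "B": (25, 25)}

recalls = {g: recall(tp, fn) for g, (tp, fn) in groups.items()}
min_recall = min(recalls.values())

# A constraint like "minimum group recall >= 0.7" is violated here,
# because group B's recall is only 0.5.
violated = min_recall < 0.7
```

Because any such metric reduces to counts in the confusion matrix, the same pattern extends to equalized odds, precision parity, and so on.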

1 year ago

Sorry, not this year. Maybe next time.

1 year ago

This is a common problem with LLMs if the temperature is set to zero. It might just be that these small models need a higher temperature.
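A quick sketch of why temperature matters: as the temperature approaches zero, temperature-scaled softmax sampling collapses onto the argmax token (greedy decoding), which is what produces repetitive output; raising the temperature flattens the distribution and restores diversity. The function below is illustrative, not any particular model's sampler.

```python
import numpy as np

def sample_token(logits, temperature, rng):
    if temperature == 0:
        return int(np.argmax(logits))        # greedy: deterministic
    z = logits / temperature
    z = z - z.max()                          # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5])
greedy = sample_token(logits, 0.0, rng)      # always picks index 0
sampled = sample_token(logits, 1.5, rng)     # higher T: other tokens possible
```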

2 years ago