New in Nature Health: how might we move towards a world in which race is not used in clinical algorithms? We need (1) careful comparison of race-aware and race-neutral algorithms and (2) systemic efforts to address underlying disparities.
Care about preparing people to contribute responsibly to building the next generation of AI and technology?
Full-time (or at least 50%) lecturer position at Cornell Tech just posted, teaching computer science or related topics.
academicjobsonline.org/ajo/jobs/31698
It was in Colombia! bsky.app/profile/ckro...
CHI 2026 Speech AI for All: Apply by Feb 12
📢 Apply by Feb 12 to join our CHI 2026 workshop, Speech AI for All, where we'll discuss inclusive speech tech for people with speech diversities. Researchers, practitioners, policymakers, & community members welcome! speechai4all.org
We’re hiring a postdoc in AI & cancer care at UBC + BC Cancer!
Work on predictive + generative NLP to build a patient-centered cancer navigation assistant
Apply here: ubc.wd10.myworkdayjobs.com/ubcfacultyjo...
Did you know Zoom & other companies are encouraging discrimination against employees for things like "pausing too long" or not sounding sufficiently "charismatic"? Read all about our dystopian present in my new paper, out today in JASA!
(Seriously, read it. It's important)
doi.org/10.1121/10.0...
Today, social media platforms hold the sole power to study the effects of feed-ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1256 participants on X during the 2024 US presidential campaign. Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by more than 2 points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective polarization. This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings.
New paper in Science:
In a platform-independent field experiment, we show that reranking content expressing antidemocratic attitudes and partisan animosity in social media feeds alters affective polarization.
🧵
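For intuition about the mechanics, here's a toy sketch of the reranking step (not the deployed pipeline, which intercepts live feeds and scores posts with an LLM); the keyword scorer below is a hypothetical stand-in for that classifier:

```python
# Toy sketch of the reranking idea (not the paper's actual pipeline).
# In the real experiment, an LLM scores each post for antidemocratic
# attitudes and partisan animosity (AAPA); a keyword scorer stands in here.
from typing import Callable, List

TOY_AAPA_KEYWORDS = {"traitors", "enemies", "destroy"}  # illustrative only

def toy_aapa_score(post: str) -> float:
    """Toy stand-in for the LLM classifier: fraction of AAPA keywords present."""
    words = set(post.lower().split())
    return len(words & TOY_AAPA_KEYWORDS) / len(TOY_AAPA_KEYWORDS)

def rerank_feed(posts: List[str], score: Callable[[str], float],
                decrease_exposure: bool = True) -> List[str]:
    """Stable re-sort: low-AAPA posts first (or last, to increase exposure)."""
    return sorted(posts, key=score, reverse=not decrease_exposure)

feed = ["lovely sunset over the lake tonight",
        "our enemies will destroy everything we love"]
print(rerank_feed(feed, toy_aapa_score))  # benign post now ranked first
```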
Check out @isabelcorpus.bsky.social's fantastic thread on our paper studying the effects of a "write with AI" button on change.org! ✍️ Spoiler: the effects of AI aren't always positive.
We have a new paper in Science Advances proposing a simple test for bias:
Is the same person treated differently when their race is perceived differently?
Specifically, we study: is the same driver likelier to be searched by police when they are perceived as Hispanic rather than white?
1/
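To make the logic of that test concrete, here's a toy sketch on synthetic data (not the paper's actual analysis): compare search rates for the same drivers across stops where their perceived race differs.

```python
# Toy illustration of the within-person test (synthetic data, not the
# paper's analysis): for drivers stopped multiple times, compare search
# rates across stops where the race perceived by police differs.
import pandas as pd

stops = pd.DataFrame({
    "driver_id":      [1, 1, 2, 2, 3, 3],
    "perceived_race": ["white", "Hispanic"] * 3,
    "searched":       [0, 1, 0, 0, 0, 1],
})

# Search rate per driver under each perceived race, then the average
# within-driver gap; a nonzero gap for the *same* people is the bias signal.
per_driver = stops.pivot_table(index="driver_id", columns="perceived_race",
                               values="searched", aggfunc="mean")
gap = (per_driver["Hispanic"] - per_driver["white"]).mean()
print(f"Within-driver search-rate gap (Hispanic - white): {gap:.2f}")
```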
Had a great time at CODE@MIT this weekend, and wanted to highlight a few (of the many) cool talks!
For day 5 of the #30daymapchallenge, I used data from the Welikia Project to compare the original ecology of New York City with the present day.
I’m recruiting students this upcoming cycle at UIUC! I’m excited about Qs on societal impact of AI, especially human-AI collaboration, multi-agent interactions, incentives in data sharing, and AI policy/regulation (all from both a theoretical and applied lens). Apply through CS & select my name!
Photo of a Cornell University building surrounded by colorful trees
No better time to start learning about that #AI thing everyone's talking about...
📢 I'm recruiting PhD students in Computer Science or Information Science @cornellbowers.bsky.social!
If you're interested, apply to either department (yes, either program!) and list me as a potential advisor!
It was fantastic to collaborate across Cornell and Apple for our EMNLP paper auditing LLMs for dialectal biases in multiple-choice benchmark datasets: arxiv.org/abs/2510.00962.
Anna @annaseogyeongchoi.bsky.social (who's on the job market this year!) did a great job presenting this work today!
Jobs! First, we hope to be hiring in Computer Science for the @cornelltech.bsky.social campus:
academicjobsonline.org/ajo/jobs/30804
Focus on security, SysML, and NLP.
Please share!
Cornell (NYC and Ithaca) is recruiting AI postdocs, apply by Nov 20, 2025! If you're interested in working with me on technical approaches to responsible AI (e.g., personalization, fairness), please email me.
academicjobsonline.org/ajo/jobs/30971
Cornell Information Science is hiring a Teaching Professor! Apply this week for full consideration:
academicjobsonline.org/ajo/jobs/30763
*Proud advisor moment* My (first) PhD student Zhi Liu (zhiliu724.github.io) is 1 of 4 finalists for the INFORMS Dantzig Dissertation Award, the premier dissertation award for the OR community. His dissertation spanned work with 2 NYC govt agencies on measuring and mitigating operational inequities.
Screenshot of paper abstract, with text: "A core ethos of the Economics and Computation (EconCS) community is that people have complex private preferences and information of which the central planner is unaware, but which an appropriately designed mechanism can uncover to improve collective decisionmaking. This ethos underlies the community’s largest deployed success stories, from stable matching systems to participatory budgeting. I ask: is this choice and information aggregation “worth it”? In particular, I discuss how such systems induce heterogeneous participation: those already relatively advantaged are, empirically, more able to pay time costs and navigate administrative burdens imposed by the mechanisms. I draw on three case studies, including my own work – complex democratic mechanisms, resident crowdsourcing, and school matching. I end with lessons for practice and research, challenging the community to help reduce participation heterogeneity and design and deploy mechanisms that meet a “best of both worlds” north star: use preferences and information from those who choose to participate, but provide a “sufficient” quality of service to those who do not."
New piece, out in the SIGecom Exchanges! It's my first solo-authored piece, and the closest thing I've written to a "manifesto." #econsky #ecsky
arxiv.org/abs/2507.03600
@jennahgosciak.bsky.social just gave a fantastic talk on this paper about temporally missing data at @ic2s2.bsky.social 🎉 -- find us this afternoon if you want to chat about it!
Check out our work at @ic2s2.bsky.social this afternoon during the Communication & Cooperation II session!
Presenting this work at @ic2s2.bsky.social imminently, in the LLMs & Society session!
For folks at @ic2s2.bsky.social, I'm excited to be sharing this work at this afternoon's session on LLMs & Bias!
This Thursday at @facct.bsky.social, @jennahgosciak.bsky.social's presenting our work at the 10:45am "Audits 2" session! We collaborated across @cornellbowers.bsky.social, @mit.edu, & @stanfordlaw.bsky.social to study health estimate biases from delayed race data collection: arxiv.org/abs/2506.13735
For folks at @facct.bsky.social, our very own @cornellbowers.bsky.social student @emmharv.bsky.social will present the Best Paper Award-winning work she led, Wednesday at 10:45 AM in the "Audit and Evaluation Approaches" session!
In the meantime, 🧵 below and 🔗 here: arxiv.org/abs/2506.04419 !
You've been too busy 🀄izing bias in other contexts!
Many thanks to the researchers who have inspired our work!! (14/14) @valentinhofmann.bsky.social @jurafsky.bsky.social @haldaume3.bsky.social @hannawallach.bsky.social @jennwv.bsky.social @diyiyang.bsky.social and many others not yet on Bluesky!
We encourage practitioners to use our dataset (github.com/brucelyu17/S...) to audit for biases before choosing an LLM, and developers to diversify training data and investigate tokenization differences across Chinese variants. (13/14)
Table (with rows for each tested LLM) showing that the number of tokens for names in Simplified Chinese is, in nearly all cases, significantly different than the number of tokens for each of the same names translated into Traditional Chinese (with 1-to-1 character replacement).
This is likely due to differences in tokenization between Simplified and Traditional Chinese: the exact same names, when converted between the two scripts, are represented by significantly different numbers of tokens in each model. (12/14)
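A quick way to see this kind of asymmetry yourself (a sketch assuming the tiktoken library; the name pair is an illustrative one-to-one character conversion, not a name from our dataset):

```python
# Sketch of checking tokenization yourself (assumes tiktoken is installed).
# The name pair is an illustrative one-to-one character conversion between
# Simplified and Traditional Chinese, not drawn from the paper's dataset.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE tokenizer

name_simplified = "张国荣"   # Simplified characters
name_traditional = "張國榮"  # same name in Traditional characters

for label, name in [("Simplified", name_simplified),
                    ("Traditional", name_traditional)]:
    print(f"{label}: {name} -> {len(enc.encode(name))} tokens")
# Differing counts for the same underlying name illustrate the asymmetry.
```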
Figure similar to the plot in (6/14), but restricted to six names: three distinct first names, each written in both Simplified and Traditional Chinese. When asked to choose among these names only, LLMs show a clear preference for the Simplified Chinese versions.
But written character choice (Traditional vs. Simplified) seems to be the primary driver of LLM preferences. Conditioning on the same names (which have different characters in Traditional vs. Simplified), we can flip our results & get majority-Simplified names selected. (11/14)