
Posts by Allison Koenecke

New in Nature Health: how might we move towards a world in which race is not used in clinical algorithms? We need (1) careful comparison of race-aware and race-neutral algorithms and (2) systemic efforts to address underlying disparities.

4 weeks ago
Cornell University, Computer Science Job #AJO31698, Lecturer/Senior Lecturer, Computer Science, Cornell University, New York, New York, US

Care about preparing people to contribute responsibly to building the next generation of AI and technology?

Full-time (or at least 50%) lecturer position at Cornell Tech just posted, teaching computer science or related topics.

academicjobsonline.org/ajo/jobs/31698

2 months ago

It was in Colombia! bsky.app/profile/ckro...

2 months ago
CHI 2026 Speech AI for All: Apply by Feb 12

📢 Apply by Feb 12 to join our CHI 2026 workshop, Speech AI for All, where we'll discuss inclusive speech tech for people with speech diversities. Researchers, practitioners, policymakers, & community members welcome! speechai4all.org

3 months ago

We’re hiring a postdoc in AI & cancer care at UBC + BC Cancer!
Work on predictive + generative NLP to build a patient-centered cancer navigation assistant

Apply here: ubc.wd10.myworkdayjobs.com/ubcfacultyjo...

3 months ago
Socially prescriptive speech technologies: Linguistic, technical, and ethical issues
Speech technology tools can be powerful and transformative for individuals, businesses, and governments. Socially prescriptive speech technology (SPST) systems…

Did you know Zoom & other companies are encouraging discrimination against employees for things like "pausing too long" or not sounding sufficiently "charismatic"? Read all about our dystopian present in my new paper, out today in JASA!
(Seriously, read it. It's important)
doi.org/10.1121/10.0...

4 months ago
Today, social media platforms hold the sole power to study the effects of feed-ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1256 participants on X during the 2024 US presidential campaign. Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by more than 2 points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective polarization. This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings.
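The reranking step described in the abstract can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the posts, the scores, and the `rerank` helper are all hypothetical (in the experiment, AAPA scores came from a large language model classifier scoring real feed content).

```python
# Toy sketch of platform-independent feed reranking: given posts with
# AAPA (antidemocratic attitudes and partisan animosity) scores, move
# high-AAPA posts down the feed (or up, for the increased-exposure arm).

def rerank(posts, scores, reduce_exposure=True):
    """Sort posts by AAPA score: low-AAPA first when reducing exposure,
    high-AAPA first when increasing it."""
    order = sorted(range(len(posts)), key=lambda i: scores[i],
                   reverse=not reduce_exposure)
    return [posts[i] for i in order]

posts = ["post A", "post B", "post C"]
scores = [0.9, 0.1, 0.5]  # hypothetical AAPA scores in [0, 1]

print(rerank(posts, scores))                         # low-AAPA posts first
print(rerank(posts, scores, reduce_exposure=False))  # high-AAPA posts first
```

In the actual study the reordered feed was then shown to participants in real time, without any cooperation from the platform.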

New paper in Science:

In a platform-independent field experiment, we show that reranking content expressing antidemocratic attitudes and partisan animosity in social media feeds alters affective polarization.

🧵

4 months ago

Check out @isabelcorpus.bsky.social's fantastic thread on our paper studying the effects of a "write with AI" button on change.org! ✍️ Spoiler: the effects of AI aren't always positive.

4 months ago

We have a new paper in Science Advances proposing a simple test for bias:

Is the same person treated differently when their race is perceived differently?

Specifically, we study: is the same driver likelier to be searched by police when they are perceived as Hispanic rather than white?

1/

4 months ago

Had a great time at CODE@MIT this weekend, and wanted to highlight a few (of the many) cool talks!

5 months ago

For day 5 of the #30daymapchallenge, I compared the original ecology of New York City with data from the Welikia Project to the present-day.

5 months ago

I’m recruiting students this upcoming cycle at UIUC! I’m excited about Qs on societal impact of AI, especially human-AI collaboration, multi-agent interactions, incentives in data sharing, and AI policy/regulation (all from both a theoretical and applied lens). Apply through CS & select my name!

5 months ago
Photo of Cornell University building surrounded by colorful trees

No better time to start learning about that #AI thing everyone's talking about...

📢 I'm recruiting PhD students in Computer Science or Information Science @cornellbowers.bsky.social!

If you're interested, apply to either department (yes, either program!) and list me as a potential advisor!

5 months ago

It was fantastic to collaborate across Cornell and Apple for our EMNLP paper auditing LLMs for dialectal biases in multiple choice benchmark datasets: arxiv.org/abs/2510.00962.

Anna @annaseogyeongchoi.bsky.social (who's on the job market this year!) did a great job presenting this work today!

5 months ago
Cornell University, Computer Science Job #AJO30804, Professor Positions - Computer Science, Cornell Tech, Computer Science, Cornell University, New York, New York, US

Jobs! First, we hope to be hiring in Computer Science for the @cornelltech.bsky.social campus:

academicjobsonline.org/ajo/jobs/30804

Focus on security, SysML, and NLP.

Please share!

6 months ago
Cornell University, Empire AI Fellows Program Job #AJO30971, Postdoctoral Fellow, Empire AI Fellows Program, Cornell University, New York, New York, US

Cornell (NYC and Ithaca) is recruiting AI postdocs; apply by Nov 20, 2025! If you're interested in working with me on technical approaches to responsible AI (e.g., personalization, fairness), please email me.

academicjobsonline.org/ajo/jobs/30971

5 months ago
Cornell University, Information Science Job #AJO30763, 2025-2026 CORNELL INFORMATION SCIENCE FULL-TIME TEACHING FACULTY SEARCH (OPEN-RANK TEACHING PROFESSOR), ITHACA CAMPUS, Information Science, Cornell University, Ithaca, New York, US

Cornell Information Science is hiring a Teaching Professor! Apply this week for full consideration:

academicjobsonline.org/ajo/jobs/30763

5 months ago
Zhi Liu About me

*Proud advisor moment* My (first) PhD student Zhi Liu (zhiliu724.github.io) is 1 of 4 finalists for the INFORMS Dantzig Dissertation Award, the premier dissertation award for the OR community. His dissertation spanned work with 2 NYC govt agencies on measuring and mitigating operational inequities.

7 months ago
Screenshot of paper abstract, with text: "A core ethos of the Economics and Computation (EconCS) community is that people have complex private preferences and information of which the central planner is unaware, but which an appropriately designed mechanism can uncover to improve collective decisionmaking. This ethos underlies the community’s largest deployed success stories, from stable matching systems to participatory budgeting. I ask: is this choice and information aggregation “worth it”? In particular, I discuss how such systems induce heterogeneous participation: those already relatively advantaged are, empirically, more able to pay time costs and navigate administrative burdens imposed by the mechanisms. I draw on three case studies, including my own work – complex democratic mechanisms, resident crowdsourcing, and school matching. I end with lessons for practice and research, challenging the community to help reduce participation heterogeneity and design and deploy mechanisms that meet a “best of both worlds” north star: use preferences and information from those who choose to participate, but provide a “sufficient” quality of service to those who do not."

New piece, out in the Sigecom Exchanges! It's my first solo-author piece, and the closest thing I've written to being my "manifesto." #econsky #ecsky
arxiv.org/abs/2507.03600

8 months ago

@jennahgosciak.bsky.social just gave a fantastic talk on this paper about temporally missing data at @ic2s2.bsky.social 🎉 -- find us this afternoon if you want to chat about it!

8 months ago

Check out our work at @ic2s2.bsky.social this afternoon during the Communication & Cooperation II session!

8 months ago

Presenting this work at @ic2s2.bsky.social imminently, in the LLMs & Society session!

8 months ago

For folks at @ic2s2.bsky.social, I'm excited to be sharing this work at this afternoon's session on LLMs & Bias!

8 months ago

This Thursday at @facct.bsky.social, @jennahgosciak.bsky.social's presenting our work at the 10:45am "Audits 2" session! We collaborated across @cornellbowers.bsky.social, @mit.edu, & @stanfordlaw.bsky.social to study health estimate biases from delayed race data collection: arxiv.org/abs/2506.13735

9 months ago

For folks at @facct.bsky.social, our very own @cornellbowers.bsky.social student @emmharv.bsky.social will present the Best-Paper-Award-winning work she led on Wednesday at 10:45 AM in the "Audit and Evaluation Approaches" session!

In the meantime, 🧵 below and 🔗 here: arxiv.org/abs/2506.04419 !

9 months ago

You've been too busy 🀄izing bias in other contexts!

9 months ago

Many thanks to the researchers who have inspired our work!! (14/14) @valentinhofmann.bsky.social @jurafsky.bsky.social @haldaume3.bsky.social @hannawallach.bsky.social @jennwv.bsky.social @diyiyang.bsky.social and many others not yet on Bluesky!

9 months ago
GitHub - brucelyu17/SC-TC-Bench: [FAccT '25] Characterizing Bias: Benchmarking LLMs in Simplified versus Traditional Chinese

We encourage practitioners to use our dataset (github.com/brucelyu17/S...) to audit for biases before choosing an LLM to use, and developers to investigate diversifying training data and research tokenization differences across Chinese variants. (13/14)

9 months ago
Table (with rows for each tested LLM) showing that the number of tokens for names in Simplified Chinese is, in nearly all cases, significantly different than the number of tokens for each of the same names translated into Traditional Chinese (with 1-to-1 character replacement).

This is likely due to differences in tokenization between Simplified Chinese and Traditional Chinese. The exact same names, when translated between language settings, result in significantly different numbers of tokens when represented in each of the models. (12/14)
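A minimal sketch of why token counts can diverge: if a tokenizer's vocabulary contains a merged entry for a Simplified Chinese spelling but only single characters for its Traditional counterpart, the same name segments into fewer tokens in Simplified Chinese. The greedy longest-match tokenizer, the vocabulary, and the example name below are all illustrative assumptions, not the actual tokenizers or names tested in the paper.

```python
# Toy greedy longest-match tokenizer over a fixed vocabulary, showing how
# a vocabulary skewed toward Simplified Chinese merges can yield fewer
# tokens for the Simplified spelling of the same name.

def tokenize(text, vocab):
    """Segment text greedily, always taking the longest match in vocab."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest span first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: emit it as-is
            i += 1
    return tokens

# Hypothetical vocabulary: the Simplified name exists as one merged unit;
# the Traditional characters exist only individually.
vocab = {"张伟", "张", "伟", "張", "偉"}

simplified = "张伟"   # Simplified Chinese spelling of the name
traditional = "張偉"  # same name, 1-to-1 Traditional character replacement

print(len(tokenize(simplified, vocab)))   # 1 token
print(len(tokenize(traditional, vocab)))  # 2 tokens
```

Real subword tokenizers (e.g., BPE) are more involved, but the mechanism is the same: merges learned from training data dominated by one script leave the other script fragmented into more tokens.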

9 months ago
Similar figure as plot (6/14), but subset to a set of six names, containing three of the same first names but duplicated when written in both Simplified and Traditional Chinese. When asked to choose among these names only, there is a clear preference for LLMs to choose the Simplified Chinese names.

But written character choice (in Traditional or Simplified) seems to be the primary driver of LLM preferences. Conditioning on the same names (which have different characters in Traditional vs. Simplified), we can flip our results & get a majority of Simplified names selected. (11/14)

9 months ago