You can read more about this role and several other open roles on our Technology Team here: www.aclu.org/careers/
Please share widely!
Posts by Marissa Kumar Gerchick
This is a full-time, hybrid role based in NYC, San Francisco, or Washington, DC, with an annual salary of $142k if based in NYC (see the posting for more details on regional pay adjustments).
The Tech Team at ACLU is hiring! We are looking for a Data Scientist with expertise in NLP and AI ethics to work on using language tech to support ACLU's mission. Come help us tackle questions about how AI systems can be carefully applied to support the public interest. www.aclu.org/careers/appl...
It's been a journey of nearly 3 years, but I'm very excited to announce the CNTR AISLE Portal! 🚀 cntr-aisle.org It's a new way to review and evaluate the 1,000+ AI bills introduced in the U.S. over the last three years. Check out the Bill Library and our Profiles. #AIPolicy #OpenData
Many thanks to Michelle Lipinski and Mona Sloane for their guidance and wisdom throughout the process! You can learn more about the series below. (As a disclaimer, I’m authoring this work in my personal capacity, not affiliated with my work at the ACLU). www.ucpress.edu/series/co-op...
I've decided to share some of these reflections now, in the hopes that they resonate with others, on a Substack where I'll write about the process of writing about AI right now and share stories on how AI and women's sports are intersecting. The first piece is up now and I'd love to hear your thoughts!
At several points in the process, I considered abandoning the project as I grappled with bouts of “AI-induced imposter syndrome,” wrestling with the tradeoffs of spending time and energy writing a book at a time when the world’s relationship to writing feels like it is rapidly changing.
What started as a determined fan trying to make sense of an outcome I didn't like became a much deeper exploration of how AI is shaping the ways women's sports are played, watched, and refereed. And now, I'm excited to share that this exploration has turned into a book project.
As a researcher who studies the societal impacts of AI, and a huge USWNT fan, I channeled my heartbreak into a familiar method: a deep dive into the underlying technology, how it was designed, deployed, and evaluated.
The margin, arguably imperceptible to the human eye, was nonetheless legible to the automated goal-line technology used to make the deciding call.
One morning in August 2023, I woke up at 4 AM to watch the U.S. Women's National Soccer Team face Sweden in the Women's World Cup. The deciding penalty that eliminated the U.S., a strike initially swatted down by the goalkeeper, was ruled to have crossed the goal line by a matter of millimeters.
Personal update: I’m writing a book about how AI is impacting women’s sports, as part of the Co-Opting AI Series with the University of California Press! More context here and below:🧵 keepthescore.substack.com/p/on-grappli...
One of the first things anyone learns about facial recognition is that it is often wrong and biased. And yet ICE is using it all day every day to determine legal status & who to detain. And now we have a high-profile example of it being flat out wrong:
www.404media.co/ices-facial-...
We released a new report in partnership with the Center for Tech Responsibility at Brown University on how policymakers and researchers can better analyze AI legislation to protect our civil rights and liberties.
a recent New York State audit of NYC's Local Law 144 — ostensibly designed to regulate potential bias and discrimination in automated employment tools — is fairly scathing in its assessment of how implementation and enforcement of the law is going.
simply put, LL 144 does not work.
You can take action right now, if you live in Massachusetts. Doing this takes less than one minute and will make a significant impact. Do it now. action.aclu.org/send-message...
Headline: ProPublica management wants 100% discretion over when and how to use AI
Supplementary text: The organization rejected our proposal that ensures staff will not be replaced by AI and requires labeling AI-generated content
1/ At bargaining yesterday, @propublica.org management said that they should have 100% discretion to replace workers with AI and would not commit to labeling future AI-generated content.
Was really challenging to participate in this, but I think I was able to pen something real, important, and personal to my experience in AI. Remember that GenAI is not *all* of AI nor all of what it should be.
www.technologyreview.com/2025/12/15/1...
A few more I came across today that look interesting!
Chief AI Officer at the State of IL: illinois.jobs2web.com/job/Springfi...
Senior PM at the MBTA: www.governmentjobs.com/careers/mbta...
The Center for Civic Futures has several roles open: digitalharborfoundation.applytojob.com/apply/
And we have lots of open roles at ACLU, including roles on our Technology Team: www.aclu.org/careers/
Started a thread in the other place and bringing it over here - I really think we should be more vocal about the opportunities that lie at the intersection of these two options!
So I'm starting a live thread of new roles as I become aware of them - feel free to add / extend / share:
A reminder that anything recorded on a device like this AI "friend" could be used against you — by hackers, private companies, or the government.
This technology isn't a friend, it's surveillance.
Customs and Border Protection agents searched nearly 15,000 devices from April through June of this year, a nearly 17 percent spike over the previous three-month high in 2022. "The real issue is the chilling effect it has on all travelers."
After @riaclu.bsky.social sued, a judge blocked the Trump administration from imposing ideological restrictions on federal grant recipients who serve domestic violence survivors, LGBTQ youth, and unhoused people.
These organizations can continue their critical work without political interference.
The use of AI shouldn’t come at the expense of our civil liberties.
At the ACLU’s first-ever Civil Rights in the Digital Age AI Summit, we’re considering ways to leverage emerging technology as an asset, while safeguarding our civil rights.
Gerchick et al: Auditing the Audits: Lessons for Algorithmic Accountability from Local Law 144's Bias Audits
#FAccT2025
title slide for tutorial presentation reading "public interest tech (pit) clinics as applied sociotechnical pedagogy: practice tutorial, FAccT 2025, Athens, Greece; by Lauren M. Chambers and Diag Davenport, UC Berkeley, June 25, 2025."
getting excited for my #FAccT2025 tutorial - today 5 pm Athens time!
I'm sharing new work with @berkeleyischool.bsky.social's awesome Prof. Diag Davenport: "Public Interest Tech (PIT) Clinics as Applied Sociotechnical Pedagogy."
the idea: bring real-world experience & impact into tech education 💫
❤️❤️❤️
We’re here at the airport as Mahmoud Khalil returns home after over 100 days of being unjustly detained for his advocacy for Palestinian rights.
Welcome home, Mahmoud! ❤️