Posts by Dr. Heather Leslie

AI as a Fascist Artifact (This is a bit of a merger of two talks I recently gave about fascism and AI. One was in German at the Cables Of Resistance conference, one in English at the Milton Wolf Seminar on Media and Diplomacy. I added some shots of the slides I used as a structure for the text which […]

In which I argue that "AI" as we understand it today is a structurally fascist artifact.

https://tante.cc/2026/04/21/ai-as-a-fascist-artifact/

1 day ago 509 190 10 17
Reading and writing can lower dementia risk by almost 40%, study finds Cognitive health in later life is ‘strongly influenced’ by lifelong exposure to intellectually stimulating environments, say researchers

The study points to using libraries and visiting museums as bringing these enormous brain health benefits ... who would have thought? Well, librarians and museum folk for a start ... www.theguardian.com/society/2026...

2 months ago 640 299 13 19
Beyond the hype: Feminists Tackle AI panel | The Muse

Beyond the hype: Feminists Tackle AI panel
Speakers challenge assumptions about Artificial Intelligence

By Emily Torrance - April 19, 2026

themuse.ca/beyond-the-h...

2 days ago 11 4 1 0
Introducing Middle School Students to "The Secret Ghost Workers of 'Artificial Intelligence'" — Civics of Technology Upcoming Book Clubs: We’re reading The Digital Delusion: How Classroom Technology Harms our Kids’ Learning - And How to Help Them Thrive Again by Jared Cooney Horvath. Join us on Wednesday, Ap...

What does it look like to critically examine AI in the middle school Computer Science classroom?

www.civicsoftechnology.org/blog/introdu...

3 days ago 4 5 0 0

Also folks don’t seem to appreciate the difference between work and school. At work you are trying to deliver a product effectively and efficiently. Is AI helpful to that end? Maybe? But in school you are TRAINING YOUR MIND. AI is not at all helpful.

5 days ago 48 3 2 0
Don't want your AI slop It's ruining education, hurting the labor market, undermining artists, and heating the planet. What's not to love?

I've gone full old-man-yelling-at-cloud on AI

6 days ago 162 40 4 6
Why the AI backlash has turned violent And why it's probably only going to get worse from here.

Last week, a 20-year-old man threw a Molotov cocktail at Sam Altman's mansion; two days later, people fired a gun at it. Earlier that week, someone fired gunshots into the house of a city councilman who had approved a data center.

Why the AI backlash has turned violent:

1 week ago 1602 534 54 117
On Hearing of Hampshire College’s Closing ….End of a Hopeful Progressive Era?

I wrote a piece on Substack about the closing of Hampshire College. Such a sad end of an era.

open.substack.com/pub/susandbl...

1 week ago 2 2 1 0
The Next Innovation in Higher Education: Vibe-Teaching™ As the associate vice provost for the Office of Asynchronous Online Courses for Student-Centered High-Impact Learning (OAOCSCHIL, an office we crea...

"In lieu of learning outcomes, we now ask whether students have a warm sense of what learning might feel like and whether they can recall, with confidence, that they took 'chemistry.' If so, we mark that as 'exceeds expectations.'"

1 week ago 95 25 2 16

Copilot take the wheel

1 week ago 5 0 0 0
4: Creating AI-Free Learning Spaces - Center for Excellence in Teaching and Learning - Oakland University “There’s a broad and increasing sense from students that something is being stolen from them,” observed Elmira College English professor Matt Seybold, in a recent article in The Guardian on AI’s impact on student learning. That article showcases an emerging consensus in many humanities fields that a host of negative consequences follow from the uncritical embrace on college campuses of AI (which for our purposes here means Large Language Models). Such a critical perspective on AI is sorely needed at OU, where AI’s adoption is often taken to be both inevitable and beneficial. We reject both assumptions. In what follows, we briefly review the body of AI-skeptical research and writing and then share how we’ve created AI-free spaces for our students.

Acknowledging that I only saw this after I got an alert for my name, I agree with it enthusiastically!

I think it’s great to see AI-critical faculty using institutional channels to message to colleagues, students, & staff.

My experience is that these constituencies are open to being persuaded.

1 week ago 25 8 0 1

"I regret to have to inform you that I cannot review work that is partially processed by a product that in my opinion fails to meet standards of scientific integrity that I need to comply with.."

1 week ago 199 62 1 3

Thank you for publishing my piece @readywriting.bsky.social where I describe my journey as a writer grappling with #AI. Long story short: I hate it. But I am honored to be featured alongside these incredible educators in this issue of the NTLF onlinelibrary.wiley.com/share/author...

2 weeks ago 1 1 0 0
"Cognitive surrender" leads AI users to abandon logical thinking, research finds Experiments show large majorities uncritically accepting "faulty" AI answers.

arstechnica.com/ai/2026/04/r...

“In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.”

2 weeks ago 0 1 0 0

Yes! Such great points! 🙌 Love the idea of a classroom as an ecosystem 🌳

2 weeks ago 0 0 0 0

anyone dealt with something like this at your institution where you track data for assessment in courses that use ungrading methods? All advice welcome! 🙏

2 weeks ago 0 0 0 0

Hi #ungrading comrades! I am designing a new EdD in Catholic Social Thought and Practice and we want to have an ungrading format (pass/no pass). The Director of Assessment is unfamiliar with ungrading and says she needs some kind of quantitative scores for assessment (accreditation, etc.). Has …

2 weeks ago 0 0 1 0

Starting a new thread to collect critical perspectives on AI, as they are articulated dozens of times every day and appear repeatedly on my timeline. I can't read everything right away, but if, like me, you want to stay up to date, then this might help a bit:

7 months ago 253 99 206 9

bsky.app/profile/heat...

3 weeks ago 1 0 0 0

No it didnt, thats not true, stop this BS.

The AI didnt decide to do shit about fuck. It didnt decide to post to wikipedia. It didnt get mad and write about being rejected on its blog. A person did those things and had an llm do the typing

STOP. DESCRIBING. THESE THINGS. AS IF. THEY. HAVE. AGENCY!

3 weeks ago 71 18 3 1
To understand these dynamics of intensified inequality or splitting, we have to shift away from whether ChatGPT can or can’t write good papers or how bad the “hallucinations” are, and toward an analysis of the political economy of higher education and the accelerating role of corporate interests, and their proxies in boards of directors and trustees, in defining the scene of learning. On this understanding, AI products are best understood not as technologies of information (as their backers prefer them to be discussed) but of labor management and the accelerated concentration of wealth.

To my surprise I got invited to be on a panel here called "The Problem of AI" and took the opportunity to try to shift the discussion from boosterism & normalization, however "thoughtful and deliberative," & toward the political economy of higher education

nathankhensley.net/uncategorize...

3 weeks ago 101 21 5 9
Innovation or Extraction? AI and the Future of Public Education
Weds 4/8, 5-7:15 PM, LIB 121, San Francisco State

Our event brings together critical perspectives on how artificial intelligence is reshaping schools and universities. Under the guise of enhanced productivity and innovation, the CSU has partnered with several AI companies, including OpenAI, currently working with the Department of War. Our discussion will feature Alex Hanna, co-author of The AI Con, who will engage with the role of AI and its unintended—or intended—consequences, such as the intensification of labor precarity, privatization, the enabling of war and violence, as well as data and ecological extraction in the classroom and beyond.

Speakers: Alex Hanna, Ph.D., Mandana Mohsenzadegan

More information and a link to register: https://scholars.my.salesforce-sites.com/event/home/bayareaopedai

Event Weds 4/8, “Innovation or Extraction?: AI and the Future of Public Education,” San Francisco State University.

With @alexhanna.bsky.social, author of “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want,” and an op-ed workshop by @scholars.org.

All welcome, RSVP below!

3 weeks ago 20 6 1 0

Never thought of it like that before. Would absolutely fit with the overall attack on critical scholarship from white-male supremacy/neofascism (capitalism).

3 weeks ago 9 2 1 0

I am beyond tired of “in the weeds” convos that fail to reckon with the horrifying truth that the AI & Big Tech industries’ will to power is among the proximate causes of the US’s accelerating descent into autocracy. “AI” is both the motivation and the instrument of authoritarian consolidation.

3 weeks ago 143 42 5 2

"Let us not ask what AI can do for Oberlin students, but what Oberlin students can do ourselves, while we still have the brain capacity to think on our own."

(Wondering if Luddite clubs might spring up at other schools where students are similarly frustrated with administrators' #AI obsession...)

3 weeks ago 469 113 9 3

So, I want to explain something very clearly for you, because it’s important you understand how fucked up shit has become: hyperscalers are forcing everybody in their companies to use AI tools as much as possible, tying compensation and performance use to token burn, and actively encouraging non-technical people to vibe-code features that actually reach production. 

In practice, this means that everybody is being expected to dick around with AI tools all day, with the expectation that you burn massive amounts of tokens and, in the case of designers working in some companies, actively code features without ever knowing a line of code. 

“How do I know the last part? Because a trusted source told me — and I’ll leave it at that”

One might be forgiven for thinking this means that AI has taken a leap in efficacy, but the actual outcomes are a labyrinth of half-functional internal dashboards that measure random user data or convert files, spending hours to save minutes of time at some theoretical point. While non-technical workers aren’t necessarily allowed to ship directly to production, their horrifying pseudo-software, coded without any real understanding of anything, is expected to be “fixed” by actual software engineers who are also expected to do their jobs.

These tools also allow near-incompetent Business Idiot software engineers to do far more damage than they might have in the past. LLM use is relatively-unrestrained (and actively incentivized) in at least one hyperscaler, with just about anybody allowed to spin up their own OpenClaw “AI agent” (read: series of LLMs that allegedly can do stuff with your inbox or Slack for no clear benefit, other than their ability to delete all of your emails). In Meta’s case, this ended up causing a severe security breach:

According to internal Meta communications and an incident report seen by The Information, a major security alert occurred last week after a Meta software engineer used an in-house agent tool, similar to OpenClaw, to analyze a technical question that another Meta employee had posted on an internal discussion forum. After doing the analysis, the AI agent posted a response in the discussion forum to the original question, offering advice on the technical issue, according to internal communications. The agent did so without approval from the employee.
According to The Information, Meta systems storing large amounts of company and user-related data were accessible to engineers who didn’t have permission to see them, and was marked a sec-1 incident, the second highest level of severity on an internal scale that Meta uses to rank security incidents. 

The incident follows multiple problems caused at Amazon by its Kiro and Q LLMs. I quote Business Insider’s Eugene Kim: 

On March 2, customers across Amazon marketplaces saw incorrect delivery times when adding items to their carts. The incident led to nearly 120,000 lost orders and roughly 1.6 million website errors. Amazon's AI tool Q was one of the primary contributors that triggered the event, according to an internal review.

On March 5, another outage caused a 99% drop in orders across Amazon's North American marketplaces, resulting in 6.3 million lost orders, one of the internal documents stated. One key factor was …

As this happens, LLMs are actively harming big tech: creating problems for hyperscalers like Meta and Amazon, leaking data, and breaking services, as non-coders are incentivized to ship product and LLM use becomes part of performance reviews.

www.wheresyoured.at/the-ai-industry-is-lying-to-you/

4 weeks ago 76 9 2 0
An Open Letter to Georgetown Students, In Response to Recent Announcements about "Generative AI" An Open Letter to Georgetown Students, In Response to Recent Announcements by the University about “Generative AI” Image source: Bibliothèque nationale de France Dear students, As you know, in …

See also medium.com/center-on-pr...

4 weeks ago 39 9 1 0
Spring is for new beginnings… The spring equinox is here which marks the moment the sun is directly overhead at the equator and the day and night are almost equal in…

Happy 🌱Spring 🌱 medium.com/@heatherlesl...

1 month ago 0 0 0 0
The World's First Fully Automated Genius System Experience the future of education with PureGenius - an AI-powered learning platform that unlocks every student's potential. Sign up for early access today.

long story but i just founded an AI education platform. if you want to join the board or contribute to the blog, hit me up. just submitted a conference talk on "surveillance as care" to ELO. wish me luck!!!! puregenius.education

1 month ago 187 48 28 55

Not sure if you've read Virginia Eubanks' Automating Inequality, but she synthesizes a lot of scholarship on, and surfaces from her own research, a lot of issues with "human in the loop" as a concept, and why it ultimately fails in implementation.

1 month ago 7 1 1 1