In which I argue that "AI" as we understand it today is a structurally fascist artifact.
https://tante.cc/2026/04/21/ai-as-a-fascist-artifact/
Posts by Dr. Heather Leslie
The study points to using libraries and visiting museums as bringing these enormous brain health benefits ... who would have thought? Well, librarians and museum folk for a start ... www.theguardian.com/society/2026...
Beyond the hype: Feminists Tackle AI panel
Speakers challenge assumptions about Artificial Intelligence
By Emily Torrance - April 19, 2026
themuse.ca/beyond-the-h...
What does it look like to critically examine AI in the middle school Computer Science classroom?
www.civicsoftechnology.org/blog/introdu...
Also folks don’t seem to appreciate the difference between work and school. At work you are trying to deliver a product effectively and efficiently. Is AI helpful to that end? Maybe? But in school you are TRAINING YOUR MIND. AI is not at all helpful.
Last week, a 20-year-old man threw a Molotov cocktail at Sam Altman's mansion; two days later, people fired a gun at it. Earlier that week, someone fired gunshots into the house of a city councilman who had approved a data center.
Why the AI backlash has turned violent:
I wrote a piece on Substack about the closing of Hampshire College. Such a sad end of an era.
open.substack.com/pub/susandbl...
"In lieu of learning outcomes, we now ask whether students have a warm sense of what learning might feel like and whether they can recall, with confidence, that they took 'chemistry.' If so, we mark that as 'exceeds expectations.'"
Copilot take the wheel
Acknowledging that I only saw this after I got an alert for my name, I agree with it enthusiastically!
I think it’s great to see AI-critical faculty using institutional channels to message to colleagues, students, & staff.
My experience is that these constituencies are open to being persuaded.
"I regret to have to inform you that I cannot review work that is partially processed by a product that, in my opinion, fails to meet standards of scientific integrity that I need to comply with."
Thank you for publishing my piece @readywriting.bsky.social where I describe my journey as a writer grappling with #AI. Long story short: I hate it. But I am honored to be featured alongside these incredible educators in this issue of the NTLF onlinelibrary.wiley.com/share/author...
arstechnica.com/ai/2026/04/r...
“In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.”
Yes! Such great points! 🙌 Love the idea of a classroom as an ecosystem 🌳
anyone dealt with something like this at your institution where you track data for assessment in courses that use ungrading methods? All advice welcome! 🙏
Hi #ungrading comrades! I am designing a new EdD in Catholic Social Thought and Practice and we want to have an ungrading format (pass/no pass). The Director of Assessment is unfamiliar with ungrading and says she needs some kind of quantitative scores for assessment (accreditation, etc.). Has …
Starting a new thread to collect critical perspectives on AI, as they are articulated dozens of times every day and appear repeatedly on my timeline. I can't read everything right away, but if, like me, you want to stay up to date, then this might help a bit:
bsky.app/profile/heat...
No it didn't, that's not true, stop this BS.
The AI didn't decide to do shit about fuck. It didn't decide to post to Wikipedia. It didn't get mad and write about being rejected on its blog. A person did those things and had an LLM do the typing.
STOP. DESCRIBING. THESE THINGS. AS IF. THEY. HAVE. AGENCY!
To understand these dynamics of intensified inequality or splitting, we have to shift away from whether ChatGPT can or can’t write good papers or how bad the “hallucinations” are, and toward an analysis of the political economy of higher education and the accelerating role of corporate interests, and their proxies in boards of directors and trustees, in defining the scene of learning. On this understanding, AI products are best understood not as technologies of information (as their backers prefer them to be discussed) but of labor management and the accelerated concentration of wealth.
To my surprise I got invited to be on a panel here called "The Problem of AI" and took the opportunity to try to shift the discussion from boosterism & normalization, however "thoughtful and deliberative," & toward the political economy of higher education
nathankhensley.net/uncategorize...
Innovation or Extraction? AI and the Future of Public Education Weds 4/8, 5-7:15 PM, LIB 121, San Francisco State Our event brings together critical perspectives on how artificial intelligence is reshaping schools and universities. Under the guise of enhanced productivity and innovation, the CSU has partnered with several AI companies, including OpenAI, currently working with the Department of War. Our discussion will feature Alex Hanna, co-author of The AI Con, who will engage with the role of AI and its unintended—or intended—consequences, such as the intensification of labor precarity, privatization, the enabling of war and violence, as well as data and ecological extraction in the classroom and beyond. Speakers: Alex Hanna, Ph.D., Mandana Mohsenzadegan More information and a link to register: https://scholars.my.salesforce-sites.com/event/home/bayareaopedai
Event Weds 4/8, “Innovation or Extraction?: AI and the Future of Public Education,” San Francisco State University.
With @alexhanna.bsky.social, author of “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want,” and an op-ed workshop by @scholars.org.
All welcome, RSVP below!
Never thought of it like that before. Would absolutely fit with the overall attack on critical scholarship from white-male supremacy/neofascism (capitalism).
I am beyond tired of “in the weeds” convos that fail to reckon with the horrifying truth that the AI & Big Tech industries’ will to power is among the proximate causes of the US’s accelerating descent into autocracy. “AI” is both the motivation and the instrument of authoritarian consolidation.
"Let us not ask what AI can do for Oberlin students, but what Oberlin students can do ourselves, while we still have the brain capacity to think on our own."
(Wondering if Luddite clubs might spring up at other schools where students are similarly frustrated with administrators' #AI obsession...)
So, I want to explain something very clearly for you, because it's important you understand how fucked up shit has become: hyperscalers are forcing everybody in their companies to use AI tools as much as possible, tying compensation and performance reviews to token burn, and actively encouraging non-technical people to vibe-code features that actually reach production.

In practice, this means that everybody is expected to dick around with AI tools all day, with the expectation that you burn massive amounts of tokens and, in the case of designers working in some companies, actively code features without ever knowing a line of code. (How do I know the last part? Because a trusted source told me, and I'll leave it at that.)

One might be forgiven for thinking this means that AI has taken a leap in efficacy, but the actual outcomes are a labyrinth of half-functional internal dashboards that measure random user data or convert files, spending hours to save minutes of time at some theoretical point. While non-technical workers aren't necessarily allowed to ship directly to production, their horrifying pseudo-software, coded without any real understanding of anything, is expected to be "fixed" by actual software engineers who are also expected to do their own jobs.
These tools also allow near-incompetent Business Idiot software engineers to do far more damage than they might have in the past. LLM use is relatively unrestrained (and actively incentivized) in at least one hyperscaler, with just about anybody allowed to spin up their own OpenClaw "AI agent" (read: a series of LLMs that allegedly can do stuff with your inbox or Slack for no clear benefit, other than their ability to delete all of your emails). In Meta's case, this ended up causing a severe security breach:

According to internal Meta communications and an incident report seen by The Information, a major security alert occurred last week after a Meta software engineer used an in-house agent tool, similar to OpenClaw, to analyze a technical question that another Meta employee had posted on an internal discussion forum. After doing the analysis, the AI agent posted a response in the discussion forum to the original question, offering advice on the technical issue, according to internal communications. The agent did so without approval from the employee.

According to The Information, Meta systems storing large amounts of company and user-related data were accessible to engineers who didn't have permission to see them, and the incident was marked sec-1, the second-highest level of severity on an internal scale that Meta uses to rank security incidents.

The incident follows multiple problems caused at Amazon by its Kiro and Q LLMs. I quote Business Insider's Eugene Kim:

On March 2, customers across Amazon marketplaces saw incorrect delivery times when adding items to their carts. The incident led to nearly 120,000 lost orders and roughly 1.6 million website errors. Amazon's AI tool Q was one of the primary contributors that triggered the event, according to an internal review. On March 5, another outage caused a 99% drop in orders across Amazon's North American marketplaces, resulting in 6.3 million lost orders, one of the internal documents stated. One key factor was …
As this happens, LLMs are actively harming big tech, creating problems for hyperscalers like Meta and Amazon, leaking data and breaking services, as non-coders are incentivized to ship product and LLM use becomes part of performance reviews.
www.wheresyoured.at/the-ai-industry-is-lying-to-you/
long story but i just founded an AI education platform. if you want to join the board or contribute to the blog, hit me up. just submitted a conference talk on "surveillance as care" to ELO. wish me luck!!!! puregenius.education
Not sure if you've read Virginia Eubanks' Automating Inequality, but she synthesizes a lot of scholarship on, as well as surfaces from her own research, the issues with "human in the loop" as a concept, and why it ultimately fails in implementation.