A "pause AI" letter that I actually like (and signed)! Because it's not about imagined doomsday scenarios, but rather putting the brakes on the rush to impose synthetic text extruding machines in schools in particular:
actionnetwork.org/petitions/ca...
Posts by Alex Helberg
We dealt with some pretty grim subject matter on this episode, but these analyses always help me elucidate which premises are worth questioning as we continue to be overwhelmed with both military propaganda and AI hype
🚨 New re:verb episode out this morning! 🚨
On today's show we're taking a look at the ominous role that so-called "AI" technologies and companies are playing in the military-industrial complex.
www.reverbcast.com/podcasts/202...
My friend Kaolin is a queer nb public school teacher in Philly, who recently went through the horror of shepherding their cat Scoot through diabetic ketoacidosis, and is now dealing with the financial stress of vet hospital bills.
These two mean a lot to me - please share and contribute if you can!
As someone who also underwent a cat emergency health scare recently (to the tune of $5k), I know how nightmarish these vet bills can be, and how challenging the decision to move forward is. I think it's critical to help support ppl who care about their animal friends enough to sacrifice like this!
🎆re:verb is back in 2026 with our first new episode of the year!🎉
This episode hits very close to home for Calvin & Alex:
E106: CMU Coup? (w/ @caevans.bsky.social & @seeshespeak.bsky.social)
www.reverbcast.com/podcasts/202...
To anyone looking to follow along, here's a playlist with as many songs as can be found on YT:
www.youtube.com/playlist?lis...
Following and participating in this has been a great way to keep a little mindfulness practice and experience a bit of communal warmth at the outset of a (so far) rough year. Highly recommend following along on this playlist if you're reading the book:
www.youtube.com/playlist?lis...
Amplifying some key claims from this bold/important article by @ehayot.bsky.social + @mattseybold.bsky.social
1/ …
Just woke up from a Long Winter's Nap™ and I've gotta say I cannot recommend it enough
Came here to say "She breeeeeaaaaks her horses"
Somewhat ashamed to admit this was my first exposure to Smashing Pumpkins as a child www.youtube.com/watch?v=XgU-...
Since we're new over here, we've been re-posting some of our favorite episodes featuring friends & past guests we've had on the show who have found a home on BlueSky.
It's also a good starter pack for anyone new to the show!
Here's a master 🧵...
the fact that dealing with rampant lying and its much worse downstream consequences involves cultivating values rather than just imposing formal frameworks or rules (legal or otherwise) is a tough pill to swallow for some. but it is indeed medicine
Saw this post and immediately started hearing "Moootherrrrrr..... Faaaatherrrrrrr"
This has been written about recently - it's called "workslop" hbr.org/2025/09/ai-g...
Everyone just keep blocking this garbage.
bsky.app/profile/did:...
Any excuse to keep showing the Pepe documentary in my intro digital rhetoric classes
vimeo.com/ondemand/fee...
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that: e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below, some of the typical terminological disarray is untangled. Importantly, these terms are not orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
"We are each our own devil, and we make this world our hell" also feels apt
The broligarchy is rushing to tear through the administration centers of our government. Their dream is to replace large amounts of administration with AI in the name of "efficiency". This is despite daily reminders that LLMs regularly make stupid, hard-to-detect errors.
1/6
There is another issue that the broligarchy isn't attending to, or is deliberately ignoring: Accountability
When a human administrator makes an error, there is a legal entity to whom the error can be attributed, penalties can be assigned if necessary, and corrections can be made.
AI systems cannot be held accountable.
3/6
By "miss," I think we're referring to the fact that this is a very silly thing to get upset about considering what else is happening in the AI / tech oligarch world right now
Really like your work, Ed, but I think this is very much beside the point rn
Don't know why people are dedicating so much time to criticizing this - seems like a weird distraction from far worse things. Let this scholarly trend away from talking about phantasms like AGI take shape, someone else will publish a paper with prose more to your liking someday
Idk man, picking on a group of researchers because of their prose style feels like an inconsequential soft target compared to the actual morons who believe in AGI currently ransacking the federal government and feeding sensitive data to AI models
Ah memories
How hard is it to not dance on the graves of disabled people singing "told you so," and instead do some introspection on the real tragedy: that the opposition party wasn't able to mount a convincing argument that there was a reliable alternative?
One thing that hasn't changed is the rhetorical front of "national security" discourse that's being used to justify the ban, which we interpret as an attempt to obscure the realpolitik of consolidating tech & communications infrastructure within the US economy (and under its legal jurisdiction)