We asked people who lived in homeless encampments that were cleared out in city “sweeps” to write about what object was the hardest for them to lose.
“They took my baby pictures and my moms obituaries,” a man in California wrote.
(Published Dec. 2024)
Posts by Dominic DiFranzo
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that: e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
“Google has created a misinformation crisis. Studies have shown that people tend to trust what AI tells them without question… Another experiment found that users still listened to AI when it gave them the wrong answer nearly 80% of the time—a grim trend the researchers dubbed ‘cognitive surrender.’”
In Chapter 13 of my book Copaganda called The Big Deception, I explained why falsifying intentions of those in power is always key to propaganda. And why subtle claims like this are such effective propaganda, particularly against professional class liberal intellectuals: bsky.app/profile/equa...
Young adult happiness is falling. The percentage of young people who say they are pretty or very happy has dropped from 88% to 76% over the last 15 years, after holding fairly steady between 1980 and 2010.
My hottest “I have no data to back this up” take is that phones aren’t making young people less happy. What’s making them less happy is living in a world where everyone is having their reality defined by recommendation algorithms that can only care about whether something keeps you watching.
Sam Altman has a new plan to make money from generative AI: he wants intelligence to be treated like water or electricity — and we’ll all have to pay him for it.
His chatbots are degrading people’s ability to retain information and think critically. Now he wants to sell smarts back to us.
Thoughtful comments from several scholars about what SCORE findings do and do not mean in this Chronicle piece.
[Soft paywall - just need to create a free account.]
www.chronicle.com/article/lots...
This new Nature paper (using old models) illustrates the point of my latest Substack post on AI interfaces. AI did a good job diagnosing medical issues until users had to interact with chatbots; then the interface led to confusion & worse answers.
My post: www.oneusefulthing.org/p/claude-dis...
So many people assume that story generation has been solved for a while. Mythos is showing one of the typical failure modes of LLM story generation: taking refuge in prolonged banter between characters that never advances the plot or understanding of the characters.
in america, our legislative branch is on vacation right now. that’s not a joke or a euphemism, they’re on spring break. not a single person in america believes that they’ll do anything.
Stocks soar on 2-week postponement of Armageddon.
lmao this one is incredible. “Head of product at X condemns quote-tweeting” my brother in Jira you’re the shift manager at the quote tweet factory
yeah it's so rare for somebody to *checks notes* desire endless admiration and positive regard, while also being *checks notes* dishonest and even dangerous. that's so rare. almost never occurs.
🚨NEW DISINFO PAPER🚨 TLDR; disinformation circulates as narratives, not false facts. This paper took five years (!!!) and a rotating cast of collaborators and GRAs. Our case studies include the pee tape, and we have an entire appendix justifying that. www.tandfonline.com/doi/full/10....
This is not let’s-propose-this-and-see-what-Congress-says. Their plan is more like USAID: eliminate before Congress can weigh in.
We are the frogs slowly being boiled. Can you imagine what you would think to see a US president writing this tweet ten years ago?
I Work Very Hard, And I Would Like To Try Cake
By A Horse

Hello. I am a horse. I work very hard at my job of being a horse. When humans say move the heavy thing, I move the heavy thing. When humans sit on top of me and pull on my head, I carry them where they want to go. The main food the humans give me is hay and oats. But I am thinking it would be nice to have a different food. I am thinking I would like to try cake.

Yes, yes. Cake. I know all about it. When humans eat cake, it is in glad times. It is the food for a celebration, such as when a woman becomes 47. I have seen cake on the Fourth of July. When humans have a cake, they stand around it and clap hands and smile and say happy birthday at each other. Sometimes there are beautiful markings on a cake, such as balloons or a pink shape. Sometimes the top of a cake is on fire and a boy must blow on the fire with mouth wind. This is the scariest cake. I do not want this kind. But I will eat any other cake. Any cake that is not the fire cake that tries to kill the boy.

Please understand: I do not get money for doing work. I do not get to go inside the house. All day I am either doing my horse job or standing in my pen or eating food off the floor. I always do these things. But I have never once gotten cake and I would like it very much.

I have noticed that human children get to eat cake. But I am bigger than the children. I am more helpful to the farm. Children do not move the heavy things like me or let anyone ride on them. And yet they get cake. Maybe the humans will realize this. Maybe they will say, “You know who deserves cake? That horse. That horse whose back we are always on.”

Every day I dream about what it will be like if I get to eat cake. Here is what will happen. First, I will walk to the cake and put my nose at it like hrrfff and stomp my hooves to make sure it is not a snake. Then I will trot in a circle to show that I am a horse and I am large. After that, I will nuzzle the cake to …
The horse op-ed is an instant classic. I can't tell you how much joy this piece gives me.
It should be taught in every introductory writing class in no small part because the horse arguments are so compelling. "I have noticed that human children get to eat cake. But I am bigger than the children."
“…the researchers argue that AI systems have given rise to a categorically different form of “cognitive surrender” in which users provide “minimal internal engagement” and accept an AI’s reasoning wholesale without oversight or verification.”
"The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding."
Fundraiser for librarian Luanne James, who stood up to her right-wing library board that demanded she remove kids’ books with any trace of LGBTQ+ characters or content.
"I will not comply."
They sacked her and cops escorted her out.
She's a hero. This banned author just donated.
Oh so now copyright matters.
Disturbing.
This study suggests that just one encounter with a sycophantic chatbot tends to "erode prosocial motivations," even for sophisticated users.
“Sycophancy was present across all the chatbots they tested, & the bots frequently told users that their actions or beliefs were justified in cases where the user was acting deceptively, doing something illegal, or engaging in otherwise harmful or abusive behavior.”🧪
They’d rather the country be poorer and whiter than richer and more diverse. Of course when the pie shrinks, the rich will demand the same amount and will tell you the crumbs you’re left with are because of immigrants or trans people or DEI. bsky.app/profile/apne...
Updated versions of my misinformation and experiments course syllabi now posted:
Political Misinformation and Conspiracy Theories
sites.dartmouth.edu/nyhan/files/...
Experiments in Politics sites.dartmouth.edu/nyhan/files/...
roon @tszzl the private sector has been remaking its own versions of NIH, ARPA etc as these public science institutions have seen structural decline and defunding and it will be supercharged by the funding NPV of machine intelligence and its firepower at allocation decisions
This is only true for people who understand neither science nor economics.
The NIH budget for this year is FIFTY times larger than OpenAI’s $1B pledge.
The foundation of US science & innovation is public funding. The private sector cannot replace it.
US science is being killed
“The study found that, on average, AI chatbots affirmed a user’s actions 49 percent more often than other humans did, including in queries involving deception, illegal or socially irresponsible conduct, and other harmful behaviors.”
Just finished talking to students about why the humanities are even more important in this age of AI.
Efficiency isn't always the most important goal. Reading carefully and knowing how to think critically are valuable on their own--even if that takes longer.
A line graph of the number of NSF awards in fiscal 2026 compared to fiscal years 2021-2025. The fiscal year 2026 is well below the other curves and increasing only very slowly.
NSF Update through March 13, 2026
1/2
I don't think people fully appreciate how apocalyptic things are for US science. I haven't had any new funding since 2024, but I'm still ok since typical grants are for three years. This means next year I will be completely out of funding and will have to fire everyone in the lab. It's not great.