
Posts by Dominic DiFranzo

“I Have Lost Everything”: The Toll of Cities’ Homeless Sweeps Cities often take belongings — including important documents and irreplaceable mementos — when they conduct sweeps of homeless encampments. ProPublica gave notecards to people across the country so th...

We asked people who lived in homeless encampments that were cleared out in city “sweeps” to write about what object was the hardest for them to lose.

“They took my baby pictures and my moms obituaries,” a man in California wrote.

(Published Dec. 2024)

12 hours ago
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.


Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n

7 months ago

“Google has created a misinformation crisis. Studies have shown that people tend to trust what AI tells them without question… Another experiment found that users still listened to AI when it gave them the wrong answer nearly 80% of the time — a grim trend the researchers dubbed ‘cognitive surrender.’”

1 week ago

In Chapter 13 of my book Copaganda called The Big Deception, I explained why falsifying intentions of those in power is always key to propaganda. And why subtle claims like this are such effective propaganda, particularly against professional class liberal intellectuals: bsky.app/profile/equa...

1 week ago
Young adult happiness is falling. The percentage of young people who say they are pretty or very happy has dropped from 88% to 76% over the last 15 years, after holding fairly steady between 1980 and 2010.


My hottest “I have no data to back this up” take is that phones aren’t making young people less happy. What’s making them less happy is living in a world where everyone is having their reality defined by recommendation algorithms that can only care about whether something keeps you watching.

1 week ago
Make ‘em dumb, sell ‘em smarts Sam Altman wants intelligence to be a utility that you pay him for

Sam Altman has a new plan to make money from generative AI: he wants intelligence to be treated like water or electricity — and we’ll all have to pay him for it.

His chatbots are degrading people’s ability to retain information and think critically. Now he wants to sell smarts back to us.

1 week ago
Lots of Social Science Didn’t Replicate. Does That Mean It’s Bunk? Scholars are debating the results of an effort to assess hundreds of papers’ credibility. Where some see failure and cause for urgent reform, others see reason for hope.

Thoughtful comments from several scholars about what SCORE findings do and do not mean in this Chronicle piece.

[Soft paywall - just need to create a free account.]

www.chronicle.com/article/lots...

1 week ago

This new Nature paper (using old models) illustrates the point of my latest Substack post on AI interfaces. AI did a good job diagnosing medical issues until users had to interact with chatbots; then the interface led to confusion & worse answers.

My post: www.oneusefulthing.org/p/claude-dis...

2 weeks ago

So many people assume that story generation has been solved for a while. Mythos is showing one of the typical failure modes of LLM story generation: taking refuge in prolonged banter between characters that never advances the plot or understanding of the characters.

1 week ago

in america, our legislative branch is on vacation right now. that’s not a joke or a euphemism, they’re on spring break. not a single person in america believes that they’ll do anything.

1 week ago

Stocks soar on 2-week postponement of Armageddon.

1 week ago

lmao this one is incredible. “Head of product at X condemns quote-tweeting” my brother in Jira you’re the shift manager at the quote tweet factory

2 weeks ago

yeah it's so rare for somebody to *checks notes* desire endless admiration and positive regard, while also being *checks notes* dishonest and even dangerous. that's so rare. almost never occurs.

2 weeks ago
Disinformation as Cultural Narrative: Conceptualizing Disinformation as Cross-Platform, Identity-Affirming, Cathartic Stories Rather than framing disinformation as false facts which can be countered by true facts, we propose a model of disinformation as narrative by tracing three case studies of successful disinformation ...

🚨NEW DISINFO PAPER🚨 TLDR; disinformation circulates as narratives, not false facts. This paper took five years (!!!) and a rotating cast of collaborators and GRAs. Our case studies include the pee tape, and we have an entire appendix justifying that. www.tandfonline.com/doi/full/10....

2 weeks ago

This is not let’s-propose-this-and-see-what-Congress-says. Their plan is more like USAID: eliminate before Congress can weigh in.

2 weeks ago

We are the frogs slowly being boiled. Can you imagine what you would think to see a US president writing this tweet ten years ago?

2 weeks ago
I Work Very Hard, And I Would Like To Try Cake

By A Horse

Hello. I am a horse. I work very hard at my job of being a horse. When humans say move the heavy thing, I move the heavy thing. When humans sit on top of me and pull on my head, I carry them where they want to go. The main food the humans give me is hay and oats. But I am thinking it would be nice to have a different food.

I am thinking I would like to try cake.

Yes, yes. Cake. I know all about it. When humans eat cake, it is in glad times. It is the food for a celebration, such as when a woman becomes 47. I have seen cake on the Fourth of July. When humans have a cake, they stand around it and clap hands and smile and say happy birthday at each other. Sometimes there are beautiful markings on a cake, such as balloons or a pink shape.

Sometimes the top of a cake is on fire and a boy must blow on the fire with mouth wind. This is the scariest cake. I do not want this kind. But I will eat any other cake. Any cake that is not the fire cake that tries to kill the boy.

Please understand: I do not get money for doing work. I do not get to go inside the house. All I am either doing my horse job or standing in my pen or eating food off the floor. I always do these things. But I have never once gotten cake and I would like it very much.

I have noticed that human children get to eat cake. But I am bigger than the children. I am more helpful to the farm. Children do not move the heavy things like me or let anyone ride on them. And yet they get cake. Maybe the humans will realize this. Maybe they will say, "You know who deserves cake? That horse. That horse whose back we are always on."

Every day I dream about what it will be like if I get to eat cake. Here is what will happen. First, I will walk to the cake and put my nose at it like hrrfff and stomp my hooves to make sure it is not a snake. Then I will trot in a circle to show that I am a horse and I am large. After that, I will nuzzle the cake to …


The horse op-ed is an instant classic. I can't tell you how much joy this piece gives me.

It should be taught in every introductory writing class in no small part because the horse arguments are so compelling. "I have noticed that human children get to eat cake. But I am bigger than the children."

2 weeks ago
"Cognitive surrender" leads AI users to abandon logical thinking, research finds Experiments show large majorities uncritically accepting "faulty" AI answers.

“…the researchers argue that AI systems have given rise to a categorically different form of “cognitive surrender” in which users provide “minimal internal engagement” and accept an AI’s reasoning wholesale without oversight or verification.”

2 weeks ago

"The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding."

2 weeks ago
Donate to Help Luanne James in Her Time of Need, organized by Dianne M On March 30th, the Rutherford County Library Board terminated Luanne James from her pos… Dianne M needs your support for Help Luanne James in Her Time of Need

Fundraiser for librarian Luanne James, who stood up to her right-wing library board, which demanded she remove kids’ books with any trace of LGBTQ+ characters or content.

"I will not comply."

They sacked her and cops escorted her out.

She's a hero. This banned author just donated.

2 weeks ago

Oh so now copyright matters.

2 weeks ago

Disturbing.

This study suggests that just one encounter with a sycophantic chatbot tends to "erode prosocial motivations," even for sophisticated users.

3 weeks ago

“Sycophancy was present across all the chatbots they tested, & the bots frequently told users that their actions or beliefs were justified in cases where the user was acting deceptively, doing something illegal, or engaging in otherwise harmful or abusive behavior.”🧪

3 weeks ago

They’d rather the country be poorer and whiter than rich and diverse. Of course when the pie shrinks, the rich will demand the same amount and will tell you the crumbs you’re left with are because of immigrants or trans people or DEI. bsky.app/profile/apne...

3 weeks ago

Updated versions of my misinformation and experiments course syllabi now posted:

Political Misinformation and Conspiracy Theories
sites.dartmouth.edu/nyhan/files/...

Experiments in Politics sites.dartmouth.edu/nyhan/files/...

3 weeks ago
roon @tszzl
the private sector has been remaking its own versions of NIH, ARPA etc as these public science institutions have seen structural decline and defunding and it will be supercharged by the funding NPV of machine intelligence and its firepower at allocation decisions


This is only true for people who understand neither science nor economics.

The NIH budget for this year is FIFTY times larger than OpenAI’s $1B pledge.

The foundation of US science & innovation is public funding. The private sector cannot replace it.
US science is being killed.

3 weeks ago
AI chatbots are probably giving you bad advice, new study finds - The Boston Globe Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviors, according to...

“The study found that, on average, AI chatbots affirmed a user’s actions 49 percent more often than other humans did, including in queries involving deception, illegal or socially irresponsible conduct, and other harmful behaviors.”

3 weeks ago

Just finished talking to students about why the humanities are even more important in this age of AI.

Efficiency isn't always the most important goal. Reading carefully and knowing how to think critically is valuable on its own--even if that takes longer.

3 weeks ago
A line graph of the number of NSF awards in fiscal 2026 compared to fiscal years 2021-2025. The fiscal year 2026 is well below the other curves and increasing only very slowly.


NSF Update through March 13, 2026

1/2

1 month ago

I don't think people fully appreciate how apocalyptic things are for US science. I haven't had any new funding since 2024, but I'm still ok since typical grants are for three years. This means next year I will be completely out of funding and will have to fire everyone in the lab. It's not great.

3 weeks ago