
Posts by Phil Choong

moby-dick, his eyes enormous: from hell's heart you STAB at moby? for hate's sake you spit your last BREATH at moby? oh! oh! the great shroud of the sea for ahab! the great shroud of the sea rolling on as it rolled five thousand years ago!!!!

3 weeks ago 5521 2010 43 33
Resisting Dehumanization in the Age of "AI": The View from the Humanities, Emily M. Bender (YouTube video by Simpson Center)

#TalkAboutHumanities

We need scholars across the humanities, because these are the fields where we study what it is to be human, to inhabit different identities, and to connect with each other, to be human together.

youtu.be/T7Lc6QNxolQ...

5 days ago 62 18 0 1

Appeal to romance readers! My gift to Romancelandia is this:

5 days ago 60 20 2 1
Abstract: Under the banner of progress, products have been uncritically adopted or
even imposed on users — in past centuries with tobacco and combustion engines, and in
the 21st with social media. For these collective blunders, we now regret our involvement or
apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we
are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not
considered a valid position to reject AI technologies in our teaching and research. This
is why in June 2025, we co-authored an Open Letter calling on our employers to reverse
and rethink their stance on uncritically adopting AI technologies. In this position piece,
we expound on why universities must take their role seriously to a) counter the technology
industry’s marketing, hype, and harm; and to b) safeguard higher education, critical
thinking, expertise, academic freedom, and scientific integrity. We include pointers to
relevant work to further inform our colleagues.


Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI
(black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are
in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are
both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and
Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf.
Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al.
2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms
are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n

7 months ago 3944 1974 111 406

In my opinion, the main task of exploring propaganda is not demonstrating that it works, it's convincing people who are assured of their own intelligence & free will that it could work on people like them

1 week ago 402 54 4 2

Games are a unique art form. They are direct instructions, with an audience primed to follow them. They are unlike music. Unlike graphic arts. Unlike fiction.

Game design is about how instructions focus us, inspire us, outrage us, motivate us, reveal us, help us connect to each other.

1 week ago 211 31 9 1

and it is still infinity times better to be the kind of person who loves monarch butterflies, and the mosaic tilework on an ancient mosque, and kids whose names you'll never know running around a field kicking a soccer ball, than the kind who views them with indifference, but it is also painful

2 weeks ago 279 42 2 0

"To Asia, Africa and Oceania, we are looking back at you. We hear you can look up and see the moon right now. We see you, too. Ultimately, we will always choose Earth. We will always choose each other.” Artemis II Astronaut Christina Koch 🌕🌎

2 weeks ago 25 7 0 0

Red States: We're going to cut all the programs that don't support our theology
Blue States: We're going to cut the same programs, but because they don't support the business school

3 weeks ago 1377 429 36 28

this is art

3 weeks ago 2386 534 62 35
The Master's Tools Will Never Dismantle the Master's House - Oct. 29, 1979 - Archives of Women's Political Communication

Audre Lorde's speech can be read here awpc.cattcenter.iastate.edu/communicatio...

3 weeks ago 15 3 0 0
On The Enshittification of Audre Lorde: "The Master's Tools" in Tech Discourse 🖼️Cover Photo: Train at the Nairobi terminus of the Mombasa–Nairobi Standard Gauge Railway. It runs parallel to the Uganda Railway that was completed in 1901. The first fare-paying passengers boarded ...

This is an essential read, and the closing on credentialing versus engagement and "centering the people for whom the stakes are highest" in particular. (H/T @peachfleurr.bsky.social @robin.berjon.com)

3 weeks ago 47 20 1 5

wouldn't it be funny if EVERYONE blocked this Attie AI account @ bsky.app/profile/atti... before they could do anything with it

3 weeks ago 15300 12721 543 418

As one of the authors of this, I thought I'd share a bit about how we got here and how you can do what we've done at your institution.

Over the two years I've been at the University of Edinburgh, I've grown increasingly concerned by fellow academics uncritically using LLMs, especially OpenAI's ChatGPT.

1 month ago 160 85 3 5

The push to make syllabi “public” is not about transparency. It’s about creating conditions for outside intimidation and manufacturing pretexts for censorship.

1 month ago 42 15 1 1

One thing that AI evangelists, particularly legal AI evangelists, seem to assume is that the thinking and analysis is quick, and the writing is just tedious busywork that slows us down.

But the writing IS the thinking and analysis. You work out the thinking and analysis by writing it down.

1 month ago 4548 1050 103 187

this is so perfect as a phrasing

"AI asks that you buy into the idea that more data means being closer to The Truth."

I also explain this and take pains to really hammer home this is NOT EVEN TRUE in science, more data is NOT a better theory or account!

1 month ago 35 11 0 0
The Ends of AI Sycophancy and psychosis

I wrote about AI --
but, really, about the close proximity to psychosis it brings us all to... a trillion dollar project to detach us all from reality, one way or another...

disjunctionsmag.com/articles/end...

1 month ago 237 85 10 31

Also: being "in the loop" in the context of evaluating output seems like quality control on an assembly line: it only works when you know what the output is supposed to look like. But with writing you so often don't know whether that is what you would have thought/said if you had done the thinking.

1 month ago 25 6 1 0
"BE IT THEREFORE RESOLVED that we affirm the rights of students and teachers to refuse to sign up for, prompt, or otherwise use generative AI in the writing classroom."


College writing teachers have spoken, y'all.

The CCCC resolution affirming students' and teachers' right to refuse generative AI in the writing classroom passed by an overwhelming majority at the #4C26 Annual Business Meeting this past Friday, March 6.

Link to the full resolution below.

1 month ago 527 162 6 19
MLA, ACLS, and AHA lawsuit reveals use of ChatGPT in illegal termination of grants by DOGE. Motion for summary judgment asserts violations of the First Amendment; violations of the Equal Protection Clause; and violation of the separation of powers. mla.org/NEH-Lawsuit


The MLA, @acls1919.bsky.social, and @historians.org have filed a motion for summary judgment in our lawsuit to restore the NEH. Discovery documents reveal that DOGE rather than the acting chair led grant terminations and targeted grants using ChatGPT. More at mla.org/NEH-Lawsuit

1 month ago 195 93 5 13

anthropic: I have made AI

tech nerds: you fucked up a perfectly good computer is what you did. look at it. it's got anxiety

1 month ago 2861 458 33 9

How I wept tonight.

“We can endure this, & be a guiding light through it, but only by recentering, by teaching citizens, not workers; power, not PowerPoint; aspiration, not apocalypse. Despair is how we lose. The classroom is where we battle it. All other battles flow from here.”

1 month ago 49 18 0 0

Good thread.

There are disabled people in the future.

The only reason you wouldn’t see them is if that society has gone full steam ahead on eugenics and they’re unable to live freely or be accommodated in public.

2 months ago 253 66 5 1

While reporting this, I had something happen that's never happened. A comms rep for one of the co's disputed my reporting and said what I was telling them was untrue because it was not in Grok, xAI's chatbot.

I was looking directly at the files. And this person was using AI to challenge the truth.

2 months ago 9539 3226 199 224
Writing Classes Are About Writing, Not AI-Aided Production If we want students to learn to write, AI tools shouldn’t have much of a role. If we don’t think students need to learn to write anymore, I’m not sure what we’re doing here.

Pretty much every article I read about "integrating" AI into the writing classroom brings me back to the conclusion I work through here: We should teach writing, not document production. www.insidehighered.com/opinion/colu...

3 months ago 93 33 3 4

Slop man is mad

3 months ago 173 23 4 0