moby-dick, his eyes enormous: from hell's heart you STAB at moby? for hate's sake you spit your last BREATH at moby? oh! oh! the great shroud of the sea for ahab! the great shroud of the sea rolling on as it rolled five thousand years ago!!!!
Posts by Phil Choong
#TalkAboutHumanities
We need scholars across the humanities, because these are the fields where we study what it is to be human, to inhabit different identities, and to connect with each other, to be human together.
youtu.be/T7Lc6QNxolQ...
Appeal to romance readers! My gift to Romancelandia is this:
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms is orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
In my opinion, the main task of exploring propaganda is not demonstrating that it works; it's convincing people who are assured of their own intelligence & free will that it could work on people like them
Games are a unique art form. They are direct instructions, with an audience primed to follow them. They are unlike music. Unlike graphic arts. Unlike fiction.
Game design is about how instructions focus us, inspire us, outrage us, motivate us, reveal us, help us connect to each other.
and it is still infinity times better to be the kind of person who loves monarch butterflies, and the mosaic tilework on an ancient mosque, and kids whose names you'll never know running around a field kicking a soccer ball, than the kind who views them with indifference, but it is also painful
“To Asia, Africa and Oceania, we are looking back at you. We hear you can look up and see the moon right now. We see you, too. Ultimately, we will always choose Earth. We will always choose each other.” Artemis II Astronaut Christina Koch 🌕🌎
Red States: We're going to cut all the programs that don't support our theology
Blue States: We're going to cut the same programs, but because they don't support the business school
this is art
This is an essential read, and the closing on credentialing versus engagement and "centering the people for whom the stakes are highest" in particular. (H/T @peachfleurr.bsky.social @robin.berjon.com)
wouldn't it be funny if EVERYONE blocked this Attie AI account @ bsky.app/profile/atti... before they could do anything with it
As one of the authors of this, I thought I’d share a bit about how we got here and how you can do what we’ve done at your institution.
Over the two years I’ve been at the University of Edinburgh, I’ve grown increasingly concerned by fellow academics uncritically using LLMs, especially OpenAI’s ChatGPT.
The push to make syllabi “public” is not about transparency. It’s about creating conditions for outside intimidation and manufacturing pretexts for censorship.
One thing that AI evangelists, particularly legal AI evangelists, seem to assume is that the thinking and analysis is quick, and the writing is just tedious busywork that slows us down.
But the writing IS the thinking and analysis. You work out the thinking and analysis by writing it down.
this is so perfect as a phrasing
"AI asks that you buy into the idea that more data means being closer to The Truth."
I also explain this and take pains to really hammer home this is NOT EVEN TRUE in science, more data is NOT a better theory or account!
I wrote about AI --
but, really, about the close proximity to psychosis it brings us all to... a trillion dollar project to detach us all from reality, one way or another...
disjunctionsmag.com/articles/end...
Also: being "in the loop" in the context of evaluating output seems like quality control on an assembly line: it only works when you know what the output is supposed to look like. But with writing, you so often don't know whether that is what you would have thought/said if you had done the thinking.
"BE IT THEREFORE RESOLVED that we affirm the rights of students and teachers to refuse to sign up for, prompt, or otherwise use generative AI in the writing classroom."
College writing teachers have spoken, y'all.
The CCCC resolution affirming students' and teachers' right to refuse generative AI in the writing classroom passed by an overwhelming majority at the #4C26 Annual Business Meeting this past Friday, March 6.
Link to the full resolution below.
MLA, ACLS, and AHA lawsuit reveals use of ChatGPT in illegal termination of grants by DOGE. Motion for summary judgment asserts violations of the First Amendment; violations of the Equal Protection Clause; and violation of the separation of powers. mla.org/NEH-Lawsuit
The MLA, @acls1919.bsky.social, and @historians.org have filed a motion for summary judgment in our lawsuit to restore the NEH. Discovery documents reveal that DOGE rather than the acting chair led grant terminations and targeted grants using ChatGPT. More at mla.org/NEH-Lawsuit
anthropic: I have made AI
tech nerds: you fucked up a perfectly good computer is what you did. look at it. it's got anxiety
How I wept tonight.
“We can endure this, & be a guiding light through it, but only by recentering, by teaching citizens, not workers; power, not PowerPoint; aspiration, not apocalypse. Despair is how we lose. The classroom is where we battle it. All other battles flow from here.”
Good thread.
There are disabled people in the future.
The only reason you wouldn’t see them is if that society has gone full steam ahead on eugenics and they’re unable to live freely or be accommodated in public.
While reporting this, I had something happen that's never happened. A comms rep for one of the co's disputed my reporting and said what I was telling them was untrue because it was not in Grok, xAI's chatbot.
I was looking directly at the files. And this person was using AI to challenge the truth.
Pretty much every article I read about "integrating" AI into the writing classroom brings me back to the conclusion I work through here: We should teach writing, not document production. www.insidehighered.com/opinion/colu...
Slop man is mad