
Posts by Nigel Caplan

The humanities are leading the way - rejecting "bite-sized" so-called English curricula (if you see "pre-AP English" on your 9th grader's schedule - RUN and SHOUT) and expanding the canon. But the forces are arrayed against us - publishers, ed tech corps, and the Taylorized massification of ed.

2 days ago 1 0 0 0
Opinion | You Can’t Game Your Way to a Real Education

This is almost right IMO. Yes, we need to put education on a restrictive tech diet of intentionally selected ethical platforms and replace screens with human interaction. But we can't do so without recognizing the pernicious forces driving ed's tech addiction. www.nytimes.com/2026/04/19/o...

2 days ago 1 0 1 0

Ways to describe 21 hrs: barely a day / about twice the running length of the LOTR extended cut movies / one and a half performances of Wagner's Ring Cycle / 5-6 rounds of golf / a tenth of a historic space mission / not as long as a Yom Kippur fast / one and a bit day shifts at The Pitt / a marathon.

1 week ago 0 0 0 0

One of the great things about humans writing human language is the active choice you have to frame readers' responses among, say, they talked for 21 hours / a day / one day / less than a day / a grueling marathon session (a trained teen can run a marathon in 4 hours). The more you know.

1 week ago 0 0 0 0

Some journals in my field do require these declarations but I would like to see them distributed with the abstract so I can decline to provide free labor to them in the first place.

1 week ago 0 0 0 0

I am going to insist on seeing these "AI declarations" before agreeing to review in future. Save everyone wasted time. The article I reviewed this week that had been "edited" using AI slop was abysmal.

1 week ago 14 4 1 0

Animal Parm

1 month ago 2 1 0 0

Exactly. Now do "chromebooks"

2 months ago 2 0 0 0
"Wlimington's spelling bee content registration opens"

"Wlimington's spelling bee content registration opens"

The (Delaware) News Journal prints a spelling error in a headline about ...

2 months ago 1 0 0 0
Opinion | Students Are Skipping the Hardest Part of Growing Up

Exactly. And bonus points for "Cyrano as service". This is why we need enforced limits and strong policies that do not acquiesce to big tech. And why I am unimpressed by the academic trend of "look at the AI tool I made which is ok really but no one will ever use"

www.nytimes.com/2026/01/30/o...

2 months ago 0 0 0 0

It was not

2 months ago 0 0 0 0
Students Are Finding New Ways to Cheat on the SAT

The only surprising part of this article is the correction at the end, which is truly a work of art

www.nytimes.com/2026/01/28/u...

2 months ago 1 0 0 0

Bless you, graduate applications systems that only require an uploaded letter of recommendation without asking a bunch of pointless ranking questions. #academicsky

2 months ago 10 0 0 0
The risks of AI in schools outweigh the benefits, report says A new report warns that AI poses a serious threat to children's cognitive development and emotional well-being.

This is incoherent hype. They find that "AI" products diminish critical thinking, knowledge, and creativity; increase inequity; and threaten social and emotional development. So the solution is ... make more of them and force "AI literacy" on everyone? www.npr.org/2026/01/14/n...

3 months ago 3 1 0 0

(b) privacy has nothing to do with language acquisition. The solution is, as always GOOD TEACHING: structured pair and group work, teachers trained to work with MLs, and most importantly, sufficient teachers with manageable workloads. Not "AI". Plus "AI offers privacy"? NPR has read the news, right?

3 months ago 0 0 0 0

Forgive me for being an actual ESL and SLA teacher, but: (a) "adjusting complexity" might not be helpful - our goal is to give MLs access to grade-level content; these products introduce errors and misrepresent material; and only a trained expert has the skill to simplify without dumbing-down

3 months ago 3 1 1 0

I got one paragraph in to find: "Teachers surveyed for the report said AI can be useful ... for students learning a second language. For example, AI can adjust the complexity of a passage depending on the reader's skill, and it offers privacy for students who struggle in large-group settings."

3 months ago 2 0 1 0

Grok generated harmful images - or did users prompt it to and coders give it the capacity to? Did the chatbot apologize or the billionaire owner? Can computer code take responsibility, take action, and make restitution? Or do these require human actors? That's critical thinking - asking WHO. /end

3 months ago 1 0 0 0

or person logically be the Agent or Sayer in the clause? Can a piece of software say/think/act? Does the AI product generate images or does a human Agent prompt it to do so? Who are these human or corporate Agents and Sayers? Then edit your writing accordingly. Eg "ChatGPT told me ..." - did it? 2/3

3 months ago 0 0 1 0

So much would be gained from teaching #sfl rather than structuralist grammar (subject, object, passive voice). The Participant responsible for an action verb is the Agent. The Participant responsible for a saying/thinking verb (verbal/mental process) is the Sayer. So ask yourself: can this thing 1/3

3 months ago 0 0 1 0
Grok turns off image generator for most users after outcry over sexualised AI imagery Editing function to be limited to paying subscribers after X threatened with fines and regulatory action

Bad writing from the Graudian. "Grok" can't be the agent of the verb "turn off" unless the AI product has the capacity to disable its own functions. It's the CORPORATION behind it which has the CHOICE whether to turn "features" like this on or off. Ascribe agency. www.theguardian.com/technology/2...

3 months ago 2 0 0 0

Nope. "Grok" doesn't "say" shit. Actually, it does say shit. That's all it can. But it certainly can't be trusted to provide an accurate account of corporate policy. Ask the bleedin' company for confirmation. That's literally your job.

3 months ago 0 0 0 0

Every journalist and editor who prints "[AI product] says/claims/thinks" should be forced to take a writing class or reassigned to the cooking section, or something. No not cooking, they'll print slop recipes. FFS, it's not hard. AI generators make shit up. Ask the *company* what it's doing.

3 months ago 1 0 0 0

I'm reading this while I bitch the pot for my morning scandal water

4 months ago 26 3 0 0

Ten archaic English slang phrases…

10. Got the morbs (sad)
9. Bumpsy (drunk)
8. Whittled as a penguin (very drunk)
7. Scandal water (tea)
6. Bitch the pot (pour the tea)
5. Sauce-box (mouth)
4. Cupid’s kettle drums (breasts)
3. Dash my wig (OMG)
2. Poked up (embarrassed)
1. Not up to dick (unwell)

4 months ago 302 90 20 59

Conversely, look for drops in apps and yields at the AI-forward schools. I wouldn't pay for my child to go to Ohio State or Purdue now. No chance.

4 months ago 2 0 0 0
Opinion | We Owe It to College Students to Create Tech-Free Spaces

Again, the first university to do this will absolutely clean up. I hope it's mine. What do you say, Delaware?

www.nytimes.com/2025/12/19/o...

4 months ago 4 0 1 0

This is of course dumb and horrifying and a sad reflection of our inability to regulate and ban actual guns. But the irony of "AI" seeing arts, culture, and human experience as a threat is pretty spectacular. I doubt WaPo sees it.

4 months ago 2 0 0 0

best new student coinage of the semester: "discourse sematintics" - Tintin aux profs d'Anglais?

4 months ago 0 0 0 0

Yes, there are problems, but there are already ways to filter applications with simple algorithms (come on, we know they're doing it). The goal is obvious - it's not saving time, it's reducing staff. Students (or those high priced consultants) are going to start writing for the "AI" not humans. Ugh.

4 months ago 0 0 0 0