
Posts by Greg Palermo

I know this isn’t a new thing, but it’s a silly assumption that faculty and students can just pivot to “remote learning” when campus is closed for severe weather.

2 months ago 1 0 1 0

When we say "explicitly neo-Nazi" we mean channels that fly swastikas and praise Hitler. That's the zone of internet where DHS is apparently finding inspiration for their X posts. It's ultra unlikely they would have ever encountered that song if they weren't in that space, imo.

3 months ago 2101 733 9 15
FCC revises Verizon phone unlocking rules after significant fraud issues The Federal Communications Commission revised a long-standing rule that required Verizon Communications to unlock its mobile phones 60 days after activation, which it said is costing the telecommunica...

Verizon and the corrupt Trump FCC are killing rules that make it easier to switch carriers via fully unlocked phones.

The decision is based on a lie that adhering to these public interest provisions is increasing "fraud," a lie Reuters is happy to parrot in its headline

journalisms!

3 months ago 198 82 3 7

When did reality become slop too?

3 months ago 2 0 0 0

Something else that really struck me in this report is this paragraph. "AI is doing things for students that they used to enjoy."

3 months ago 138 76 1 2

There’s at least one battle that I hope saturated fat wins.

3 months ago 31 0 0 0

This is a widespread problem in legal academia. It annoys the hell out of me. Thankfully, unlike other scholars, I have seen through the bullshit and have a novel solution to the problem. In this article ...

3 months ago 73 7 2 0

Trump: the fake news say he wants election canceled

Trump, literally two fucking seconds before: they should cancel the election

3 months ago 1174 303 60 9
Who’s who at X, the deepfake porn site formerly known as Twitter A look inside Elon Musk’s big tent

There we go. That there’s a headline.

3 months ago 13554 3570 156 193

We've used *Groq* for things, and I *really* want to put in a footnote every time that says "No, not that one."

3 months ago 0 0 0 0

Relatedly, today I was editing a wonderful essay for a forum I’m curating on AI in which the author expertly distinguishes between commercial GenAI and truly fantastic uses of other forms of AI for digital humanities, for an audience of not-DH people. More of that please.

3 months ago 61 6 2 2

My MiL asked me to help her with her Facebook app recently and it was appallingly full of scams, propaganda and AI bullshit, and nothing at all about the people she actually knew and followed. I've kept my Facebook locked down for so long I forgot how bad it was if you're not tech-savvy.

3 months ago 1050 115 28 8
English Majors at Work English Majors at Work: Career and Life Pathways details the professional superpowers—the many marketable skills—gained from studying literature, creative writing, film, and popular culture. It prepar...

Very excited about this first online sighting of ENGLISH MAJORS AT WORK: CAREER AND LIFE PATHWAYS:

www.rutgersuniversitypress.org/english-majo...

3 months ago 37 12 1 2
Post image

Trump isn't winning, so don't act like he's winning.

3 months ago 63 8 3 0
Post image

From the NYT today, 2 charts on the brokenness of American politics:

3 months ago 385 139 16 11

Seriously. Does no one realize that it literally means nothing, while the other ones do? It’s like they assume it has a meaning they aren’t party to or something, which is in turn the most embarrassing “old” thing to watch.

3 months ago 0 0 0 0
The Campus Crisis Toolkit

Good news! The full table of contents for THE CAMPUS CRISIS TOOLKIT, edited by @thetattooedprof.bsky.social and Lisa Di Bartolommeo, is now available on the @sunypress.bsky.social website: sunypress.edu/Books/T/The-.... Follow the link or see next post for screenshots. 🤗

6 months ago 82 45 4 8
Flu Cases Climb to Highest Levels in New York City in a Decade

“New York City’s syndromic surveillance system, which collects information about every patient who visits an emergency room, reported 9,857 visits for “influenza-like illness” last week. That was higher than in the worst weeks of the 2017-18 or 2024-25 flu seasons, both ranked as “high severity”.”

3 months ago 260 117 1 23
Andrew Kadel @DrewKadel@social.coop

My daughter, who's had a degree in computer science for 25 years, posted this about ChatGPT on Facebook. It's the best description I've seen.

Something that seems fundamental to me about ChatGPT, which gets lost over and over again:

When you enter text into it, you're asking "What would a response to this sound like?"

If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!

But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it *is* doing something else.

It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.


The only thing ChatGPT ever does.
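The behavior the quote describes, emitting whatever plausibly comes next with no model of truth behind it, can be sketched as a toy bigram generator. This is an illustrative sketch only, not how ChatGPT is actually implemented; the corpus, table, and function names are all made up for the example:

```python
import random

# Toy "what would a response sound like?" generator (illustrative only).
# It knows no facts; it only counts which word tends to follow which.
corpus = "the model predicts the next word and the next word again".split()

# Build a bigram continuation table from the corpus.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def continue_text(start, length=5, seed=0):
    """Emit whatever plausibly comes next -- nothing more."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:
            break  # no known continuation; the real thing never stops this easily
        out.append(rng.choice(choices))
    return " ".join(out)
```

Every continuation it produces is locally plausible by construction, and that is the entire criterion: nothing in the loop checks whether the output is true.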

8 months ago 3546 1489 43 50
Now, though, we have something that was simply not possible a few years ago: mechanically perfect prose with complex layers of imprecise ideas. Beautifully written cues of mechanical correctness can now hide malformed ideas. If we expect fully formed ideas when encountering mechanically correct prose, there is a subtle expectation that there are, indeed, fully formed ideas already present in the text. It’s like eating a beautiful mass-produced store-bought cookie: pleasing aesthetics hide cheap ingredients. Reading all of this synthetic, AI-driven text could actually be bad for us, cognitively and physically.

My use of “malformed” implies that these tools are making mechanically correct prose awash in weirdly imprecise ways that, as a writing teacher and writer, I think needs a hatchet. I mean this almost literally: I end up as a reader hacking my way through the words and sentences. As readers, we have to work too hard with AI writing. You end up hatcheting your way through the various buzzwords or how they fit together with logical connectors. And in doing so, as a reader, you end up overloading your mind. Let me take you through what I mean.

To give you a better sense of what I mean, let’s contrast firm reading with a type of reading perhaps more familiar: close reading. Close reading, in the literary sense, represents a deep engagement with the source material. When you close read, you are actively engaged with a passage, often down to the level of sentences or even words. We close read Hamlet’s soliloquy or Shelley’s “Ozymandias.” Yet, the reason we can close read in the first place is that we assume there is worthwhile meaning already there. Close reading privileges something thoughtful lurking beneath the surface. A reader needs to slow down, to get “close,” to find it.

With firm reading, we instead ask, “Is there any meaning at all in this text?” Firm reading wonders whether anything is there in the source material, conceptually or mechanically. Firm reading, cynically perhaps, is thus a disposition toward disbelief. If close reading asks, “What are the possible interpretations here?” then firm reading asks, “Is interpretation possible?”


Far and away my favorite writer and thinker right now on AI writing is @johnrgallagher.bsky.social

3 months ago 30 5 1 0
Journal of Interactive Technology and Pedagogy, no. 27 | Journal of Interactive Technology and Pedagogy | Manifold @CUNY <h3>Edited by Patricia Belen, Stefano Morello, Gregory J. Palermo, Danica Savonick, and Brandon Walsh</h3> <p>“More students in a single classroom; fewer instructors to engage them. Extravagant AI co...

Very excited to share this! The new special issue of @jitp.bsky.social is on Minimalist DH Pedagogy. Co-edited by @danicasavonick.bsky.social @palermog.bsky.social @veritas44.bsky.social Patricia Belen and me. LOTS of great stuff in here.

cuny.manifoldapp.org/projects/jit...

3 months ago 36 16 1 7

Why would we want to OWN any copies of scrolls, honey, the Library of Alexandria has plenty!

4 months ago 134 27 2 0

This study shows that using poems to jailbreak LLMs is... super effective? What the heck.

5 months ago 301 99 15 38
I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking. "Students are not just undermining their ability to learn, but to someday lead."

To “my students and to anyone who might listen, I say: Don’t surrender to AI your ability to read, write and think when others once risked their lives and died for the freedom to do so.”

www.huffpost.com/entry/histor...

5 months ago 1719 669 37 121

On AI use by university students

5 months ago 8 5 2 0

WE WON. I am *begging* you to take note of who did this. *Not* UCLA admin—they’re still scuttling around behind closed doors, attempting to appease—but FACULTY AND STAFF, led by AAUP.

5 months ago 3914 1077 22 21
As one Massachusetts school administrator recently said: this moment with AI is remarkably like the moment when we were introduced to asbestos. Yes, it had some remarkably promising characteristics – fireproofing! – and had some real utility in science, research, and industrial applications. But a profit-driven industry bullied us into inserting it everywhere: into our homes and schools and public spaces, before we really understood the risks. This resulted in decades, if not centuries, of illness, injuries, deaths, and the astronomical financial burden of trying to remove the stuff.

As you, the leaders and policymakers in our schools, craft an AI policy for our district, we the undersigned call on you to:

1. Ban AI tools from the classroom, protect our students and teachers from de-skilling, and allow them the space and time to engage in assignments themselves.

2. Resist any direct financial relationship or contracts with AI providers, as well as the “training” they might offer.

3. Provide a digital literacy curriculum to help students navigate the current digital landscape, and promote critical engagement with technology.

4. Guarantee that anywhere generative AI has already entered our classrooms or curriculum, an opt-out will allow students and teachers to refuse the use of these products at no risk to their grades, progress or employment.


Love to see community action against this AI nonsense! neighborhoodview.org/2025/11/13/d...

5 months ago 1098 462 16 40
I stared at my terminal facing those red error messages that I hate to see. An AWS [Amazon Web Services] error glared back at me. I didn’t want to figure it out without AI’s help. After 12 years of coding, I’d somehow become worse at my own craft. And this isn’t hyperbole—this is the new reality for software developers.

Namanyay Goel (2025, n.p.)

To show how serious the situation has become, one need only think about our last round of marking essays by AI undergraduate students. What jumps out of the page, for us, is something that contradicts the rhetoric our colleagues promote, namely, it is evident that students need more essay work assigned to them, not less (Kosmyna et al. 2025). Almost every essay was poor on some dimension that does not befit students in their final years of undergraduate study: the writing is often superficial, the language does not reflect students’ stage and knowledge, citations are frequently misused, and (most shockingly because it is so easy), the reference style is not applied correctly. This means that the constellation of skills required to write a good academic essay has not been nurtured enough or has atrophied. What this means is also that regardless of factual LLM use by the students, their ability to write essays is on the floor, and not, as many seem to claim, at ceiling where one cannot differentiate a good essay from a plagiarised or otherwise dishonest attempt at an essay. Importantly, the training of writing skills should be done in the context of critical reckoning with the norms and pressures surrounding the work expected of students (i.e. high study load, so-called student excellence, financial pressure to graduate, etc.).

In this context, it is also important to be wary of arguments that wrongly position LLMs as making education more democratic, accessible, and equitable by removing language barriers, removing unequal access to mentorship, and increasing diversity, equity and inclusion in…


This is what LLMs reduce academics to: rehashing basic research skills even with final-year students. It's honestly heartbreaking. It's not just random Bsky people; we see it at work all the time.

See section 3.7 here: doi.org/10.5281/zeno...

5 months ago 38 6 1 0

🤦🏻‍♂️

5 months ago 0 0 1 0

You know that you can stop posting through this and admit a mistake right?

5 months ago 0 0 0 0