
Posts by Bud Hunt

Post image

Coherence matters, y’all.

1 month ago
Post image

This book is well worth a few moments of your time.

1 month ago
Post image

He’s not wrong.

1 month ago

"Is it real if it works? Is it real if I feel
less alone? . . ."

- from "Perhaps Mercy" by Qianxi Chelsea Hu, 17

in the NYT Learning Network collection of student work about AI. Worth your time.
 
full poem: static01.nyt.com/newsgraphics...
 
full collection: www.nytimes.com/2026/02/26/l...

1 month ago
Preview
AI chatbots operating in Colorado would have to take steps to protect kids, prevent suicides under bipartisan bill The measure represents the legislature’s latest attempt to address artificial intelligence as the technology becomes increasingly prevalent

The measure represents the Colorado legislature’s latest attempt to address artificial intelligence as the technology becomes increasingly prevalent coloradosun.com/2026/02/25/c... #copolitics #coleg

1 month ago
Preview
Einstein - AI Education Companion Einstein logs into Canvas and does your homework automatically. He has his own computer — he can watch lectures, read essays, write papers, and participate in discussions.

Any student who can be replaced by a computer should be, I guess.

companion.ai/einstein

#thisisfine

1 month ago

First Waymo ride today. The robot car yielded to the pedestrian delivery robot in the crosswalk while another robot car approached.

The future happened already. It’s just not evenly distributed.

I’m glad my robot waited its turn. That bodes well.

1 month ago

Done.

2 months ago
Preview
Stump I wasn’t looking for poetry when I posted the advertisement. I was looking for something to do with my body.

Wow. Just wow.

I wasn’t looking for a poem when I found this essay. Which is really a poem.

poets.org/text/stump

2 months ago
Post image

Seriously. Worth your time.

www.brookings.edu/articles/a-n...

3 months ago
Preview
A new direction for students in an AI world: Prosper, prepare, protect | Brookings This report explores the potential risks generative AI poses to students and outlines what we can do now to minimize them.

The Brookings report on AI and children is well worth your time. Still working through it, but I’m finding plenty of thoughtfulness.

If you are a technologist or an educational leader, please read the executive summary. Please.

www.brookings.edu/articles/a-n...

3 months ago
Preview
In Memoriam | America's Essential Data Highlighting examples of federal datasets that have been discontinued.

This, though, is.

Librarians and data nerds are doing the work.

essentialdata.us/in-memoriam....

4 months ago

This isn’t great.

4 months ago
Preview
The Sign as You Exit the Artist’s Colony Says “The Real World” Quiet is not silence. Silence is absolute like never and forever. Quiet invites attention to cicadas, the warbling vireo on the wire,

This poem is full of surprises.

poets.org/poem/sign-yo...

4 months ago

Snow!

4 months ago
Preview
Jon Stewart on the Perilous State of Late Night and Why America Fell for Donald Trump — The New Yorker Radio Hour The “Daily Show” host talks with David Remnick about his contract with Paramount Skydance, the government’s attack on political satire, and how our institutions got so weak.

“We’ve lost the ability to love people because we litmus test them at every point. . .” ibid

overcast.fm/+AA3KNhgbFXg...

5 months ago
Preview
Jon Stewart on the Perilous State of Late Night and Why America Fell for Donald Trump — The New Yorker Radio Hour The “Daily Show” host talks with David Remnick about his contract with Paramount Skydance, the government’s attack on political satire, and how our institutions got so weak.

“I love a good argument. But I also love grace.”

A fine framing of public discourse from Jon Stewart interviewed by David Remnick on the New Yorker Radio Hour.

overcast.fm/+AA3KNhgbFXg...

5 months ago

If somebody suggests you watch _The Life of Chuck_, you should listen to them.

6 months ago
Preview
An Open Letter to Teachers Here in my neck of the woods, it’s the weekend before the start of classes. At my house, life got frantic this week as my wife, a high school language arts teacher, returned to work. It’…

First day of school here.

A good moment for an oldie - my advice to teachers at the start of a new year. Still advice I would give. Especially now.

Happy new year!

budtheteacher.com/blog/2008/08...

8 months ago
Preview
AOL Will End Its Dial-Up Internet Service (Yes, It’s Still Operating) The company said the service, synonymous with the early days of the internet, will be discontinued on Sept. 30.

Huh.

www.nytimes.com/2025/08/11/b...

8 months ago
Preview
Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.

Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens. www.nytimes.com/2025/08/08/t...

8 months ago
Post image

The assumption that generative AI could be a "valuable partner" is unevidenced and the example activity is critical thinking work that could better be done in the absence of AI. It's thinking of something you COULD do with AI. Rather than what students SHOULD do to learn.

8 months ago
Help Sheet: Resisting AI Mania in Schools

K-12 educators are under increasing pressure to use—and have students use—a wide range of AI tools. (The term “AI” is used loosely here, just as it is by many purveyors and boosters.) Even those who envision benefits to schools of this fast-evolving category of tech should approach the well-funded AI-in-education campaign with skepticism and caution. Some of the primary arguments for teachers actively using AI tools and introducing students to AI as early as kindergarten, however, are questionable or fallacious. What follows are four of the most common arguments and rebuttals with links to sources. I have not attempted balance, in part because so much pro-AI messaging is out there and discussion of risks and costs is often minimized in favor of hope or resignation. -ALF

Argument: “Schools need to prepare students for the jobs of the future.”

● The skills employers seek haven’t changed much over the decades—and include a lot of “soft skills” like initiative, problem-solving, communication, and critical thinking.
● Early research is showing that using generative AI can degrade these key skills:
○ An MIT study showed adults using ChatGPT to help write an essay “had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’” Critically, “ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”
○ A business school found those who used AI tools often had worse critical thinking skills “mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores.”
○ Another study revealed those using “ChatGPT engaged less in metacognitive activities...For instance, learners in the AI group frequently looped back to ChatGPT for feedback rather than reflecting independently. This dependency not only undermines critical thinking but also risks long-term skill stagnati…


Argument: “AI is a tool, just like a calculator.”

● Calculators don’t provide factually wrong answers, but AI tools have. Last year, Google’s AI search returned, among other falsehoods, that cats have gone to the moon, that Barack Obama is Muslim, and that glue goes on pizza. Even though AI tools have improved and are expected to keep improving, children in schools shouldn’t be used as tech firms’ guinea pigs for undertested, unregulated products while AI firms engage elected officials in actively resisting regulation.
● Calculators don’t provide dangerous, even deadly feedback. In one study, a “chatbot recommended that a user, who said they were recovering from addiction, take a ‘small hit’ of methamphetamine” because, it said, it’s “‘what makes you able to do your job to the best of your ability.’” Users have received threatening messages from chatbots.
● Calculators don’t pose mental health risks because they aren’t potentially addictive or designed to encourage repeated use. They don’t flatter, direct, or manipulate. Chatbots have been designed this way—and this has led to dreadful mental health outcomes for some, including users in a New York Times report. Alleging a chatbot encouraged their teen to die by suicide, parents in Florida filed a lawsuit against its maker.
● Calculators don’t lie. Chatbots, however, have misled users. Writer Amanda Guinzburg shared screenshots of interactions with one that she asked to describe several of her essays. It spewed out invented material, showing the chatbot hadn’t actually accessed and processed the essays. After much prodding, it “admitted” it had only acted as though it had done that requested work, spit out mea culpas—and went on to invent or “lie” again.
● Calculators can’t be used to spread propaganda. AI tools, though, including those meant for schools, should worry us. Law professor Eric Muller’s back-and-forth with SchoolAI’s “Anne Frank” character showed his “helluva time trying to get her to say a bad word about Nazis.” In thi…


Argument: “AI won’t replace teachers, but it will save them time and improve their effectiveness.”

● Adding edtech does not necessarily save teachers time. A recent study found that learning management systems sold to schools over the past decade-plus as time-savers aren’t delivering on making teaching easier. Instead, the researchers found this tech (e.g. Google Classroom, Canvas) is often burdensome and contributes to burnout. As one teacher put it, it “just adds layers to tasks.”
● “Extra time” is rarely returned to teachers. AI proponents argue that if teachers use AI tools to grade, prepare lessons, or differentiate materials, they’ll have more time to work with students. But there are always new initiatives, duties, or committee assignments—the unpaid work districts rely on—to suck up that time. In a culture of austerity and with a USDOE that is cutting spending, teachers are likely to be assigned more students. When class sizes grow, students get less attention, and positions can be cut.
● AI can’t replace what teachers do, but that doesn’t mean teachers won’t be replaced. Schools are already doing it: Arizona approved a charter school in which students spend mornings working with AI and the role of teacher is reduced to “guide.” Edtech expert Neil Selwyn argues those in “industry and policy circles...hostile to the idea of expensively trained expert professional educators who have [tenure], pension rights and union protection... [welcome] AI replacement as a way of undermining the status of the professional teacher.”
● Tech firms have been selling schools on untested products for years. Technophilia has led to students being on screens for hours in school each week even when their phones are banned. Writer Jess Grose explains, “Companies never had to prove that devices or software, broadly speaking, helped students learn before those devices had wormed their way into America’s public schools.” AI products appear to be no different.
● Efficiency is not effectiveness. “…


Argument: “Students are already using AI, so we have to teach them ethical use.”

● If schools want ethical students, teach ethics. More students are using AI tools to cheat, an age-old problem the tools make much easier. This won’t be addressed by showing students how to use this minute’s AI—an argument implying students don’t know what plagiarism is (solved by teaching about plagiarism) or don’t understand academic integrity (solved by teaching and enforcing its bounds)—or that teachers create weak assignments or don’t convey purpose. The latter aren’t solved by attempting to redirect students motivated and able to cheat.
● Students can be educated on the ethics of AI without encouraging use of AI tools. They can be taught, as part of media literacy and social media safety programs, about AI’s potential and applications as well as how it can enable predation, perpetuate bias, and spread disinformation. They should be taught about the risks of AI and its various social, economic, and environmental costs. Giving a nod to these issues while integrating AI throughout schools sends a strong message: the schools don’t really care, and neither should students.
● Children can’t be expected to use AI responsibly when adults aren’t. Many pushing schools to embrace AI don’t know much about it. One example: Education Secretary Linda McMahon, who said kindergartners should be taught A1 (a steak sauce). The LA Times introduced a biased and likely politically motivated AI feature. The Chicago Sun-Times published a summer reading list including nonexistent books—yet teachers are told to use the same tools to do similar work. Educators using AI to cut corners can strike students as hypocritical.
● The many costs of AI call into question the possibility of ethical AI use. These include:
○ Energy - AI data centers need huge amounts of water as coolant as well as electricity, pulling these resources from their communities—which tend to be lower-income—straining the grid, and raising household cos…


I put together a 4-page doc for those wary of the rush to integrate AI in K-12 schools (though much applies beyond).

Four of the main arguments for teachers using AI tools & introducing kids to AI as early as kindergarten are addressed with rebuttals linked to sources.

9 months ago

The first:

9 months ago
Preview
Constitution Breakdown #1: Nikole Hannah-Jones — 99% Invisible This month, Roman and Elizabeth discuss the Preamble, alongside Nikole Hannah-Jones.

If you were thinking about rereading the Constitution with your book club, but you’re maybe in between book clubs, this podcast is for you. Highly recommend.

overcast.fm/+AAyIOyIrdZo

8 months ago

Kicking off the 2025 CASE Convention. Learning with Colorado colleagues and educational leaders.

8 months ago

I dunno. Maybe it’s not the best use of time to mock or parody or amplify the Coldplay video thing.

8 months ago
Post image

This morning’s beach read. Catching up on some back issues.

9 months ago

Getting to the fun faster, perhaps.

9 months ago

It feels like lots of what I’m seeing.

9 months ago