Here is a conversation I had with some awesome people, about what the human advantage will be (if anything) going forward.
Posts by Jason Gulya
Here's a question I think about a lot.
How do we teach process without policing process?
(I think there are quite a few answers, in post-process theory and Writing Studies. I think the big question is how we expand/adapt those answers.)
Philosopher Donald Schön distinguished between reflection in action and reflection on action.
REFLECTION IN ACTION takes place during the creative process.
REFLECTION ON ACTION takes place after the creative process.
It doesn’t seem difficult, to me, to relate this to The Age of AI.
I think we’re going to have a really difficult time teaching process-creation, deep thinking, and metacognition if we don’t seriously commit to grade reform.
I was interviewed for this Chronicle of Higher Education report on AI and assessment:
lnkd.in/eXrN-FHi
I talk about my nagging concern that we’ll use AI to double down on disengaged, “schoolish” education.
It’s great to see tech bosses espouse the value of things like “Renaissance learning.”
But I do want to emphasize that (1) the Liberal Arts have a long and rich history of doing exactly this kind of thing and (2) we are being gutted at an alarming rate.
Right!
What happens when we encourage students to create processes not only with the end product in mind, but also with an eye to whether that process is helping them develop skills in a cognitively healthy way?
This question links process-creation to metacognition.
How do we encourage our students to create cognitively healthy processes for creating in an environment increasingly dominated by hyper-efficiency, convenience, and disconnection? How do we empower our students to develop and iterate those very kinds of processes?
I’d love to see more focus on encouraging cognitive fitness and linking that to the project of teaching AI literacy/fluency/awareness.
Deciding whether and how we should engage with AI to produce something involves not only a great deal of metacognitive awareness, but decent cognitive fitness.
I'm all about process!
I understand the attraction to AI Graders, in some ways.
But I think using them gives far too much away.
I think they'll hurt our relationships with students and will make learning even more transactional.
Data brokers buy up huge amounts of information from cell phones and browsers to sell for targeted advertising. But the government, including ICE, also buys the data. n.pr/4t5Y8GQ
Here’s my article in Chronicle of Higher Ed, on agentic AI and how companies are using this technology to create Transactional Education 2.0.
It focuses on the Einstein app that came and went in a hurry.
But the implications are much larger.
www.chronicle.com/article/will...
I totally get why people are moving to in-person, observed assessment.
But I think we need to be very careful about design for UDL and accessibility.
Check out this post from Sarah Silverman, an expert on UDL.
I continue to be fascinated/worried about the marketing of AI as "first-stab" technology.
But I do worry about what happens when that approach becomes the default, and we get so used to an AI-first, human-second collaborative model.
My virtual keynote (“Are Traditional Grades Making It Harder for Education to Adapt to AI?”) has gotten about 24k views in a couple of weeks.
I’m hoping it struck a chord.
m.youtube.com/watch?v=uDCL...
I'm glad it's not just me! Honestly, I've played with those infographics a few times, and I just don't get it.
Am I the only one who doesn’t find AI-generated graphics useful?
Every time, I have the same impression: “wow, that’s a really noisy image…”
I’ve never had an AI-generated graphic clear something up or simplify an explanation for me.
They mostly just hurt my head.
I almost always ask my students to self-assess their work before I weigh in.
Doing so makes my comments so much better.
Because I can ground my comments not only in the final process/product, but in their knowledge of that process/product.
——
Image: a photo of the book I’m reading now.
Here’s my worry about process-oriented teaching.
I worry that we’ll bring out students’ processes, so that we can measure them and evaluate them (and thus, flatten them out).
I worry that we’ll make the same mistake that we made with products, using process-teaching as a way to demand compliance.
I’m both laughing and crying.
Can you use gen-AI responsibly, if the tools themselves were built through irresponsible methods?
Asking for a friend.
The ongoing integration of AI and similar technologies into everyday life does not mean that every classroom needs to use AI extensively.
In fact, cultivating AI-free learning environments could make students (and us) more aware of what’s actually going on in the world and our place within it.
Same here!
I hadn’t come across it before. Thanks, Brett!
I think AI-free spaces will continue to be powerful for learning.
But…
I think they’ll have to be consensual spaces, where students opt into, create, and maintain the AI-free space.
Because I’m not sure if an imposed AI-free space will be viable for much longer (if it even is now).
Right on!
Exactly.
One of the things I really like about some forms of alternative grading is that it's iterative by design.
It's one reason why I really like "edit to mastery" (though I don't love the word mastery here) as a model form of assignment.
Thanks, @liznorell.bsky.social!!