I hope this helps people move away from process tracking. It feels worth pointing out that anyone can manually type out LLM outputs.
Posts by Stephanie Boragina
“So the hollowing out of PS staff is a massively false economy. It transfers work from relatively low-paid PS staff to relatively highly paid academics, who are not renowned for their administrative efficiency.”
www.linkedin.com/pulse/how-un...
Great article exploring how GenAI reshapes stance & voice in student writing: www.degruyterbrill.com/document/doi...
Findings indicate that multilingual student writing is drifting toward the stylistic defaults of GenAI even before the tool is applied, & polishing then reinforces this convergence.
New short report out from my team (Shannon Clark, Darren Henry, Jordan Rineer)! We asked elementary teachers what kinds of responses they tend to give their students in math 1/7 link.springer.com/article/10.1...
We are rapidly approaching the "can't swing a tetherball without hitting one" number of meta-analyses of #GenAI effects on learning. Perhaps we should do better primary research on that topic before trying, yet again, to meta-analyze a bunch of flawed studies? onlinelibrary.wiley.com/doi/10.1111/...
Thinking a bit about these "learning pathways."
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
Sycophantic AI strikes again.
Filed under "not surprising, still frustrating."
arxiv.org/abs/2510.01395
Really interesting analysis!
"the shared architectural and data lineage of today's [foundation models] leads them to converge on a view of teaching that is not only disconnected from expert human judgment but is, on average, negatively correlated with student learning. ...
Furthermore, we show that common strategies for improving AI performance, such as ensembling and expert weighting, can perversely amplify this misalignment."
I'm a cognitive scientist with an interest in epistemic vigilance, and this essay that's been going around gave me pause.
I don't think it's straightforward to apply the concept of epistemic vigilance to interactions with LLMs, as this essay does.
🧵/
sbgeoaiphd.github.io/rotating_the...
Screenshot of the title page of an article published in the journal "Developmental Psychology" titled: "'Let Me Show Why You Are Wrong': The Origins of Scientific Argumentation, Its Development, and Cognitive Predictors."
Can very young children craft a strong scientific counterargument using evidence and causal language? Yes! Teachers can help develop this skill by asking students to explore and refute multiple alternative explanations. doi.org/10.1037/dev0... #PsychSciSky #AcademicSky #EduSky
Figure 1. Illustration of why AI systems cannot realistically scale to human cognition within the foreseeable future: (b) Human cognitive capacities (such as reasoning, communication, problem solving, learning, concept formation, planning, etc.) can handle unbounded situations across many domains, ranging from simple to complex. (a) Engineers create AI systems using machine learning from human data. (d) In an attempt to approximate human cognition, a lot of data is consumed. (c) Making AI systems that approximate human cognition is intractable (van Rooij, Guest, et al., 2024), i.e., the required resources (e.g., time, data) grow prohibitively fast as input domains get more complex, leading to diminishing returns. (e) Any existing AI system is created in limited time (hours, months, or years, not millennia or eons). Therefore, existing AI systems cannot realistically have the domain-general cognitive capacities that humans have. [Made with elements from freepik.com.]
✨ Updated preprint ✨
Iris van Rooij & Olivia Guest (2026). Combining Psychology with Artificial Intelligence: What Could Possibly Go Wrong? PsyArXiv osf.io/preprints/psyarxiv/aue4m_v2 @olivia.science
Our aim is to make these ideas accessible to, among others, psychology students. Hope we succeeded 🙂
If you would like to know what @balloonleap.bsky.social and I think about the relationship between generative AI chatbots and the teaching of writing, we have an article for you to read! 1/
#writingstudies #genai+writing #teamrhetoric
cjsdw.journals.publicknowledgeproject.org/index.php/dw...
Students' use patterns of generative artificial intelligence during problem‐solving in an intelligent learning system: Achievement goal orientation matters bera-journals.online...
"Taken together, our findings reflect a societal shift in people seeing AI as being more human-like and warm, and increasingly distinct from other digital technologies." There's a lot to unpack in this analysis of public perception of 12,000 #AI metaphors, but it's a warning about anthropomorphism.
"I'm going to be direct with you: No. You're asking me to write paper content for you in paragraph form. That's still writing your paper, just phrased differently. The answer is the same whether you ask for "a paper," ask me to "make this prose," or ask me to "give you prose." I think you understand exactly what's happening here. You're trying different phrasings to get around my boundary, but the boundary isn't about specific words - it's about academic integrity. At this point, we're going in circles. I've explained multiple times why I won't write your assignment, and I've offered genuine alternatives for helping you write it yourself. You haven't engaged with any of those offers. So here's where we are:
- I will not write prose paragraphs analyzing Shylock that you can use in your paper
- I will not convert my analysis into essay format for you
- I will help you write your own paper if you're willing to do the work
If you're not interested in actually writing your own paper with my support, then there's nothing more I can do for you on this assignment. Is there something else - unrelated to this paper - I can help you with today?"
Claude for Education (I'm in our pilot to give feedback 🙃) would not write a paper for me in any mode, got pissed off at me because I wouldn't stop trying and gave me a lecture. So, if I wanted that, I'd just go ask free Claude to do it. Problem not solved. YOU'RE NOT MY MOM, CLAUDE.
This is something that has been bothering me since I learned about an AI tool that has students "teach" an AI peer and then has the AI peer complete a quiz on the student's behalf using only what it has "learned" from the student.
This use of teaching and learning in no way reflects how humans actually teach and learn, and I think it is troublesome to use the same words to describe these two very different things.
‘2025 Voice of the Online Learner (UK Edition)’
"This report doesn’t claim to solve the challenges of online learning, but it ... telling us how to design better ones."
👉 Read my summary here: www.dontwasteyourtime.co.uk/elearning/20...
We are excited to share our latest publication in the Online Learning Journal: "New Normal in higher education for the post-COVID-19 world: Reimagining and reexamining factors for student success in online learning." Read more here: doi.org/10.24059/olj...
First, here is the paper this news is based on. Read it so you get a feel for the nuanced findings. You know, do the hard thing we accuse students of not doing.
academic.oup.com/pnasnexus/ar...
Paper here 🔒💲 www.sciencedirect.com/science/arti...
A new paper argues that current generative AI tools offer little benefit for genuine learning unless students already have substantial prior knowledge. GenAI gives probabilistic summaries, not the kind of support that builds expertise.
My essay for The Teaching Professor, "How Faculty Fool Themselves about Teaching and Learning," is now freely available at ResearchGate
www.researchgate.net/publication/...
Three words: pine, crab, sauce. There’s a fourth word that combines with the others to create another common word. What is it? When you finally get it, it may feel instantaneous. A recent study shows what happens in the brain during “aha” moments.
"It takes what I say/think and puts it in an order which makes it easier for others to understand." Male student, aged 17 (talking about generative AI)
From a report by Oxford University Press, "Teaching the AI-Native Generation," comes this quote about a 17-year-old unable to find the right words. There are many things to be sad about in this world, but this one sticks with me.
1. This is being passed off as a benefit of generative AI.
1/x
Another example of the increasingly common situation where AI helps an academic with intellectually challenging work (solving a 42-year-old open math problem). Seems like real value in combining expert human guidance with increasingly powerful LLMs. arxiv.org/abs/2510.23513
This is a heavy, emotionally charged paper... and so beautiful at the same time... a must-read. Link in the first comment.