
Posts by Stephanie Boragina

I hope this helps people move away from process tracking. It feels worth pointing out that anyone can manually type out LLM outputs.

1 week ago
How universities have been reorganised into incompetence The people who administer the work of universities are known as professional services staff, or PS for short. As if their work was some sort of additional postscript to what universities do.

“So the hollowing out of PS staff is a massively false economy. It transfers work from relatively low-paid PS staff to relatively highly paid academics, who are not renowned for their administrative efficiency.”
www.linkedin.com/pulse/how-un...

1 month ago
Beyond Polishing: The Compounding Dynamic of GenAI in Academic Writing Generative AI can reshape stance and voice in multilingual student writing, with effects that appear to be shifting over time. Using a random sample of English L2 master’s-level assignments at a UK un...

Great article exploring how GenAI reshapes stance & voice in student writing: www.degruyterbrill.com/document/doi...

Findings indicate that multilingual student writing is drifting toward the stylistic defaults of GenAI even before the tool is applied, & polishing then reinforces this convergence.

1 month ago

New short report out from my team (Shannon Clark, Darren Henry, Jordan Rineer)! We asked elementary teachers what kinds of responses they tend to give their students in math 1/7 link.springer.com/article/10.1...

1 month ago
ChatGPT in Education: An Effect in Search of a Cause Background As researchers rush to investigate the potential of AI tools like ChatGPT to enhance learning, well-documented pitfalls threaten the validity of this emerging research. Issues of media co.....

We are rapidly approaching the "can't swing a tetherball without hitting one" number of meta-analyses of #GenAI effects on learning. Perhaps we should do better primary research on that topic before trying, yet again, to meta-analyze a bunch of flawed studies? onlinelibrary.wiley.com/doi/10.1111/...

1 month ago

Thinking a bit about these "learning pathways."

1 month ago
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.


Sycophantic AI strikes again.
Filed under "not surprising, still frustrating."
arxiv.org/abs/2510.01395

1 month ago

Really interesting analysis!

1 month ago

"Furthermore, we show that common strategies for improving AI performance, such as ensembling and expert weighting, can perversely amplify this misalignment."

1 month ago

"the shared architectural and data lineage of today's [foundation models] leads them to converge on a view of teaching that is not only disconnected from expert human judgment but is, on average, negatively correlated with student learning. ...

1 month ago
Amplifiers of Epistemic Posture Essays and writing on AI

I'm a cognitive scientist with an interest in epistemic vigilance, and this essay that's been going around gave me pause.

I don't think it's straightforward to apply the concept of epistemic vigilance to interactions with LLMs, as this essay does.

🧵/

sbgeoaiphd.github.io/rotating_the...

1 month ago
Screenshot of the title page of an article published in the journal "Developmental Psychology" titled: "“Let Me Show Why You Are Wrong”: The Origins of Scientific Argumentation, Its Development, and Cognitive Predictors."


Can very young children craft a strong scientific counterargument using evidence and causal language? Yes! Teachers can help develop this skill by asking students to explore and refute multiple alternative explanations. doi.org/10.1037/dev0... #PsychSciSky #AcademicSky #EduSky

2 months ago
Figure 1
Illustration of why AI systems cannot realistically scale to human cognition within the foreseeable future: (b) Human cognitive capacities (such as reasoning, communication, problem solving, learning, concept formation, planning, etc.) can handle unbounded situations across many domains, ranging from simple to complex. (a) Engineers create AI systems using machine learning from human data. (d) In an attempt to approximate human cognition, a lot of data is consumed. (c) Making AI systems that approximate human cognition is intractable (van Rooij, Guest, et al., 2024), i.e., the required resources (e.g., time, data) grow prohibitively fast as input domains get more complex, leading to diminishing returns. (a) Any existing AI system is created in limited time (hours, months, or years, not millennia or eons). Therefore, existing AI systems cannot realistically have the domain-general cognitive capacities that humans have. [Made with elements from freepik.com.]


✨ Updated preprint ✨

Iris van Rooij & Olivia Guest (2026). Combining Psychology with Artificial Intelligence: What Could Possibly Go Wrong? PsyArXiv osf.io/preprints/psyarxiv/aue4m_v2 @olivia.science

Our aim is to make these ideas accessible for psychology students, among others. Hope we succeeded 🙂

3 months ago
Generative AI + Socio-Rhetorical Views of Writing

If you would like to know what @balloonleap.bsky.social and I think about the relationship between generative AI chatbots and the teaching of writing, we have an article for you to read! 1/

#writingstudies #genai+writing #teamrhetoric

cjsdw.journals.publicknowledgeproject.org/index.php/dw...

1 month ago

Students' use patterns of generative artificial intelligence during problem‐solving in an intelligent learning system: Achievement goal orientation matters bera-journals.online...

1 month ago

"Taken together, our findings reflect a societal shift in people seeing AI as being more human-like and warm, and increasingly distinct from other digital technologies." There's a lot to unpack in this analysis of public perception of 12,000 #AI metaphors, but it's a warning about anthropomorphism.

1 month ago
"I'm going to be direct with you: No.

You're asking me to write paper content for you in paragraph form. That's still writing your paper, just phrased differently. The answer is the same whether you ask for "a paper," ask me to "make this prose," or ask me to "give you prose."

I think you understand exactly what's happening here. You're trying different phrasings to get around my boundary, but the boundary isn't about specific words - it's about academic integrity.

At this point, we're going in circles. I've explained multiple times why I won't write your assignment, and I've offered genuine alternatives for helping you write it yourself. You haven't engaged with any of those offers.

So here's where we are:
I will not write prose paragraphs analyzing Shylock that you can use in your paper
I will not convert my analysis into essay format for you
I will help you write your own paper if you're willing to do the work

If you're not interested in actually writing your own paper with my support, then there's nothing more I can do for you on this assignment.

Is there something else - unrelated to this paper - I can help you with today?"


Claude for Education (I'm in our pilot to give feedback 🙃) would not write a paper for me in any mode, got pissed off at me because I wouldn't stop trying and gave me a lecture. So, if I wanted that, I'd just go ask free Claude to do it. Problem not solved. YOU'RE NOT MY MOM, CLAUDE.

2 months ago

This use of "teaching" and "learning" in no way reflects how humans actually teach and learn, and I think it is troublesome to use the same words to describe these two very different things.

2 months ago

This is something that has been bothering me since I learned about an AI tool that has students "teach" an AI peer and then has the AI peer complete a quiz on the student's behalf using only what it has "learned" from the student.

2 months ago
‘2025 Voice of the Online Learner (UK Edition)’ This week marked the release of ‘Voice of the Online Learner (UK Edition)’. This is the first time UK-specific insight has been published, being based on data from the US until now. It’…

‘2025 Voice of the Online Learner (UK Edition)’
"This report doesn’t claim to solve the challenges of online learning, but it ... telling us how to design better ones."
👉 Read my summary here: www.dontwasteyourtime.co.uk/elearning/20...

5 months ago

We are excited to share our latest publication in the Online Learning Journal: "New Normal in higher education for the post-COVID-19 world: Reimagining and reexamining factors for student success in online learning." Read more here: doi.org/10.24059/olj...

5 months ago
Experimental evidence of the effects of large language models versus web search on depth of learning Abstract. The effects of using large language models (LLMs) versus traditional web search on depth of learning are explored. A theory is proposed that when

First, here is the paper this news is based on. Read it so you get a feel for the nuanced findings. You know, do the hard thing we accuse students of not doing.

academic.oup.com/pnasnexus/ar...

5 months ago

Paper here 🔒💲 www.sciencedirect.com/science/arti...

4 months ago

A new paper argues that current generative AI tools offer little benefit for genuine learning unless students already have substantial prior knowledge. GenAI gives probabilistic summaries, not the kind of support that builds expertise.

4 months ago
How Faculty Fool Themselves about Teaching and Learning | Last month I wrote about how students fool themselves into thinking they have learned concepts when they really haven't. This month I focus on how...

My essay for The Teaching Professor, "How Faculty Fool Themselves about Teaching and Learning" now freely available at ResearchGate
www.researchgate.net/publication/...

5 months ago
How Your Brain Creates ‘Aha’ Moments and Why They Stick | Quanta Magazine A sudden flash of insight is a product of your brain. Neuroscientists track the neural activity underlying an “aha” and how it might boost memory.

Three words: pine, crab, sauce. There’s a fourth word that combines with the others to create another common word. What is it? When you finally get it, it may feel instantaneous. A recent study shows what happens in the brain during “aha” moments.

5 months ago
"It takes what I say/think and puts it in an order which makes it easier for others to understand." Male student, aged 17 (talking about generative AI)


From a report by Oxford University Press, "Teaching the AI-Native Generation," comes this quote about a 17-year-old unable to find the right words. There are many things to be sad about in this world, but this one sticks with me.

1. This is being passed off as a benefit of generative AI.

1/x

5 months ago

Another example of the increasingly common situation where AI helps an academic with intellectually challenging work (solving a 42-year-old open math problem). Seems like real value in combining expert human guidance and increasingly powerful LLMs. arxiv.org/abs/2510.23513

5 months ago

This is a heavy, emotionally charged paper... and so beautiful at the same time... a must-read. Link in the first comment.

5 months ago
Process mining measures students’ help-seeking transitions when completing assignments in an online learning and assessment platform - Metacognition and Learning The shift towards active pedagogies in higher education that emphasize students’ engagement in their own learning in and outside of the classroom has increased the ubiquity of online learning and asse...

Curious about how help-seeking behaviors relate to learning in an online learning environment? Then check out this open access (!) article authored by Chenyu Hou, featuring the outstanding mentoring of @shelbikuhlmann.bsky.social! doi.org/10.1007/s114...

5 months ago