Imagine the following scenario. You are mentoring a student, and they come to you asking you to solve a coding problem. You help them, walking through the solution step by step. They then come back and ask you to solve another problem. And then another. Eventually, you might pause as you recognize that something is going wrong. You realize that your student isn’t learning how to code and is simply learning to rely on your help. You subsequently sit them down and talk about the value of persisting through challenges, of practicing new skills, and what it actually means to learn. This scenario highlights a fundamental aspect of human collaboration. Good collaborators optimize for long-term objectives (Bratman, 1992; Grosz & Kraus, 1996; Balcazar & Keys, 2014; Mattessich & Johnson, 2018). For example, a mentor encourages independent development by adjusting the type of help given and sometimes offering no help at all. In essence, the best collaborators maintain a balance between helping and fostering autonomy; they know when not to help (Koedinger & Aleven, 2007; Van de Pol et al., 2010; Soderstrom & Bjork, 2015).

Current AI assistants are a stark contrast to this dynamic. They never refuse to help (unless for safety reasons), and provide instant answers to almost any query, across domains ranging from writing to coding to tutoring (Brynjolfsson et al., 2025; Buçinca et al., 2024; OECD, 2026; Shapira et al., 2026). In this sense, AI systems are fundamentally short-term collaborators: extraordinarily helpful in the moment, but indifferent to what that help does to the person receiving it over time.
From the study -- this distinction between helpful human mentorship and collaboration vs. what chatbots provide to people by default is really striking.