#cognitivesurrender

I see a lot of posts talking about #FreeIntelligence with #AI. But is it really free if it suppresses #System1 and circumvents #System2? Will it lead to #CognitiveSurrender (Shaw & Nave 2026)? There is always a cost.

Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It's Totally Wrong
We're shockingly prone to "cognitive surrender." (Futurism)

#CognitiveSurrender is when people give up their own thinking to follow #ChatGPT. In the study, participants followed its recommendations 90%+ of the time when it was correct, and still followed its advice ~80% of the time when it was completely wrong.

futurism.com/artificial-intelligence/...

Post image

Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender via @SSRN papers.ssrn.com/sol3/papers.... #EduSky #EduSkyAI #TLSky #EdTech #AIinEducation #aisky #ai #criticalthinking #cognitivesurrender

Human Judgment in AI-Driven Workflows: Cognitive Sovereignty Over Surrender | Helen Edwards on LinkedIn

The agentic org has grabbed the corporate consciousness: AI agents running workflows, handing tasks to other agents, humans overseeing the whole thing from above. I've spent three years studying how professional expertise and judgment change with Gen AI, and I can tell you there is no shortcut here. If you want expertise, you have to stay meaningfully engaged.

Our latest research (which we'll publish soon) shows that people who integrate AI into their reasoning — who think with it, argue with it, stay inside the logic — maintain their professional judgment and get more capable over time. We call this cognitive sovereignty. People who get moved into the review seat — check AI's output, approve it, forward it — lose their edge, steadily and often without noticing. We call this cognitive surrender.

I'm no stranger to this. I had years as a technology executive in critical infrastructure — manufacturing control, power grids, many control and decision-support technologies — the kind of environments where automation decisions have real, immediate, physical-world consequences. The hardest part of automation was keeping the people sharp. When you automate the routine, the humans who remain need to be more expert, not less, and their skills atrophy fast when they stop doing the work that built those skills. This is a well-known paradox: humans are just not well suited to monitoring.

This used to be a problem for control rooms and cockpits. Now it's everywhere: in putting your board papers together, your quarterly analysis, your client recommendations, your legal review. Every time someone's job goes from "do the thinking" to "check what AI thought," you're building the same failure pattern that aviation has been fighting for forty years.

This part drives me crazy about the agentic conversation. The word "agentic" is always attached to the AI. Agentic workflows. Agentic systems. The agency belongs to the machine. I think we have the unit of agency backwards. We should be thinking about an agentic organization where the humans have agency in their relationship with AI, not the AI having the agency. Are they inside the reasoning? Can they challenge it? Are they building capability, or watching it drain away in the name of efficiency?

Currently the thinking is: design agents for maximum autonomy, then design jobs around monitoring agents. Our research says that produces the worst outcomes. The alternative is to design agents for maximum collaboration, then design jobs around reasoning with agents. Keep people where human judgment actually works — inside the cognitive process, not supervising from outside it. The agentic org needs humans who can still think, not just more autonomous AI agents sending validation back to passive people.

#ai #aiagents #cognitivesovereignty #stayhuman #futureofwork #agenticorg #agenticai

@philipncohen.com

I think this is the “cognitive sovereignty vs cognitive surrender” divide described by Helen Edwards on #LinkedIn

www.linkedin.com/posts/helenedwardskiwi_a...

#CognitiveSovereignty #CognitiveSurrender
