This editorial discusses the critical value of human-generated scientific writing in the era of large language models (LLMs), arguing that writing is essential to structured thinking and research comprehension.

Writing as Thinking: The act of writing structures thoughts, sorts research data, and identifies the main message, unlike LLMs, which may lack true understanding or accountability.

LLM Hallucinations: LLM-generated text requires rigorous verification because these models can produce incorrect information or fabricated references.

Human vs. AI Roles: While LLMs are useful tools for brainstorming, improving grammar, or overcoming writer's block, human researchers must retain control of the creative task of shaping a compelling narrative.
Writing forces your brain to coordinate memory, reasoning, and meaning-making simultaneously.
Every time you write, you rewire your brain toward clearer thinking. Every time you delegate the writing to an LLM, you rewire it toward passive consumption.