LinkedIn post by Alexander Martin Mussgnug: My recent publication has several AI hallucinations, including my own name (which is not "Anna Maria Mussgnug").

And no, this was not because I was carelessly using AI. The paper went through copyediting at the publisher (Cambridge University Press). I had submitted my references with first names abbreviated (Mussgnug, A. M.). CUP house style asks for first names spelled out. It looks like, somewhere in copyediting, an LLM was used to expand them without sanity-checking the outputs. So "Alexander Martin" became "Anna Maria", "Moritz" became "Michael", and so on. None of these changes were clearly flagged in the proofs sent to me, so I did not catch them.

The irony is not subtle, given the topic of my paper. The paper argues that we too easily throw overboard established norms and best practices when we turn to new and flashy AI applications. And this is exactly what happened: a tool dropped into a workflow without the verification practices that have governed copyediting for decades.

I work on critical AI studies and the philosophy of science, so I've been pondering this over the past weeks. There's a growing literature on AI in peer review, but copyediting is also part of the publication pipeline, and so far I haven't come across critical scholarship on the (apparently rather careless) use of AI at this step. It seems worth doing. My case is not the only one: many of my colleagues have recently had similar experiences.
Ironically, @universitypress.cambridge.org has long recommended a book known as "Butcher's" as the guide to CUP house style! Above is a transcription of the author's LinkedIn post, screenshotted by an understandably disgruntled copyeditor.