Oh well, on the bright side, I'll actually have time to do all the other things I was planning to do in the autumn.
Posts by Inkeri Koskinen
They should make it a lottery. At least at some stage of the application process.
I applied in 2025. In step 1, the reviewers found just one fault in my proposal: they doubted the feasibility of the (methodologically ambitious) project. -> B. In 2026 they stopped assessing feasibility in step 1. Now they have decided that I can't apply in 2027 because of that B in 2025.
Yes, certainly. The probable impact is just not as obvious and direct as with AI.
Sounds like ingrained anthropocentrism. It's easy to see that AI will have quite an impact on who we are, and from an anthropocentric (and particularly a humanistic) viewpoint that's alarming. Climate change will not change us. It'll "just" destroy the material preconditions of our societies.
There's also the handbook chapter that I submitted in December 2024 and haven't heard about since. My latest polite query to the editor has gone unanswered.
Picture from the PoS submission system, showing that a paper was submitted on Apr 15, 2025, and has been under review since Jan 09, 2026.
Celebrating the one-year anniversary of this submission. (For the first nine months, it was "with editor".)
Thank you!
Can you please add me too? Thanks for the list!
I'm a data scientist @ourworldindata.org and I need help from a botanist or someone local to Kyoto, Japan! 🌸
We present one of the world’s longest climate records: 1,200 years of peak cherry blossom dates in Kyoto.
The researcher who maintained it, Prof. Yasuyuki Aono, sadly passed away last year.
That is of course preferable. I have a trusted copy editor who checks my manuscripts, but I can only afford their services as long as I have external funding. Journals do nothing of the sort, despite the outrageous profits that the large academic publishing houses make.
What exactly is the harm if the answer is sometimes incorrect? A false positive leads to your manuscript including the original idiomatically weird expression. A false negative encourages you to look for some other expression.
And if instead of asking it to explain the fake idiom (which the LLM is of course happy to do), you ask whether the expression in question is idiomatically ok in academic English, it will tell you no, and suggest something else (which you can ignore while you look for a better expression).
There I prioritise practical solutions to the deeply ingrained linguistic inequalities in academia. At the same time I put considerable effort into teaching my students how to write properly in Finnish – I fear that the development of LLMs will lead to a world where writing becomes an elite skill.
Bad at languages? Quite the contrary, we're good at languages. What I'm arguing is that one can write well in a foreign language and still find LLMs helpful in language editing. I try to avoid Finnicisms when writing academic text, as there might not be enough context for the reader to understand.
We are talking about integrity and research ethics, and you can't formulate research ethical arguments without some ethical theory. As to the goals and values, there are multiple, and they pull in different directions (something that a non-idealising approach to ethical theory forces one to notice).
Of course you shouldn't use expressions you don't understand. But, to give a simple example, after an evening out, a Finn might ask whether you can throw them home. You probably wouldn't understand. I just checked: Claude is able to tell that the correct translation is "Can you give me a ride home?"
My native language is Finnish. It's not an Indo-European language, so despite the fact that I've e.g. translated several books from Indo-European languages, I'm never quite sure why some phrasing in English feels right – is it good, or is my Finno-Ugric intuition leading me astray?
Much of my research focuses on the criticism of the institutional structures that press us to publish too much. I genuinely hate that pressure; it is harmful in every way. But at the same time I don't think that the marginalisation of the actual is an acceptable basis for ethical theory.
I fail to see why.
Obviously not, and if you have read my comments in this discussion, you are well aware that I wouldn't think so.
Because they are large language models. The one thing they are good at is telling whether some expression is used a lot or not. (Of course such use requires that the user knows what they are talking about, but as this discussion is about researchers editing their own text, that can be assumed.)
Moreover, I still insist that there is a difference between using LLMs for generating text (I agree that that's a breach of research integrity) and using them for language editing. I fail to see how asking whether some expression is idiomatically acceptable could be a breach of research integrity.
Yes, of course it's problematic. But in the non-ideal world where we live there is very much pressure in academia towards producing more papers, faster. And people who write English on a native level have an advantage in that race.
It's not about needing the LLMs, it's about saving time, and thus reducing the disadvantage compared to native speakers.
Were your "non-expert speakers" academics with doctoral degrees and high level but not native-level English skills writing about their own research? That's the relevant demographic group here and the relevant context of the LLM usage.
And yes, I too have reviewed and very outspokenly rejected an AI-generated paper. But asking Claude to "please suggest five different ways in which I could reduce the word count of this sentence without changing its meaning" is not the same as adding hallucinated references to a manuscript.
If yours becomes common practice, non-native English speakers will not acknowledge their LLM use. LLMs reduce the time that use case 1 takes, reducing the disadvantage compared to native speakers. Use case 2 requires cognition, but the cognition and passive language skills of the user suffice.
Out of curiosity, would you refuse to review a paper where the authors (non-native English speakers) acknowledged the use of ChatGPT or Claude for language editing; e.g. checking whether some expression is idiomatically acceptable, or asking how to reduce the word count of a sentence?
I know the tweet is AI-generated when they use "," before "and".
“I will NOT sacrifice the Oxford comma. We've made too many compromises already; too many retreats. They assimilate the em dash and we fall back. They capture ‘not just X but y’ and we fall back. Not again. The line must be drawn here! This far, no further!”