Welcome to our first post on the Language and Social Interaction page. We are a research group based at the Centre for Language Studies, Radboud University, NL. As our name suggests, our research revolves around topics related to language and social interaction. (1/3)
Posts by Mark Dingemanse
FWIW I am a non-native speaker and I think those arguments are a bit sad. There is tremendous value in finding your own voice, and humanity has for most of its history been multilingual bsky.app/profile/ding...
Honestly don't want to offend. If you read that post, or my work, I think you'll find it answers your Q about coding (e.g., technical debt) and other capabilities. If you prefer it shorter & proofed, here's another way of putting it pure.mpg.de/rest/items/i...
Do you know when you use an LLM to 'write'? Can you be honest and make the choice not to use it when writing a grant? If the answers to both are yes, you know how to enforce. Personal responsibility and high standards are a thing; we don't need to go full cop state.
It's okay, you don't need my permission ideophone.org/dont-seek-pe...
As for what I do or do not realize about LLMs, feel free to peruse my work doi.org/10.5281/zeno... or that of colleagues like www.sciencedirect.com/science/arti...
I wrote guidance on GenAI and research integrity, adopted by my faculty. They set high standards, but reasonable ones. We don't worry about how to enforce them; we use them to empower our researchers. Example of how this works: scholar.social/@dingemansem...
2/2
So lowest common denominator it is? The enforcement trap is like asking for permission: it centers technology over values. I honestly think we can do better ideophone.org/dont-seek-pe...
1/2
(We agree, obviously)
bsky.app/profile/ding...
And it is not presented by me or the OP as the main problem, so please don't derail
But where does this train of thought lead you? You bring it in opposition to a simple statement that standards matter and can be set. Asking 'but how to enforce it' is cop culture; I think we can do better. Surely we shouldn't settle for some lowest common denominator just because folks can be dishonest?
the 'how' is literally specified in OP: prohibit it and let applicants sign a declaration that they haven't used it
what does it say about you (or your opinions of llm users) that your first worry is about enforcement? should we lower our standards just because dishonesty is possible?
if 'too many submissions' is a problem, this simple intervention would filter out the subset of applications least deserving of peer reviewer time & public money — after all, why should reviewers bother to read (and why would ERC bother to fund) stuff that applicants didn't bother to write?
A proofed version of our commentary is now available here pure.mpg.de/rest/items/i...
Also, the authors have posted their response to the full set of 25 commentaries bsky.app/profile/kmah...
Love this clear-headed idea for ERC — @erc.europa.eu please take note. Innovative work requires independent minds. A policy like this would send the signal that the funder values clear thinking over automated thoughtlessness, quality over quantity, earnest novel work over mediocre rehashings.
Ode to the original language model, or:
Give me literally Anything* instead of Large Language Models (LLMs)
*(no predictive coding either!)
By Lady Byronadrea LLMartin 1/n
Poster for the Kladderadatsch Deltatour. A number of photos of Kladderadatsch and four concert venues: Goese Sas, Goes, Veere, Zierikzee.
List of Kladderadatsch performances, from https://www.kladderadatsch.nl/
23 Apr 2026  Deltatour: sailing trip with the Vrijbuiter  19:00  Goes, Zierikzee, and surroundings
05 May 2026  Liberation Day @ De Kaaij  20:00  De Kaaij
09 May 2026  Plufabriek  19:15 - 19:45  Paraplufabrieken
29 May 2026  Heksennacht 2026  20:00  Nijmegen city centre
16 Jun 2026  Straatmensen: 25-year anniversary  19:30  De Klinker
Kladderadatsch is going on a sailing trip through Zeeland! Aboard the tjalk de Vrijbuiter we'll cruise through the Zeeland delta and give performances in Goese Sas, Goes, Veere, and Zierikzee, among other places, from Fri 24 to Mon 27 April
This is the spectacular start of a busy performance season: see www.kladderadatsch.nl
Published!
What Does ‘Human-Centred AI’ Mean? doi.org/10.3390/bs16...
Thank you to Andy Wills www.andywills.info for inviting me to his Special Issue (Advanced Studies in Human-Centred AI), and for being open to me completely disagreeing with so many mainstream views on HCAI! Great reviews too.
Horrendous. @erc.europa.eu this is a terrible decision that is adding insult to injury. Modernize your system instead of making arbitrary and careless decisions. This is making things worse.
honestly how I read it as a kid
there's even a pun in Dutch, 'Disnep': di's nep, 'this is fake'
sorry to hear that, sounds depressing indeed 🫤
This is how the singularity will arrive
As more and more people are happy to engage with mindless computational artefacts designed to be maximally persuasive, the result is nothing less than a massive denial of service attack on our collective intelligence.
ideophone.org/bringing-abo...
Source: arxiv.org/pdf/2604.09501
A four-quadrant diagram discussing arguments and research programs related to language models. Top left: "The String Statistics Strawman" with a faulty argument that LMs cannot learn language and refutations. Top right: "The "As Good As It Gets" Assumption" with a faulty argument that LMs are useless for serious language science and refutations. Bottom left: "The Statistical Structure Steelman Research Program" with key research questions on processing, learning, structure, and real patterns. Bottom right: "The "Yes, And" Research Program" listing shortcomings of current LMs and corresponding research opportunities.
Generative AI (Claude and ChatGPT) was used for brainstorming, summarizing, and finding relevant literature
do people generally find this signature llm style of oversimplistic dichotomies compelling? still?
because I see something like Figure 1, with the cute monikers, the four quadrants, and the glib summaries, and my GenAI dosimeter ☢️ goes wild
I am not surprised to see this line in the acknowledgements
May I introduce to you some really impressive Libyco-Berber monuments? In Bordj Hajar, close to Chemtou in Tunisia, three fragments of massive obelisks have been found. They have been reconstructed in the Chemtou museum (thanks, wikipedia!). Their original height was probably around 3.5 meters 🤯
Yup. This wastefulness is the reason I haven't bothered applying to ERC since 2018. Funding is a lottery, and ERC is one where they make you craft your ticket with your own blood, sweat, and tears only to throw it away unassessed. ERC should follow modern funders and move to light pre-proposals
Also see: irisvanrooijcogsci.com/2025/08/12/a... by @irisvanrooij.bsky.social , making similar arguments against Elsevier's embrace of GenAI
We need not be surprised that big publishers love machines of mediocrity, but we can speak out as experts
They undermine preservation by diluting the "body of scholarly content" with credible-sounding but unreviewed AI slop whose provenance is murky and which will inevitably erode the integrity of ACM's scholarly content.
If you're an ACM author, please speak out!
codeberg.org/mdingemanse/...
Second, it endangers ACM DL's goal to "bring together ACM’s ... scholarly content with tools that support research ... and long-term preservation": AI summaries actively work against supporting research by automating thoughtlessness, and they undermine the long-term preservation of ACM work
>
First, it endangers ACM DL's goal to serve as an "authoritative research, discovery, and publishing platform for computing and information technology", because AI summaries can never be authoritative, and instead bring at best a veneer of credibility. This is how authority is eroded & undermined.
>
🤡 'bUt theY are In ThEir RigHTs'
legally, possibly
morally and academically, definitely not
In fact, the generation of AI 'summaries' and 'podcasts' conflicts directly with the lofty goals of the ACM Digital Library
>