
Posts by Mark Dingemanse

Welcome to our first post on the Language and Social Interaction page. We are a research group based at the Centre for Language Studies, Radboud University, NL. As our name suggests, our research revolves around topics related to language and social interaction. (1/3)

2 hours ago

FWIW, I am a non-native speaker and I think those arguments are a bit sad. There is tremendous value in finding your own voice, and humanity has for most of its history been multilingual bsky.app/profile/ding...

8 hours ago

Honestly don't want to offend. If you read that post, or my work, I think you'll find it replies to your Q about coding (e.g., technical debt) and other capabilities. If you prefer it shorter & proofed, here's another way of putting it pure.mpg.de/rest/items/i...

9 hours ago

Do you know when you use an LLM to 'write'? Can you be honest and choose not to use it when writing a grant? If the answers to both are yes, you know how to enforce it. Personal responsibility and high standards are a thing; we don't need to go full cop state.

9 hours ago
Don’t seek permission, center values When you're enamoured of a technology and someone points out important ethical challenges, a typical reflex is to seek permission:…

It's okay, you don't need my permission ideophone.org/dont-seek-pe...

As for what I do or do not realize about LLMs, feel free to peruse my work doi.org/10.5281/zeno... or that of colleagues like www.sciencedirect.com/science/arti...

9 hours ago
Mark Dingemanse (@dingemansemark@scholar.social) Something that made me happy this week, shared with permission: a junior researcher in our faculty asked for advice on whether or not to use a GenAI service for cross-validating his hand-coded systema...

I wrote guidance on GenAI and research integrity, adopted by my faculty. They set high standards, but reasonable ones. We don't worry about how to enforce them; we use them to empower our researchers. Example of how this works: scholar.social/@dingemansem...

2/2

18 hours ago

So lowest common denominator it is? The enforcement trap is like asking for permission: it centers technology over values. I honestly think we can do better ideophone.org/dont-seek-pe...

1/2

18 hours ago

(We agree, obviously)
bsky.app/profile/ding...

18 hours ago

And it is not presented by me or OP as the main problem, so please don't derail

18 hours ago

But where does this train of thought lead you? You bring it in opposition to a simple statement that standards matter and can be set. Asking 'but how to enforce?' is cop culture; I think we can do better. Surely we shouldn't settle for some lowest common denominator just because folks can be dishonest?

19 hours ago

the 'how' is literally specified in OP: prohibit it and let applicants sign a declaration that they haven't used it

what does it say about you (or your opinions of llm users) that your first worry is about enforcement? should we lower our standards just because dishonesty is possible?

1 day ago

if 'too many submissions' is a problem, this simple intervention would filter out the subset of applications least deserving of peer reviewer time & public money — after all, why should reviewers bother to read (and why would ERC bother to fund) stuff that applicants didn't bother to write?

1 day ago

A proofed version of our commentary is now available here pure.mpg.de/rest/items/i...

Also, the authors have posted their response to the full set of 25 commentaries bsky.app/profile/kmah...

1 day ago

Love this clear-headed idea for ERC — @erc.europa.eu please take note. Innovative work requires independent minds. A policy like this would send the signal that the funder values clear thinking over automated thoughtlessness, quality over quantity, earnest novel work over mediocre rehashings.

1 day ago

Ode to the original language model, or:
Give me literally Anything* instead of Large Language Models (LLMs)
*(no predictive coding either!)

By Lady Byronadrea LLMartin 1/n

6 months ago
Poster for the Kladderadatsch Deltatour. Several photos of Kladderadatsch and four concert venues: Goese Sas, Goes, Veere, Zierikzee.

List of Kladderadatsch performances, from https://www.kladderadatsch.nl/


23 Apr 2026
Deltatour: sailing trip with the Vrijbuiter
19:00
Goes, Zierikzee, and surroundings

05 May 2026
Bevrijdingsdag (Liberation Day) @ De Kaaij
20:00  De Kaaij

09 May 2026
Plufabriek
19:15 - 19:45
Paraplufabrieken

29 May 2026
Heksennacht 2026
20:00
Nijmegen city centre

16 Jun 2026
Straatmensen: 25-year anniversary
19:30
De Klinker


Kladderadatsch is going on a sailing trip through Zeeland! Aboard the tjalk Vrijbuiter we'll cruise through the Zeeland delta and give performances in Goese Sas, Goes, Veere, and Zierikzee, among other places, from Fri 24 to Mon 27 April.

This is the splashy start of a busy performance season; see www.kladderadatsch.nl

2 days ago

Published!

What Does ‘Human-Centred AI’ Mean? doi.org/10.3390/bs16...

Thank you to Andy Wills www.andywills.info for inviting me to his SI (Advanced Studies in Human-Centred AI), and for being open to me completely disagreeing with so many mainstream views on HCAI! Great reviews too.

2 days ago

Horrendous. @erc.europa.eu this is a terrible decision that is adding insult to injury. Modernize your system instead of making arbitrary and careless decisions. This is making things worse.

2 days ago

honestly how I read it as a kid

there's even a pun in Dutch, 'disnep' - di's nep, 'this is fake'

3 days ago

sorry to hear that, sounds depressing indeed 🫤

3 days ago
Bringing about the singularity by giving up thinking For Kurzweil, singularity is the point at which machine intelligence would be more powerful than human intelligence. I'm coming to…

This is how the singularity will arrive

As more and more people are happy to engage with mindless computational artefacts designed to be maximally persuasive, the result is nothing less than a massive denial of service attack on our collective intelligence.
ideophone.org/bringing-abo...

3 days ago

Source: arxiv.org/pdf/2604.09501

3 days ago
A four-quadrant diagram discussing arguments and research programs related to language models. Top left: "The String Statistics Strawman" with a faulty argument that LMs cannot learn language and refutations. Top right: "The "As Good As It Gets" Assumption" with a faulty argument that LMs are useless for serious language science and refutations. Bottom left: "The Statistical Structure Steelman Research Program" with key research questions on processing, learning, structure, and real patterns. Bottom right: "The "Yes, And" Research Program" listing shortcomings of current LMs and corresponding research opportunities.

Generative AI (Claude and ChatGPT) was used for brainstorming, summarizing, and finding relevant literature

do people generally find this signature llm style of oversimplistic dichotomies compelling? still?

because I see something like Figure 1, with the cute monikers, the four quadrants, and the glib summaries, and my GenAI dosimeter ☢️ goes wild

I am not surprised to see this line in the acknowledgements

3 days ago

May I introduce to you some really impressive Libyco-Berber monuments? In Bordj Hajar, close to Chemtou in Tunisia, three fragments of massive obelisks have been found. They have been reconstructed in the Chemtou museum (thanks, wikipedia!). Their original height was probably around 3.5 meters 🤯

4 days ago

Yup. This wastefulness is the reason I haven't bothered applying to the ERC since 2018. Funding is a lottery, and the ERC is one where they make you craft your ticket with your own blood, sweat, and tears only to throw it away unassessed. The ERC should follow modern funders and move to light pre-proposals

4 days ago
Preview
AI slop and the destruction of knowledge Cite as: van Rooij, I. (2025) AI slop and the destruction of knowledge. This week I was looking for info on what cognitive scientists mean when they speak of ‘domain-general’ cognition. I was curio…

Also see: irisvanrooijcogsci.com/2025/08/12/a... by @irisvanrooij.bsky.social , making similar arguments against Elsevier's embrace of GenAI

We need not be surprised that big publishers love machines of mediocrity, but we can speak out as experts

4 days ago

They undermine preservation by diluting the "body of scholarly content" with credible-sounding but unreviewed AI slop whose provenance is murky and which will inevitably erode the integrity of ACM's scholarly content.

If you're an ACM author, please speak out!
codeberg.org/mdingemanse/...

4 days ago

Second, it endangers ACM DL's goal to "bring together ACM’s ... scholarly content with tools that support research ... and long-term preservation": AI summaries actively work against supporting research by automating thoughtlessness, and they undermine the long-term preservation of ACM work

>

4 days ago

First, it endangers ACM DL's goal to serve as an "authoritative research, discovery, and publishing platform for computing and information technology", because AI summaries can never be authoritative, and instead bring at best a veneer of credibility. This is how authority is eroded & undermined.

>

4 days ago

🤡 'bUt theY are In ThEir RigHTs'
legally, possibly
morally and academically, definitely not

In fact, the generation of AI 'summaries' and 'podcasts' conflicts directly with the lofty goals of the ACM Digital Library

>

4 days ago