
Posts by Moritz Kriegleder

can't wait to discuss conscious experience in such a stimulating space as the @louisianamuseum.bsky.social!

2 weeks ago 5 0 0 0

will read it and get back to you!

4 weeks ago 1 0 0 0

awesome, I was looking forward to that since the talk at ASSC!

4 weeks ago 1 0 1 0

Excited to co-organize the next edition of @moc7.bsky.social in Denmark!

1 month ago 6 2 0 0
ECogS 2026 What does embodied cognitive science have to say about recent developments in AI, and what might these developments reveal about the nature and limits of embodied cognition? Under the theme “From Embo...

Interesting lineup for the next ECogS conference in Okinawa!
www.oist.jp/conference/e...

1 month ago 1 0 0 0

taking the simulation hypothesis seriously is just creationism for tech people.

1 month ago 10 3 0 0

awesome, will use that for a new draft on indeterminism in LLMs!

2 months ago 1 1 1 0
table 1 extract from Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1


5 Ghostwriter in the Machine
A unique selling point of these systems is conversing and writing in a human-like way. This is eminently understandable, although wrong-headed, when one realises these are systems that essentially function as lossy² content-addressable memory: when input is given, the output generated by the model is text that stochastically matches the input text. The reason text at the output looks novel is that, by design, the AI product performs an automated version of what is known as mosaic or patchwork plagiarism (Baždarić, 2013): due to the nature of input masking and next-token prediction, the output essentially uses similar words in similar orders to what it has been exposed to. This makes the automated flagging of plagiarism unlikely, which is also true when students or colleagues perform this type of copy-paste-and-thesaurus trick, and true when so-called AI plagiarism detectors falsely claim to detect AI-produced text (Edwards, 2023a). This aspect of LLM-based AI products can be seen as an automation of plagiarism and especially of the research paper mill (Guest, 2025; Guest, Suarez, et al., 2025; van Rooij, 2022): the "churn[ing] out [of] fake or poor-quality journal papers" (Sanderson, 2024; Committee on Publication Ethics,


In addition, who is held accountable if nobody with intent authored the text? While the original data fed into the system is certainly written with goals, messages, and audiences in mind, jumbling this into ad-libbed word salad removes authorial intent (Bender et al., 2021). So do the companies who own the chatbot own the text, or do the original authors? These questions denote legal battles, which are currently being fought in the public eye and which affect all of us in all roles, not just as academics (Creamer, 2025; Knibbs, 2024; Reuters, 2025). Either way, even if the courts decide in favour of the companies, we should not allow these companies with vested interests to write our papers (Fisher et al., 2025), or to filter what we include in our papers. We do not operate based only on legal precedents, but also on our own ethical values and scientific integrity codes (ALLEA, 2023; KNAW et al., 2018), and we have a direct duty, as with previous crises and in general, to protect the literature from pollution. In other words, the same issues as in previous sections play out here, where essentially every paper produced using chatbot output must now declare a conflict of interest, since the output text can be biased in subtle or direct ways by the company that owns the bot (see Table 2).

Seen in the right light, with AI products understood as content-addressable systems, we see that framing the user, the academic in this case, as the creator of the bot's output is misplaced. The input does not cause the output in an authorial sense, much like input to a library search engine does not cause relevant articles and books to be written (Guest, 2025). The respective authors wrote those, not the search query!
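The mechanism the excerpt describes, next-token prediction as a lossy, content-addressable lookup over training text, can be illustrated with a toy sketch. This is my own minimal example, not code from the paper; the corpus, function name, and seed are invented for illustration. A bigram model generates text purely by looking up which words followed the current word in its training data, so every adjacent word pair in its output is, by construction, a stitched-together fragment of the training text:

```python
# Toy illustration (hypothetical example, not the paper's code):
# a bigram "next-token predictor" is literally a content-addressable
# lookup over its training text, so its output is a patchwork of
# phrases it has already seen.
import random
from collections import defaultdict

corpus = ("the model predicts the next token and "
          "the next token matches the training text").split()

# index: current word -> list of words that followed it in the corpus
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(start, n, seed=0):
    """Stochastically continue `start` for up to n steps."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = table.get(out[-1])
        if not options:          # dead end: no continuation ever seen
            break
        out.append(random.choice(options))  # stochastic match to training data
    return " ".join(out)

print(generate("the", 6))
```

Real LLMs condition on far longer contexts and vast vocabularies, but the point of the sketch survives the scale-up: the generator never emits a word transition it has not already been exposed to, which is the patchwork character the excerpt describes.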


Third, the peculiar idea that somehow we don't need to read, write, or perform literature reviews anymore; popping up like a satanic mushroom in almost all so-called OK uses of LLMs.

Companies writing our papers via their chatbots is not scientific at all. See section 5: doi.org/10.31234/osf...

7/

6 months ago 111 30 2 9

but I was asking for other approaches that use POMDPs

3 months ago 0 0 0 0

Mazviita Chirimuuta in an interview on her book The Brain Abstracted outsidecolour.net/wp-content/u...

3 months ago 2 1 0 0

"The more important issue [...] is not that physics is more ‘mature’ but that it deals with inherently simpler objects. [...] the brain and nervous system are exquisitely sensitive to surroundings – as they must be, for without this we could not perceive and respond to the world around us."

3 months ago 7 2 1 0

i love reading open peer review of papers. it's like getting a deep dive into methodological issues in the format of academic gossip.

3 months ago 2 0 0 0

how is that connected to my question?

3 months ago 0 0 1 0
Closing the loop: how semantic closure enables open-ended evolution? Abstract. This study explores the evolutionary emergence of semantic closure—the self-referential mechanism through which symbols actively construct and in

Happy to share my new paper w/ @cgershen.bsky.social, just published at @royalsocietypublishing.org Interface!

Open Access🔓: royalsocietypublishing.org/rsif/article...

Instead of proposing a new theory, we offer a synthesis in theoretical biology. Want to know more? Read the full thread./1 👇🧵

4 months ago 25 12 1 0

super interesting stuff, you should discuss it with @yoginho.spore.social.ap.brid.gy !

3 months ago 1 0 1 0

you can check out the full comment and the target article here:
authors.elsevier.com/a/1mDV85bD-s...

4 months ago 3 0 0 0

new commentary on embodiment in information systems with Tom Froese! we discuss Pitti et al.'s framework to formalise the embodied mind with information theory and sketch a way to cover the complementary perspective of the lived body and subjective experience as well

4 months ago 11 1 2 0

i know a couple of people working on AI coding of interview data. do you think this could help with the criticism of interviewer bias? i am a bit afraid that this will just introduce unexplainable LLM bias

4 months ago 3 0 1 0

could you point me to some intro to neuroanthropology? sounds interesting!

4 months ago 2 0 1 0

yes, the seizure studies also got me interested in the methods at first. I think it's a good motivation, and I don't think we should adapt it to neuroscience, but methodological pluralism can only be productive when there is some cross-perspectival assessment.

4 months ago 1 0 1 0

I think a way to get them to move beyond simple questionnaires would be a clear example of how it could be useful to them. do you have any favorite studies that used micropheno that could convince them to do the work?

4 months ago 1 0 3 0

i am very interested in experimental phenomenological methods but i am not sure what it takes to convince neuroscientists to pick them up. most neuroscientists i talked to seem to be strong reductionists who stick exclusively to third-person methods

4 months ago 1 0 1 0

yeah but since Varela's proposal 30 years ago not many neuroscientists have picked them up

4 months ago 1 0 1 0

great intersection, a while ago I shared a plot of how interest in phenomenology soared over the last two decades. we finally need that convergence of neuro and pheno methods!

4 months ago 2 0 1 0

you had good intuition as well ;)

4 months ago 1 0 0 0
Map — State of Neuroscience 2025: Trends & Breakthroughs | The Transmitter A comprehensive look at major trends shaping the neuroscience landscape in 2025

from the amazing neuroscience database of @thetransmitter.bsky.social
stateofneuroscience.thetransmitter.org/map/

4 months ago 5 0 0 0

I started my PhD on the philosophy and neuroscience of consciousness in 2022. Even though people advised against it, I think it was good timing.

4 months ago 19 1 3 1

imagine there were a way to experimentally study subjective qualities 🤔

4 months ago 2 0 0 0

important point! that criticism applies to a lot of the neuroscience of consciousness; most of the theories are more like bundles of untested hypotheses

5 months ago 3 1 0 0

oh awesome, I have enough caffeinated drinks to keep me awake so please ask a question!

5 months ago 1 0 0 0