📣 I am hiring a postdoc! aial.ie/hiring/postd...
applications are welcome from suitable candidates who are passionate about investigating the use of genAI in public service operations, with the aim of keeping governments transparent and accountable
pls share with your networks
Posts by Dr. Jay
apparently a lot of people need to hear this: harmful practices that violate fundamental rights are not a matter of ethics or morality. please don’t frame it as “unethical”. the “ethics” lens undermines the fact that it is unacceptable under any condition. not a matter of debate
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
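The overlapping sets in the caption can be sketched with plain Python sets. This is an illustrative toy only: membership follows the caption's own claims (e.g. GANs and Boltzmann machines sit in the "purple" subset because they are both generative and ANNs), and closed-source products such as ChatGPT and Siri are omitted because, as the caption notes, their implementation cannot be verified.

```python
# Toy model of Figure 1's set-theoretic view of "AI" terminology.
# Labels come from the caption; membership follows the figure's claims only.
anns = {"AlexNet", "BERT", "GAN", "BM"}            # artificial neural networks
generative = {"GAN", "BM"}                         # generative models
chatbots = {"A.L.I.C.E.", "ELIZA", "Jabberwacky"}  # chatbots

# The superset "AI" (hatched background) contains all of the above.
ai = anns | generative | chatbots

# The "purple" region: models that are both generative AND ANNs.
purple = generative & anns
print(sorted(purple))  # ['BM', 'GAN']

# As Table 1 stresses, none of these terms are orthogonal:
# a single system can fall into several subsets at once.
assert chatbots <= ai and anns <= ai
```

The point the figure makes, which the intersection above illustrates, is that terms like "LLM", "generative", and "chatbot" cross-cut each other, so none of them cleanly picks out the class of products one might wish to critique or proscribe.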
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Comment by Tom Dietterich on a linkedin post reading: "You can't "test-in quality" in engineering; you can't "review-in quality" in research. We need incentives for people to do better research. Our system today assumes that 75% of submitted papers are low quality, and it is probably right (I'll bet it is higher). If this were a manufacturing organization, a 75% defect rate would result in bankruptcy. Imagine a world in which you could have an AI system check the correctness/quality of your paper. If your paper passed that bar, then it could be published (say, on arXiv). Subsequent human review could assess its importance to the field. In such a system, authors would be incentivized to satisfy the AI system. This will lead to searching for exploits in the AI system. A possible solution is to select the AI evaluator at random from a large pool and limit the number of permitted submissions. I imagine our colleagues in mechanism design can improve on this idea." Original: https://www.linkedin.com/feed/update/urn:li:activity:7381685800549257216/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7381685800549257216%2C7382628060044599296)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7382628060044599296%2Curn%3Ali%3Aactivity%3A7381685800549257216)
Here's a rule of thumb: If "AI" seems like a good solution, you are probably both misjudging what the "AI" can do and misframing the problem.
>>
UNTIL IT’S DONE, Ep. 4: Sylvia Rivera
In the 1970s, queer New Yorkers had been pushed to the margins of NYC. Our trans neighbors faced immense cruelty. But in Sylvia Rivera, they found a champion.
As we combat Trump’s politics of darkness, her legacy can light the path forward.
we wrote this over 5 yrs ago
dl.acm.org/doi/abs/10.1...
As a cognitive scientist, I confirm that we don't know how humans think.
as a cognitive scientist, I confirm
Cover page of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Table 1 Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Table 2 Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:
Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...
🧵 1/
Since you ask: we don't need tools that reduce the art of academic writing to average authorless output.
Find your own voice: ideophone.org/find-your-ow...
Also, the efficiency frame is suspect. We don't need more papers, faster; we need slow science. osf.io/preprints/os...
What drives the bidirectional relationship between metabolic and mental ill-health?
Read our new metabolic psychiatry paper, “An interoceptive model of energy allostasis linking metabolic and mental health” www.science.org/doi/10.1126/... led by @saramehrhof.bsky.social @hugofleming.bsky.social
the ideology is well documented in Gebru & Torres's paper
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
www.firstmonday.org/ojs/index.ph...
i would rather read your imperfectly written, typo riddled idiosyncratic ideas that show you have read my papers and are familiar with my lab’s work than a grammatically perfect and generic genAI written application/email … every damn time
Getting close to 50k views and I'm wondering: is it just that everybody is scared to say this, and pleased I did? Because if so many of us agree (trust me, I'd know if 1k people disagreed with me, let alone 50k), why are we letting AI ruin our universities?
Together we can turn back the tide.
One immigrant detained at Fort Bliss was given psychotropic medication with no record of consent. Another was placed on suicide watch, with no record of anyone actually watching them.
A new investigation reveals horrific violations at the Fort Bliss detention facility, even by ICE’s own standards.
this is so good. our paper is mentioned (or rather quoted extensively) from ~25mins onwards
feel welcome to read our paper: firstmonday.org/ojs/index.ph...
LIVE NOW!🔥
We have our fellow East Coast friends @brujajagaming.bsky.social and @k0ppk0pp.bsky.social here today to play a chaotic game of casual commander!
Thank you to our sponsors @dragonshield.bsky.social & @moxfield.com !
Don’t forget to like & subscribe! RT to share! #magicthegathering #edh
but let's focus on the *potential* benefits...
recognisable — see e.g. 'Allround is the new excellent' from a while back www.ru.nl/en/staff/new... (part of a series of blog posts a bunch of us wrote from inside the continental European system)
Cursed like every start of the academic year but extra
“She also detailed the strategy of “credentialism,” by which women hoped that if they accrued enough credentials, their gender became irrelevant. (It did not.)”
In my experience my gender became more relevant the higher I got in the academic hierarchy. Misogynist attacks get worse too
Sam Altman poking a laptop and asking it to hurry up
See also: roleplaying games.
About that... we audited the open source status of Lumo and found it came in rock bottom in the EU Open Source AI Index osai-index.eu/news/lumo-pr... — consider sharing more details to rise through the openness ranks, @proton.me 🫣
#OpenSource #OpenWashing #lumo
Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation
I finally read computer scientist Joseph Weizenbaum’s 1976 classic “Computer Power and Human Reason.”
This book deserves a massive revival in our current age of grotesque and largely thoughtless AI creep into everything:
What could be more obvious than the fact that, whatever intelligence a computer can muster, however it may be acquired, it must always and necessarily be absolutely alien to any and all authentic human concerns? The very asking of the question, "What does a judge (or a psychiatrist) know that we cannot tell a computer?" is a monstrous obscenity. That it has to be put into print at all, even for the purpose of exposing its morbidity, is a sign of the madness of our times. Computers can make judicial decisions, computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not be given such tasks. They may even be able to arrive at "correct" decisions in some cases, but always and necessarily on bases no human being should be willing to accept. There have been many debates on "Computers and Mind." What I conclude here is that the relevant issues are neither technological nor even mathematical; they are ethical. They cannot be settled by asking questions beginning with "can." The limits of the applicability of computers are ultimately statable only in terms of oughts. What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.
There’s an enormous amount of stuff in this book I’d like to highlight, but start with:
“What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.”
Just FYI I really enjoyed the book that is being “critiqued” here. You might like it too. bookshop.org/p/books/the-...
Somehow they brought the convo to Dostoevsky AND Disco Elysium and it was like hitting a piñata of nerdery.