
Posts by Lenandlar Singh

Critical AI Literacy: Empowering people to resist hype and harms in the age of AI -- Symposium 2025 (YouTube video by Iris van Rooij)

Critical AI Literacy Symposium, with Dagmar Monett, @lucyavraamidou.bsky.social & Miquel Pérez Torres, Linda Mannila, and @olivia.science

🎬 Video recording: www.youtube.com/watch?v=Fxyg...

📢 Symposium website: www.ru.nl/en/about-us/...

🤔 Critical AI Literacy website: www.ru.nl/en/research/...

6 months ago

I collected some materials on critical AI from my perspective; hope it's useful: olivia.science/ai

"CAIL is an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as to tell apart nonsense hype from true theoretical computer scientific claims"

7 months ago
A photo of the Center for Dewey Studies with text reading "Emerson's Aesthetic Transcendentalist Legacy: The Idea of Beauty in American Art" and "A Dewey Center Lunchtime Talk by Dr. Nicholas Guardiano." In between the text is an image of Fallingwater, a home designed by Frank Lloyd Wright, and Thomas Cole's painting titled "View from Mount Holyoke, Northampton, Massachusetts, After a Thunderstorm-The Oxbow."


Join us tomorrow at 1 pm for Dr. Nicholas Guardiano’s talk, "Emerson's Aesthetic Transcendentalist Legacy: The Idea of Beauty in American Art," in the Center for Dewey Studies (Morris Library Basement). We will have snacks and beverages available, and you are also welcome to bring your lunch.

7 months ago
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.


Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

7 months ago

Delighted to see this out.

7 months ago
Metaphors of AI in Education: Discourses, Histories and Practices | Journal of Interactive Media in Education

Hot off the press! It's the JIME Special Collection, Metaphors of AI in Education: Discourses, Histories and Practices

edited by myself, @eam0.bsky.social, Giselle Ferreira and Kyungmee Lee

Check it out if interested in #AI in #Education (or metaphors generally)

jime.open.ac.uk/collections/...

7 months ago
Donate to Support Helen's Children After Her Passing, organized by Marcus Arvan. Help provide Helen De Cruz's children with a better start in life after her un…

For those who don't know, there is a gofundme set up for Helen's family.

They are very humble to not mention it, so I will.

www.gofundme.com/f/support-he...

10 months ago

Please consider contributing to this gofundme, this is really important for helping support Helen's family

10 months ago

Besides the personal tragedy, Helen's untimely passing leaves their family in a difficult financial situation. Donate to support them:

10 months ago

Condolences and love to you all.

10 months ago

A gofundme fund for Helen de Cruz's children, set up with Helen's blessing by @marcusarvan.bsky.social:

10 months ago

Free ECR/PGR workshop in Nottingham next month. Details in the thread below; email or DM me to register. It will be a fun and interesting day. Please share.

11 months ago

You wouldn't know how much but you have made a lasting impression upon my heart in many ways. Thank you for everything. My love and peace to you and your family.

11 months ago
The vast majority I met, however, just seemed fully captured by the capitalist LLM hype machine, trying to build their academic reputations around validating LLM applications that might promise jobs and continued funding, in the exact same way industry seems to be desperately betting on a techno-utopian deus ex machina to our global unrest about diversity and climate. HCI has always been a bit dominated by corporate pop culture, but this extreme, at this moment, has never felt more tragic and heartless.


Frustrating but imho accurate description from @amyko.phd's trip report from #CHI2025, the largest academic conference on human-computer interaction.

medium.com/bits-and-beh...

11 months ago
addressing ‘the gap’ in the field. One of the conventions of academic life is the work of justification. To justify. To say why we are going to do what we are going to do. We regularly have to justify why we want to research somethi…

More patthomson.net/2019/03/11/a...

11 months ago

Prof Pat Thomson has written about this and I find it so very useful patthomson.net/2021/07/05/t...

11 months ago

I am hiring a new Lab Manager to help run the NYU Center for Conflict & Cooperation.

We start reviewing apps on MAY 1st and it pays over $58,000. Please share with anyone who might be interested!

Please apply here: apply.interfolio.com/166620

See more details below:

11 months ago
The Plagiarism Machine. If you don't mind, I'd like to return to that other big story that appeared recently in The Atlantic. No, no, not the one where editor-in-chief Jeffrey Goldberg describes how he was added to the Trump...

How can anyone cultivate a moral relationship to creative and intellectual work – their own and others' – if we're building it atop (or rather, pushing a button to autogenerate from) a technology of deception and theft? 2ndbreakfast.audreywatters.com/the-plagiari...

1 year ago
AI and the Disruption of Personhood | Oxford Intersections: AI in Society | Oxford Academic

I published a new article on "AI & the Disruption of Personhood" in the Oxford Intersections doi.org/10.1093/9780...

👤 Can AI possess personhood or be part of our personhood?
@esdit.bsky.social @utwente.bsky.social @utwentephilosophy.bsky.social #oxforduniversitypress #oupacademic #ethics #ai (1/4)

1 year ago

New article on AI and epistemic agency published in the journal Social Epistemology

www.tandfonline.com/doi/pdf/10.1...

1 year ago
Editing Services — Emily Herring

Hello bsky! Hire me to edit for you! I provide fast, professional, and personalised proofreading, copy editing, and developmental editing services. wellreadherring.com/editing

Reposts appreciated!

1 year ago
SIGCSE 2025: Rumination, resistance. After more than 20 years attending more than 100 conferences, I still get a thrill from 3–4 days of academic networking. Connecting with…

Five themes from #SIGCSE2025: amyjko.medium.com/sigcse-2025-...

1 year ago
Venturing into the Unknown: Critical Insights into Grey Areas and Pioneering Future Directions in Educational Generative AI Research | TechTrends. The latest paper I can proudly add to my list of publications.

jondron.ca/venturing-in...

1 year ago
AI is ‘beating’ humans at empathy and creativity. But these games are rigged | MJ Crockett. Research pitting people against AI systems gives AI an edge by asking us to perform in machine-like ways.

My new piece in @theguardian.com

Techno-optimism is human pessimism.

www.theguardian.com/commentisfre...

1 year ago
Not So Fast. On Friday morning, a few hours after I sent you the last newsletter, we picked Poppy up from the veterinary hospital. The nurse presented us with the little tennis ball Poppy had picked up in the park...

What if technology *isn’t* changing faster than it’s ever changed before? 2ndbreakfast.audreywatters.com/not-so-fast/...

1 year ago
Science as Culture Forum on “Tech Oligarchy”

Call for papers: "Tech Oligarchy" Forum by @sciasculture.bsky.social

#bigtech #sts #oligarchy #Trump #ElonMusk

think.taylorandfrancis.com/special_issu...

1 year ago
Prophets of progress: How do leading global agencies naturalize enchanted determinism surrounding artificial intelligence for education? | Journal of Applied Learning and Teaching

journals.sfu.ca/jalt/index.p...

1 year ago