This case is an important reminder that gendered spaces can be trans inclusive, and that it is not automatically discrimination when a cis person has to share a gendered space with a trans person. There is much more to do, but it’s helpful to have legal glimmers to work with to resist the rabid transphobia.
Posts by Dean Cooper-Cunningham
You make a fantastic point
Is it just me looking at the crowd and being totally unsurprised at the demographic? Cursory analysis suggests that the demographic that had it easiest is railing against a system that benefited them, pinning all of this country’s woes on immigrants rather than politicians.
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.
Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours mix: e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Protecting the Ecosystem of Human Knowledge: Five Principles
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
“We live on a floating rock, just get on with it” 💀
🤡🤡🤡
Had the pleasure of thinking with and being probed by these wonderful minds as part of the #EWIS2025 opening sessions. Maybe there is some hope in these transitional, liminal times…
Hope we cross paths, Gustav! Have been reading some of your work lately :)
🤡🤡🤡
In case you ever feel like a waste of space whose job is bullshit…
Jokes aside, how does shit always seem to float (into power)?
Could you post the signup link here?
Asked DALL-E to give me Moo Deng but Christmassy… didn’t expect a recipe 🦛🆘
Instead of listing my publications, as the year draws to an end, I want to shine the spotlight on the commonplace assumption that productivity must always increase. Good research is disruptive, and thinking time is central to high-quality scholarship and necessary for disruptive work.
The Amnesty International report was damning.
Are you doing simulations in IR? Consider using ChatGPT to help you design and run gamified learning exercises. It reduces time & resources, works as a sparring partner, and can be fully randomised. I wrote about this for the pedagogical course I took at Copenhagen Uni: curis.ku.dk/ws/portalfil...
📯📯
I’m in ✨
Listening to BBC Radio 4 this morning and, in light of the revelations from the UK Covid inquiry about senior civil servants’ communications during the pandemic, it’s very clear that many of the UK’s political leaders are incompetent donkeys who can’t formulate independent arguments 🫏
Hand holding a hard copy of the journal International Affairs.
Hard copies of my article with @whereisdean.bsky.social just arrived! It's on Queering the Responsibility to Protect and the need to integrate queer perspectives and persecution into research policy and practice on R2P. Article Open Access here! academic.oup.com/ia/article/9...
Paper also available in Spanish here: static-curis.ku.dk/portal/files...
The paper is co-authored by folks who attended a public workshop in London in August 2023. We are proud of the outcome document and hope it signals the intention of civil society to work with outgoing and incoming IE SOGIs.
Come join us in NYC on 23 Oct for the launch of our paper 'Queer Peace and Security: Recommendations to the United Nations Independent Expert on SOGI' 🏳️🌈🏳️⚧️🇺🇳
Register: eventbrite.com/e/queer-peace-…
Paper: static-curis.ku.dk/portal/files/369349432/Q...
A front cover for a booklet titled Queer Peace and Security: Recommendations to the United Nations Independent Expert on Sexual Orientation and Gender Identity. The cover is gradient-coloured white to pink from left to right, and the title text is bold and black.
Reviewing the final proofs for the outcome document of a truly collaborative venture. This one is something I’m really proud of and super excited to deliver 🏳️⚧️🏳️🌈 more soon…