
Posts by Anna Burcombe

Of course. It's a beauty, isn't it!

1 month ago 0 1 0 0

Gorgeous magnolia tree bursting with flowers @qubelfastofficial.bsky.social

1 month ago 5 2 1 0
Network Seminar: "Hearing voices, suicidality, and AI-psychosis", 20th March, Birmingham and Online Location In person: Assembly Room, The Exchange, 3 Centenary Square, Birmingham, B1 2DR. Online: Teams meeting. Date and time 20th March 202...

📅Save the date! Our next Network seminar will be on "Hearing voices, suicidality, and AI-psychosis", March 20th 2026, Hybrid

This is a collaboration with @hearingvoicesmic.bsky.social and #ProjectEPIC

Find more info and registration here:

birminghamphenomenalnetwork.blogspot.com/2025/12/netw...

4 months ago 10 6 0 4

Two absolute legends coming to Northern Ireland: Prof Christina Puchalski and Prof Anne Vandenhoeck. I will be starstruck. Let everyone know who might want to know!

1 month ago 0 0 0 0

I enjoyed an amazing day today at @isa-rc22.bsky.social conference. So many interesting papers and conversations to reflect on. And also I have fallen in love with India.

2 months ago 2 1 0 0
More than 200 killed in coltan mine collapse in eastern DRC, officials say Rubaya mine produces about 15% of the world’s coltan, which is processed into tantalum, used in mobile phones

This tragic incident highlights the completely exploitative underpinnings of our dependency on digital devices, and the limits of any talk in the global north of 'digital ethics', 'tech ethics' etc.

www.theguardian.com/world/2026/j...

2 months ago 93 48 1 0

blueprint to a life

2 months ago 14 2 0 0
The list of most relevant themes we identified is:
1. Rhetoric of inevitability and technological determinism: presenting the adoption and use of 
(generative) AI as a fait accompli.
2. Exaggerated narratives: overstating the general capabilities of the technology, or leaving out that 
certain seemingly impressive capabilities can only be achieved under very specific experimental 
conditions.
3. Spurious comparison to human intelligence, or anthropomorphism: presenting AI as if it thinks or 
reasons like a human.
4. Ethics and critical washing: presenting AI as being ethically or critically examined but doing so only 
superficially and inconsequentially.
5. Wishful thinking and uncertain feasibility: assuming desired outcomes or functionality despite lacking 
realistic evidence they can be achieved.
6. GenAI is presented as indispensable: portraying AI as essential even when simpler or non-AI solutions 
are sufficient.
7. Unrealistic and ill-defined conditions: formulating requirements for adoption and use that are 
functionally impossible or too demanding to meet, psychologically implausible to follow, or that set 
unclear boundaries for acceptable and unacceptable behaviour, which could easily create 
inconsistencies. 
8. Resources as propaganda: resources for students, faculty, and other stakeholders are made available, 
but only to incentivize varying degrees of genAI use.
9. AI overkill: substituting the technology for tasks it was not designed for; from 
tutoring to teaching to research, everything must now be done with AI, even when it is not adequate.
Due to space limitations, rather than discussing all of the themes superficially, this essay addresses only the 
first four, which, in our view, are the most critical and most urgently in need of critical scrutiny.


For all those involved in drafting so-called AI guidelines, but being overwhelmed with nonsense, this is a lifesaver. Great work by Dagmar and Ariel!

Resisting Enchantment and Determinism: How to critically engage with AI university guidelines. doi.org/10.5281/zeno...

3 months ago 197 120 3 3

Human distress is understandable, when we make sense of a person's context. It might take time & space, some care & being safely held, quietness & rest...

All these things are possible in principle. But in reality so many people can't access them: structural inequalities maintain suffering.

3 months ago 10 2 0 0
To encourage reuse of our data, Pew Research Center, with support from the John Templeton Foundation, invites researchers to submit proposals for new research publications that use one or more of the following datasets (collectively, Datasets) from the Global Religious Futures (GRF) project:  Global Restrictions on Religion 2007-2022 dataset. This cumulative dataset includes measures of government restrictions on religion and social hostilities involving religion in nearly 200 countries and territories. Spring 2024 Survey. This dataset includes measures of religion and spirituality in 35 countries. (Comparable data was also collected in 2023 and 2024 for the United States. The downloadable materials which accompany the international dataset include additional information about U.S. data.) Dataset of Global Religious Composition Estimates for 2010 and 2020. This dataset includes estimates of the size of seven major religious groups in more than 200 countries and territories. We encourag


Please share: Pew Research Center will provide $3,000 each for 19 new papers using our recent global datasets. We encourage reuse of our Pew-Templeton Global Religious Futures data!
www.pewresearch.org/2026/01/16/seeking-resea...

3 months ago 137 110 1 6
Critical AI Literacies for Resisting and Reclaiming | Radboud University This course is designed to foster critical AI literacies in participants to empower them to develop ways of resisting or reclaiming AI in their own practices and social context.

☀️ Summer School 📚

“Critical AI Literacies for Resisting and Reclaiming”

Organisers and teachers:
👉 @marentierra.bsky.social
👉 @olivia.science
👉 myself

Deadline for application:
🐦 31 March 2026 (early bird fee)

1/🧵

www.ru.nl/en/education...

3 months ago 101 59 2 3

CAIL = prerequisite knowledge for a critical perspective, such as being able to tell nonsense hype apart from genuine theoretical computer science claims. For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. olivia.science/ai

🧵

5 months ago 458 161 10 65

Academic books should have more of this playful aesthetic

3 months ago 1 0 0 0
The Oligarchs Pushing for Conquest in Greenland Trump’s fixation on filching the island territory from Denmark may seem like the demented ravings of a mad king. But to a cohort of plutocrat weirdos, it makes perfect sense.

Casey Michel, on the oligarchs who want Greenland

newrepublic.com/article/2051...

3 months ago 788 360 49 60

Such a lovely book title. Added to the list!

3 months ago 1 0 0 0

I love this paragraph in @neilselwyn.bsky.social book "what is digital sociology" on Lewis Mumford.
"Forget the damned motor car and build the cities for lovers and friends."

3 months ago 3 0 1 0
I've signed this petition calling for the government to stop X allowing their chatbot Grok to ‘digitally undress’ women and share sexualised and illegal photos of children. Will you sign too?

Sign it, share it. Can't believe we have to petition our leaders for monsters to stop being monsters but it just seems par for the course these days.

3 months ago 5 4 0 0
Thinking Through...The AI Con & Deconstructing the Hype (YouTube video by Thinking Through Podcast)

Worth waiting for! @alexhanna.bsky.social and I did a LOT of podcast guesting last spring & summer and this one, one of my favorites, is finally available:

www.youtube.com/watch?v=LJmk...

3 months ago 49 16 1 4
TNC | Sundance Interview 2026 | Valerie Veatch The untold origins of artificial intelligence lie not in machines but in power, revealing the fantasies behind the hype that got us here and where we go next.

"Ghost in the Machine approaches ubiquitous questions like “What is AI?”, “Who is building it?”, and “What will humans become?” by exploring how emerging technologies have historically reshaped identity, culture, and global power..."

Read the full interview with @valerieveatch.bsky.social here!

3 months ago 2 2 0 0
Table 1: Typology of traps, what goes wrong if not avoided, and how the traps can be avoided. Note that all traps in a sense constitute category errors (Ryle & Tanney, 2009) and the success-to-truth inference (Guest & Martin, 2023) is an important driver in most, if not all, of the traps.


We present a typology of traps to avoid:

1. Believing that AI systems are minds

2. Believing that AI systems are theories

3. Believing that cognitive science can be automated.

Learn to recognise and avoid these traps. Failure to avoid leads to numerous problems.

3/🧵

3 months ago 235 70 9 2

Queen's @qubelfastofficial.bsky.social looking beautiful this morning.

3 months ago 4 1 0 0
A medium close-up of a woman with short dark hair, wearing a dark green top and a patterned gray scarf. She is holding a pen and looking slightly up and to the left with a focused, engaged expression. The background is slightly blurred, showing a window and a dark-colored laptop.

A man in a gray-green collared shirt and dark-rimmed glasses is seated, looking intently forward and slightly up, appearing engaged in a discussion or lecture. He is holding a tablet that displays a screen with a flowchart or diagram featuring colored boxes. He is seated at a table with a dark wooden edge near a large window. Two other people are visible, blurred, seated behind him in the background: a man in a dark shirt and a woman in a black and white plaid shirt and glasses.

Close-up shot of a woman with long blonde hair and maroon-framed glasses looking intently down at a silver laptop screen, focusing on her work. She is wearing a green patterned top with a ruffled collar. A man in a dark jacket is visible, out of focus, seated behind and to the left of her.

Ready to upskill or change direction in 2026?

Ulster University’s Skill Up programme offers FREE postgraduate certificates & short courses funded by the Department for the Economy NI starting January 2026.

Read more: https://ow.ly/CR5L50XCCk2

#WeAreUU

4 months ago 2 1 0 0

I really enjoyed your lecture this evening! Thank you!

4 months ago 1 0 1 0

I had a magical few days in Cambridge, marking the end of a chapter. And having brunch with the absolute legend and a hero of mine @karenod.bsky.social made it even more special. Thank you @westcotthousecam.bsky.social @angliaruskin.bsky.social

4 months ago 4 1 0 0

Registered and hoping/planning to go!

4 months ago 2 0 0 0

Hospital colleague said to me she doesn't feel equipped for the increasing psychological distress that patients are bringing. I know she's highly skilled & caring, but just so stretched & overwhelmed. The solution here isn’t more training, but some regular space to pause, reflect, & feel supported.

4 months ago 11 4 0 1

My fungus is fruiting!

4 months ago 2 0 0 0

Wow- I hear you. I felt this heavy in my chest.

10 months ago 1 0 0 0

It sounds so good!

11 months ago 1 0 0 0