Posts by Morgan Thompson
If you work on values in science, and have noticed that the literature is a bit too focused on values in science in Anglophone democracies – and that it therefore stays silent on issues that we should be talking about – please consider submitting an abstract to our workshop! (24–25 August, Helsinki)
Lord of the Rings scene. Witch King menacingly stands above Eowyn and says “No man can kill me.” Next image is a triumphant close up of Eowyn’s face without her helmet. She says “I am no man.” Next image is Eowyn stabbing the Witch King through the helmet as he falls.
Women truly can do it all (defeat the Witch-king)
youtu.be/6xG4oFny2Pk?...
The tulip bulbs don’t know about Chicago’s Third Winter. :(
Excellent post on financial satisfaction and labor status for graduate students by Kino Zhao @kinozhao.bsky.social at APDA: apda.ghost.io/2025-survey-...
Short write-up here: dailynous.com/2026/03/10/u...
Oh, this is quite lovely
This song is getting me through a rainy Friday.
youtu.be/K0nRktrfhPk
Join us as we sit down with visiting fellow Chris ChoGlueck and learn more about his work here at the Center! #philsci
Watch on YouTube: https://youtu.be/8F89ElOJbv4
Philosophy of science has a lot to say about the nature of evidence, and EBM understands evidence and evidential quality very narrowly.
“The term ‘evidence-based medicine’… has a ring of obviousness to it which makes it difficult to argue against. Few doctors, one suspects, would be willing to assert that they do not attempt to base their clinical decision-making on available evidence.” (Goldenberg 2006, 2622)
Just adding onto your point: Goldenberg points out that the EBM framing seems to promote this misunderstanding about those who critique it. After all, who would oppose using the best evidence in medical decision-making?
www.sciencedirect.com/science/arti...
I saw a talk version of this paper and came away having learned so much about how distortions in medical data are produced from the top down through the categories we use. So glad it’s out now!
We are organising a pre-conference workshop on Values in science in the rest of the world in Helsinki in August, just before ENPOSS 2026. Please consider submitting an abstract!
Data labeling is notoriously brutal and underpaid work. Workers sometimes earn as little as a few dollars a day, work under algorithmic management, and, because they’re sometimes trying to train AI what not to do or show, they are often shown graphic, violent, or sexual content for hours at a time.
We are now accepting applications for the 2026 Doctoral Scholarship Competition! (The Competition closes at noon on Friday 27 March.) www.thebsps.org/funding/doct...
#philsci #philosophyofscience #phdfunding
Of so many things to be infuriated about, this is perhaps my biggest gripe. Pancreatic cancer patients, who face a future without hope, are being denied a potential game-changing therapy
www.mskcc.org/news/can-mrn...
“Confusion” doesn’t begin to describe our emerging predicament. Seventy-two percent of American teens have turned to A.I. for companionship. A.I. therapists, coaches and lovers are also on the rise. Yet few people realize that some of the frontline technologists building this new world seem deeply ambivalent about what they’re doing. They are so torn, in fact, that some privately admit they don’t plan to use A.I. intimacy tools. “Zero percent of my emotional needs are met by A.I.,” an executive who ran a team mitigating safety risks at a top lab told me. “I’m in it up to my eyeballs at work, and I’m careful.” Many others said the same thing: Even as they build A.I. tools, they hope they never feel the need to turn to machines for emotional support. As a researcher who develops cutting-edge capabilities for artificial emotion put it, “that would be a dark day.”
Developers I spoke to said the same incentives that make bots irresistible can stand in the way of reasonable safeguards, making outright abstention the only sure way to stay safe. Some described feeling stuck between protecting users and raising profits: They support guardrails in theory, but don’t want to compromise the product experience in practice. It’s little wonder the protections that do get built can seem largely symbolic — you have to squint to see the fine-print notice that “ChatGPT can make mistakes” or that Character.AI is “not a real person.” “I’ve seen the way people operate in this space,” said one engineer who worked at a number of tech companies. “They’re here to make money. It’s a business at the end of the day.”
But even if companies can curb serious dependence on A.I. companions — an open question — many of the developers I spoke with were troubled by even moderate use of these apps. That’s because people who manage to resist full-blown digital companions can still find themselves hooked on A.I.-mediated love. When machines draft texts, craft vows and tell people how to process their own emotions, every relationship turns into “a throuple,” a founder of a conversational A.I. business said. “We’re all polyamorous now. It’s you, me and the A.I.”
A genuinely alarming piece in the NYT about how the developers, scientists and assorted techbros behind "AI companions"/"synthetic care" do not even know or understand the potential harms of the tech they're developing but they're too greedy to stop themselves from developing it.
GM: Charisma check.
Mamdani: [rolls natural 20]
GM: that’s a d6 how did you
Mamdani: [direct to camera] Did you know you can check out board games at your local public library? 😊
NEW: The FDA recently approved the contraceptive implant Nexplanon for use up to five years (instead of three). But it also placed the device under a REMS protocol, a special layer of regulation that could affect access.
Many providers did not yet know about the REMS when I contacted them.
@schlawinerkreis.bsky.social @terezahendl.bsky.social and I are honored to receive the Charles Mills Prize from JOAP. We see the continuing relevance of Mills’ work and white ignorance, including for contemporary problems in official statistics.
We also hope that this draws attention to race/ethnicity in the German context and might contribute to expanding categorizations or processes of racialization discussed in philosophy of race.
We are pleased to announce the winner of the Charles Mills Prize: "Who Counts in Official Statistics? Ethical-Epistemic Issues in German Migration & the Collection of Racial or Ethnic Data"
Daniel James, Morgan Thompson & Tereza Hendl
#philsky
Read it here 👇
onlinelibrary.wiley.com/doi/10.1111/...
AAHHHHHHHHHHH
📖 Our Measurement Heretics monthly reading group looks at work that engages with scientific, medical, and social measuring practices of past and present ⚖️
It's free to attend, online and open to all! The next session is 17 Feb, 3.30-5pm.
Find out more & sign up 👇
www.durham.ac.uk/research/ins...
Last term I tried an experiment: I walked into my Tech and Design Ethics class, admitted that I had *no idea* what to do about ChatGPT - so I would let them figure it out.
As in: their first project was to decide and write the ChatGPT policy for the class.
Here's what happened: