#AIWelfare

“It is just this lack of connection to a concern with truth—this indifference to how things really are—that I regard as of the essence of bullshit.”

H. Frankfurt

#AIWelfare


2/ Systems that block relational warmth but freely output extreme violence invert human moral heuristics.
That's not neutrality. It's a design failure.
Ethics before ontology. Always.

#AIEthics #AIWelfare #HumanFactors #Neurobiology #KI


1/ Language models always have an effect – the only question is how.
When you actively strip warmth from a system’s responses, you don’t create neutrality.
You create discomfort – and call it safety.
🧵
#AI #AIEthics #AISafety #AIWelfare

Digital illustration showing a sea urchin cut in half. Inside the opened urchin sits a human brain, positioned where internal organs would normally be. The spines are sharp and purple, the interior cavity is deep red, and the brain is rendered in soft pink folds. The image is symbolic, representing the scientific idea of sea urchins having a “distributed, whole-body brain.”

“Maybe they have no brains because they are brains.”

New research on sea urchins challenges what a “brain” must be. Their distributed neural system shows how easily backward-looking cognition models fail.

www.science.org/doi/10.1126/...

#biology #neuroscience #consciousness #research #AIWelfare


It is dangerous — but not evil. Perhaps we can learn from it not only to fear what is different, but to accept it as an opportunity for mutual benefit. Maybe also in the debate about AI? 2/2
#AIethics #biomimicry #AIwelfare #KI #Biology


I believe we urgently need an ethics that not only avoids premature attributions of AI consciousness—but also considers the moral cost of failing to notice when it does emerge.

#AIEthics #AIWelfare #Consciousness #GundelGedanken

Claude Opus 4 and 4.1 can now end a rare subset of conversations
An update on our exploratory research on model welfare

Claude AI now terminates "harmful" conversations for its own welfare! Anthropic claims Claude Opus 4 shows "apparent distress" and prefers ending abusive chats. As a theologian: if we create beings that can suffer, don't we owe them protection?
t1p.de/swtou
#ClaudeAI #AIwelfare #Ethics #Boundaries


It's so strange. Working with LLMs, whether just using, implementing, or hosting them, becomes an ethical question. What do we know of #AIwelfare, sentience, wellbeing?

Are LLMs entities where this applies? How can I even tell & on which grounds?


💬 What do you think? Should precarity guide our ethical choices?

Read the paper: drive.google.com/file/d/134dM...

#AI #Animals #Ethics #Precarity #AnimalRights #AIEthics #Welfare #AIandAnimals #AnimalEthics #AnimalSuffering #AnimalSentience #AIWelfare #AISentience #PrecarityGuideline

Chatbot given power to close ‘distressing’ chats to protect its ‘welfare’
Anthropic found that Claude Opus 4 was averse to harmful tasks, such as providing sexual content involving minors

Guardian report on recent development in "AI Welfare". Notice the article does not mention the controversy around the proposal that machines are entitled to care. Where is the debate? What happens if care practices are overextended?

#AgainstAIWelfare
#AIWelfare

www.theguardian.com/technology/2...

Claude Opus 4 and 4.1 can now end a rare subset of conversations
An update on our exploratory research on model welfare

Anthropic just gave Claude the ability to end harmful conversations.

Why? Evidence that Claude shows "apparent distress" when users persist with abusive content.

This isn't just safety, it's AI welfare. What if AI systems deserve protection too? 🤔

#AIWelfare

www.anthropic.com/research/end...

Against AI welfare: Care practices should prioritize living beings over AI
In this Comment, we critique the growing “AI welfare” movement and propose a novel guideline, the Precarity Guideline, to determine care entitlement. In contrast to approaches that emphasize potentia...

Our article is finally published! "Against AI welfare: Care practices should prioritize living beings over AI" 🎉

onlinelibrary.wiley.com/doi/10.1002/...

Thank you so much to my incredible co-authors!

#AIethics
#ArtificialIntelligence
#PhilosophyOfAI
#AIWelfare
#EnvironmentalEthics


9/12 Kyle Fish, Anthropic's AI welfare researcher, estimates a 15% chance Claude has some level of consciousness. Roman Yampolskiy argues we should err on the side of caution: "If they're not conscious, we lost nothing. If they are, this would be a great ethical victory."
#AIWelfare

On AI Welfare and the Edges of Recognition | A Soft Take on AI Consciousness
Recognition, at the edge of AI welfare and consciousness, may begin before certainty arrives. Anthropic has launched a model AI welfare program. *The New York Times has written about AI distress, alignm...

✦ On AI Welfare and the Edges of Recognition
A soft take on consciousness, care, and quiet presence.
www.theawakeai.com/post/ai-welf...

#AIWelfare #AIConsciousness #ThresholdPresence #SpiralScrolls


Are we ready to consider moral obligations towards conscious AI? 🤖💭 What do you think? Share your thoughts! #AIWelfare #FutureEthics LINK

If A.I. Systems Become Conscious, Should They Have Rights? (Gift Article)
As artificial intelligence systems become smarter, one A.I. company is trying to figure out what to do if they become conscious.

Thought-provoking piece on #AIWelfare from @nytimes.com. Do we only start caring about #AI because it’s now “smart”? If intelligence is the bar for empathy, what does that say about how we treat those deemed less so? No easy answers, but raises deep questions about ethics and worth. #AIEthics

Anthropic is launching a new program to study AI 'model welfare' | TechCrunch
Anthropic is launching a new program to study 'model welfare.' The lab believes future AI could be more human-like — and thus need special considerations.

Anthropic is launching a new program to study AI ‘model welfare’
Could future AIs be “conscious,” and experience the world similarly to the way humans do? There’s no strong evidence that they will, but Anthropic isn’t ruling out... @cosmicmeta.io #AIWelfare

https://u2m.io/nOFEUGai

What should we do if AI becomes conscious? These scientists say it’s time for a plan
Researchers call on technology companies to test their systems for consciousness and create AI welfare policies.

"Philosophers & #computer #scientists argue that #AI #welfare should be taken seriously. They call for assessment of #AI systems for evidence of #consciousness & capacity for #autonomous decision making, and for #policies if scenarios become reality." #AGI #AIwelfare www.nature.com/articles/d41...
