Super excited to finally be able to share a project I've been working on for quite some time: a new paper on the Singularity Hypothesis! We argue that there are more good arguments for it and fewer good arguments against it than a lot of philosophers assume.
philpapers.org/archive/KIRR...
Posts by Cameron Domenico Kirk-Giannini
Philosophers and AI folks: I'm writing a paper on the singularity hypothesis, and I'm looking for some recent (i.e. since late 2024) expressions of skepticism about it from philosophers or ML folks that I can quote. The more well known the person, the better! Any ideas?
Social philosophers! Check out this short new paper in which I revisit my dilemmatic account of gaslighting and think about what kind of evidence should lead us to doubt our epistemic competence in different domains.
philpapers.org/rec/KIRGAE
Excited to share a new review paper I wrote with William D'Alessandro about the range of exciting philosophical and technical work currently being done on AI safety! Forthcoming at Philosophy Compass.
philpapers.org/archive/DALA...
Third, in "AI safety: A climb to Armageddon?" Herman Cappelen, Josh Dever, and John Hawthorne ask a question that gets far too little attention in AI safety: Could the work we're doing simply be ensuring that safety failures will be worse when they occur?
link.springer.com/article/10.1...
Those without institutional access can download Sven's paper here:
cd.kg/wp-content/u...
Second, in "Off-Switching Not Guaranteed," Sven Neth describes a number of important problems for Stuart Russell's idea of provably beneficial AI.
link.springer.com/article/10.1...
First, in "Bias, Machine Learning, and Conceptual Engineering," Rachel Rudolph and colleagues explore the connections between LLM training and conceptual engineering, with special attention to questions of bias.
link.springer.com/article/10.1...
Excited to share *three* important new papers from the special issue on AI safety!
It's finally out! 🎉 Click to find out whether YOUR AI assistant is a moral patient!
In all seriousness, though, this is an important project and I hope it helps advance discussion of the possible moral properties of artificial systems.
link.springer.com/article/10.1...
My paper "How to Solve the Gender Inclusion Problem" is now typeset and officially citable!
www.cambridge.org/core/journal...
Those without institutional access can find the paper here: www.cd.kg/wp-content/u...
Excited to share this paper by Christian Tarsney from the special issue on AI safety I'm editing. It defends a useful new account of deception and manipulation in AI systems.
link.springer.com/article/10.1...
We argue that the best way to think about AI safety has it include *both* work on catastrophic risks and work that's traditionally been situated within AI ethics.
This matters because disciplinary boundaries affect who's treated as an expert and who gets to help set policy.
By now you've probably heard about AI safety. But have you ever wondered what AI safety actually *is*, or how it's related to AI ethics?
Well, you're in luck! Jacqueline Harding and I have a new paper answering these questions.
philpapers.org/archive/HARW...
Our goal in the paper is to provide a readable introduction to the main issues in this area, together with references to relevant literature and some of our own takes on the state of the debate. We hope the paper will serve as a go-to reference on AI risk arguments for the next couple of years.
Philosophers and AI folks: I'm excited to share a new paper on AI and catastrophic risk, coauthored with Adam Bales and Bill D'Alessandro, which is now forthcoming at Phil Compass!
philpapers.org/rec/BALAIA-5
Hello, world!