This should be illegal. At any company.
“A whole civilization will die tonight” is the most vile thing a US president has ever said, certainly in the post-1945 era, when presidents have had the power to kill civilizations with the dropping of a bomb. I’m staring into the darkness. May this not be one of the most fateful days in human history.
In all seriousness, one thing that might be worth doing today:
Tell your Reps and Senators to call Adm. Richard Correll, the Commander of U.S. Strategic Command, which controls the nation's nukes, and remind him of *his* unique responsibility to refuse an illegal order.
The thing I have increasingly come to appreciate, which intimately guides my work (or attempted work) as a journalist, is that we are all moral philosophers: we are all deciding, every day, what we should do, on the basis that some decisions are harmless and some are as bad as setting children on fire.
"I think this administration is trying to justify the war the same way Jackson Pollock used to paint. You just throw a bucket of reasons up against the canvas and hope the result looks good."
www.pbs.org/newshour/sho...
Whatever real journalism you like, please pay for it. The stuff you don't pay for is transforming into propaganda with incredible speed.
New story out at Foom, where I've written about how researchers of military AI have increasingly been shifting to consider strategic impacts, such as whether AI will lead to the start of new wars.
www.foommagazine.org/militaries-a...
This is ridiculous. Really bad from @arstechnica.com. This cannot happen.
theshamblog.com/an-ai-agent-...
You are arguing that digital ads are good because they empower Google, which you claim is good: ".. enable a powerful technology to become a global utility." But Google is not a public good or a public utility. It is profit-driven. You are conflating providing utility with being a public utility.
Strong disagree. You have to ask what ads actually are and whether they are good for society. Historically, when ads were sold by institutions like newspapers, they were part of an ecosystem that explicitly valued public service. Outside of such value systems, they are *not* good, self-evidently.
AGI is already here.
This is something I have felt for a while now, and I'm glad to see a more formal argument put forward. There are important deficits with AI, as described, for example, in arxiv.org/abs/2510.18212. But basically, it's here. And we need to deal with that.
www.nature.com/articles/d41...
In historical AI safety research, one of the 'grand catastrophic risks' that was always talked about was an intelligence explosion that wasn't controlled or regulated. Now, research toward such an explosion is increasingly pursued ... without a safety component.
www.foommagazine.org/is-research-...
Great to see our "From Language to Cognition" work featured in @mordecwhy.bsky.social's latest piece on language models and the brain. Glad to contribute to the conversation!
www.foommagazine.org/language-mod...
FOOM / NEW STORY OUT: "The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: That AI programs can show strong resemblances to large-scale brain regions."
www.foommagazine.org/language-mod...
It often feels like, in a mental health or depression context, whatever it is that is wrong with me is so deeply entrenched that neither I nor anyone else would ever be able to figure out what it is.
In my latest for Foom, where I'm trying to provide free, high-quality, independent reporting on AI safety, I wanted to interview someone who could help me understand the challenging internal dynamics of the community. This was @ilex-ulmus.bsky.social.
www.foommagazine.org/the-moral-cr...
"When dealing with the Big Cats, who are literally killing machines, there is always a distinct energy or electricity when you are in their presence." -Leif Cocks.
Probably also the best description of humans.
This kind of statement triggers me, and it's why science journalists and neuroscientists need to speak out, loudly, about the analogies discovered between DNNs and cortices. People need to know we are not just dealing with an MS Word technology here.
www.theguardian.com/lifeandstyle...
Do it
Absolutely right. This is as big a risk right now as anything else. It's flabbergasting that the current government thinks anyone is being fooled about this.
New article covering recent findings from October: Models that maximize business performance in realistic role-play scenarios are also more likely to inflict harms.
www.foommagazine.org/leading-mode...
What does it mean when the study, the study's reviews, and all the other studies citing the study, which is actually a good and interesting study, all show clear signs of AI writing (without acknowledgement), lol
Kinda fits with the picture that both DNN models of brain regions and high-capability DNN models typically require high-dimensional spaces, I guess? (shameless plug)
www.foommagazine.org/scientists-m...
Lol as long as it's cute we're good
Good points. I think most intervention and pushback against the status quo here is probably good. I might nitpick that 'fully automating cancer cures' and 'accepting job displacement' could both be contested, the first as oversimplified, the second for the reasons you mentioned. Complex topic!
Is any alignment research valid if it does not engage with the fact that we are surrounded by highly misaligned technologies, in highly misaligned societies, created by highly misaligned individuals?
Yes, BCIs seem to need stringent regulation, probably more than any other technology ever invented. The regulatory vacuum for AI does not inspire optimism. We are going to need serious public-interest advocacy from neuroscientists if this is going to be anything besides severely dystopian.