This quote from gwern.net/doc/technolo...
BUTTERFIELD: It's not games that are so interesting to me, it's play as an excuse to interact with people socially.
Posts by Weston Renoud
Was talking with a friend about play, and how I use it in work. Which led back to talking about Stewart Butterfield and Game Never Ending, and thinking about GNE/Flickr/Glitch friends. How are you doing @vanlal0606.bsky.social ?
this wikipedia editor is orbiting the moon right now!
This is terrifying. It's being sold by your neighbors as a pyramid scheme, and it's spying on you, possibly for nation state actors.
every time i write about wage theft i do a double take because it's so crazy
corporations steal more than $50 bn from workers' paychecks each year, illegally paying workers less than they earned for their labor
that significantly exceeds the combined losses from larceny, burglary, & vehicle theft
Table of contents page from an amicus curiae legal brief. Sections include: Table of Authorities (p. iii), Interest of Amici Curiae (p. 1), Preliminary Statement (p. 1), Argument (p. 1) with three main sections — I. 'Even Without Full Autonomy, Militarized AI Poses Catastrophic and Irreversible Human Rights Risks' (p. 2) with subsections on human input failing to mitigate lethal AI mistakes (p. 3) and AI's ability to facilitate war crimes (p. 5); II. 'The Department of War and Anthropic Are Jointly Engaged in War Crimes' (p. 6); III. 'Attacks against Civilians and Civilian Infrastructure Constitute War Crimes under U.S. and International Law' (p. 8) — followed by Conclusion (p. 11) and Appendix (p. 13).
Tech Justice Law Project, Abolitionist Law Center, and Center for Constitutional Rights file Amicus brief in Anthropic vs US Dept of War, supporting neither party, and demonstrating that the Dept of War and Anthropic are jointly engaged in war crimes:
techjusticelaw.org/wp-content/u...
The Carls series by @hankgreen.bsky.social was a surprisingly good antidote for the AI existential dread I've been feeling.
it’s weird how company cultures can differ. my job won’t let me say i work remotely, i *have* to say “i have abandoned home office.” takes all kinds, i guess.
Great new preprint by a student in my lab, demonstrating how many benchmarks and safeguards of LLMs are ill-conceived and unreliable. Excellent thread breaking down the core findings.
arxiv.org/abs/2603.23485
"I confirm that Al / LLMs have not been used in the creation of this presentation nor abstract. If your work has been done using Al/LLMs, there is no basis to give away rights under CC as there is no way to know for sure that the machine was not trained on copyright material."
Was really pleased to have to confirm this in the process of submitting a presentation abstract to #FOSS4GEurope 2026.europe.foss4g.org
Marchione, for his part, is betting this is exactly what will happen. "Imagine everyone in your company is smarter, and sleeping better, and better looking, and more energetic, and fitter. Do you think you're going to be the one dumb, ugly, tired person? No," he says. In his view, the rise of peptides is Darwinian, an evolution of our basic human instincts around competition. Once improvement is visible, it becomes imitable. Once it becomes imitable, it becomes competitive. And once it becomes competitive, it becomes compulsory. "What this means is that all of these technologies will reach an inflection point," Marchione says. Pretty soon, you might not have a choice.
Eugenics is *back bay-bee.
*It never left
You know how in a library no one’s trying to sell you anything?
That’s how the internet was.
Added context, I'm also in a highly conflicted position. I'm for AI as tech. I was watching a YouTube video about vector search with LLMs minutes ago. But when talking about "AI" as in the main chat bot providers I suddenly feel like a luddite. There is so much uncritical hype.
"I switched to the hallucinating plagiarism machine that's not allowing direct murder." Ethical you say?
Seeing the same burial image on social media, others turned to X’s AI assistant Grok to check its veracity. Like Gemini, Grok will breezily assure you the photo is not from Iran at all – although it lands on a different date, disaster and location. The image is “from Rorotan Cemetery in Jakarta, Indonesia – a July 2021 stock photo of Covid mass burials. Not Minab,” it says.

In both cases, the AI answers sound sure: they don’t equivocate, and even provide “sources” for the original image, should you choose to check them. Follow the thread to examine those, however, and you’ll begin to hit dead ends: either the image doesn’t appear at all, or the link provided is to a news report that doesn’t exist. For all their impression of clarity and precision, the AIs are simply wrong.

The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location, and it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage – none of which experts say show signs of tampering or digital manipulation.

The “factchecks” by Gemini and Grok are just one example of a tidal wave of AI-generated slop – hallucinated facts, nonsense analysis and faked images – that are engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied – as well as heralding alarming weaknesses as people increasingly rely on AI summaries for news and information.
To everyone out there who defends and encourages reliance on generative AI: I want you to explain to me how software systems that do this are not just defensible but something good and to be encouraged. Go on. Explain it to me, right now.
www.theguardian.com/global-devel...
I seem to recall a Belgian getting exactly 4096 extra votes, only identified after a recount... scotopia.in/journal/jour...
I tell people that if the government is covering up the existence of aliens, NASA is absolutely not involved, because astrobiologists can't keep a secret to save their lives
“The Paperclip Maximizer” that destroys the world.
“The AI isn’t hostile to humans; it’s just indifferent.“
www.makeuseof.com/what-is-pape...
You can't stop the music youtu.be/1peEfo4k7Go?...
A screenshot of a group chat. At top is a picture of a front page from the print edition of The Onion, with a photo of Donald Trump and the headline “Trump Ratchets Up Rhetoric Against Snoopy.” Below that are 2 text messages from the same sender: “someone please let Tim Onion know that my 6YO is AGHAST at this” “He was like ‘I need to read that newspaper!’ And I had to be like uhhhhh no”
@bencollins.bsky.social Area 6-Year-Old Would Like To Cancel His Subscription
NEW: Meta’s director of AI safety, supposedly the person at the company who is working to make sure that powerful AI tools don’t go rogue and act against human interests, had to scramble to stop an AI agent from deleting her inbox against her wishes...
The utterance of "A.I.'s inevitability" is one of the most stark pure performatives I've seen in my time working in higher ed. Every time it is uttered, it is clearly not reporting a fact about the world but instead actively trying to create the reality it narrates. We can and must refuse.
This 53-year-old man, who lived in my neighborhood w/no criminal record & who had been in the US for 30 years, was detained and thrown into a cold cell for days, caught pneumonia and COVID there and deported.
He finally died.
I am so angry.
www.kptv.com/2026/02/21/n...
DAIR even develops a lot of those types of tools to study social media harms.
The lumping of so many things under the "AI" umbrella makes discourse about the specific things that are harmful vs useful difficult.