If you knowingly use a piece of software that is fundamentally designed, from the ground up, to fabricate text, you are not producing "hallucinations", you are committing criminal fraud, and it should be described as such in media coverage
Posts by Matthew Chalmers
Google marks all Gemini-created text with its SynthID text watermarking system. They have released public detectors for media. They have not for text, and they have not publicly said why they haven't. Regulators and legislators should ask them.
They don't know meme: "They don't know I have a fully scalable agentic workflow"
"They don't know I have a fully scalable agentic workflow"
I am forever saying that if refusal isn't a live option in any decision making process about "AI", then no ethical practice is possible. You've got to be able to stop if the thing is unacceptable.
1 yr ago I wrote a massive thread on @iea.org's huge AI / energy report.
WELL it's one year later and they've released an update, but notably, there is no chatbot. Well....I'm going to read it anyway, and you're all going to get spammed with notes :)
that means a NEW ULTRATHREAD 🧵
Google to tap into gas plant for AI datacenter in sharp turn from climate goals Texas power plant would emit 4.5m tons of carbon dioxide per year, more than that of the entire city of San Francisco
"Asked by Axios last week at an energy conference in Houston about how natural gas jives with the company’s clean energy goals and overall strategy, Google’s head of advanced energy, Michael Terrell, said: “We don’t have anything to say on that.”"
www.theguardian.com/technology/2...
[Screenshot of a chatbot exchange, 9:44 PM: "What is 493920 times 392930" → "493920 × 392930 = 194,198,265,600"]
[Screenshot of GHGs and energy]
[Alt text: the exact product of 493920 × 392930 is 194,075,985,600; the chatbot's answer (≈1.94198E+11) is 100.06301% of the true value]
Okay I'm trying out this new chatbot energy / emissions calculator, it seems to align closely with other estimates I've done
1.32 watt hours for a calculation - to deliver the wrong answer (I consider any deviation > 0 to be a total failure for a calculation)
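The screenshot's numbers are easy to verify; here's a minimal check (both the chatbot's answer and the exact product come from the post above):

```python
# The prompt and both figures are from the screenshot: the chatbot
# answered 194,198,265,600 for 493920 x 392930.
a, b = 493920, 392930
chatbot_answer = 194_198_265_600

exact = a * b                       # 194,075,985,600
error = chatbot_answer - exact      # 122,280,000 too high
pct = chatbot_answer / exact * 100  # ~100.063% of the true value

print(f"exact: {exact:,}")
print(f"error: {error:,} ({pct:.5f}% of true value)")
```

The ~0.063% overshoot matches the 100.06301% figure in the alt text.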
antarctica.io/ai-wattch
7. Generative AI Will Not Help Us Fight Climate Change You might have heard the claim that AI will help us solve climate change, issuing innovative solutions like a magician pulling a rabbit out of a hat. Tech leaders have pointed to the possibility as though it could outweigh data centers’ environmental harms. But the truth is, claims of AI’s usefulness to the climate fight are vastly overstated. As a recent report explains, “virtually all stated climate benefits relate to ‘traditional’ AI” — not the generative tools like chatbots and image creators that are driving the industry’s recent growth. Moreover, claimed benefits are seldom based on research or real-world uses. In other words, Big Tech says AI is an important tool to fight climate change, while virtually none of the generative AI driving the boom is actually being used to fight climate change.
Thank you @foodandwater.bsky.social for citing my latest report :)
"AI for climate" joins CCS, CDR, fossil hydrogen and SMRs / fusion as tactical hollow promises deployed to mute the noise of the carbon bomb going off tody and tomorrow -->
www.foodandwaterwatch.org/2026/04/10/d...
“So far I am not seeing the revolution in education,” admits chief learning officer of organization responsible for generating much of the hype about genAI in education www.chalkbeat.org/2026/04/09/s...
A GENTLE REMINDER: we are trying to eliminate fossil fuels because using them kills us.
If fossil fuels were cheap (they're not) or reliable (they're SO NOT), it would still be urgent to get rid of them because their intended use destroys our life support systems.
Almost a year ago, I was described in the FT as "a Cassandra with a wry grin and twinkling eye", and was entertained, because Cassandra (famously) was right.
It's actually not fun, though, to watch the world do things you've been warning against:
www.newstatesman.com/technology/2...
We publish a major @citizenlab.ca report on Webloc, an ad-based mass surveillance system that monitors the movements and personal characteristics of hundreds of millions of people globally, based on data obtained from mobile apps and digital advertising. Customers include ICE, El Salvador, and Hungary.
A French coder estimates that using Claude Code will account for 1 tonne of CO2 over a year - 10% of the average French individual carbon footprint.
We have never encountered emissions-intensive software like this, ever before (with the sole exception of Bitcoin)
www.linkedin.com/posts/gwitte...
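Taken at face value, the two figures in that post imply an average French per-capita footprint of about 10 tonnes of CO2 per year (the 1-tonne estimate and the 10% share are from the linked post; the division below is just a consistency check):

```python
# Consistency check: if 1 tonne of CO2 is 10% of the average French
# individual footprint, the implied average footprint is 10 tonnes/year.
claude_code_tonnes = 1.0   # estimate from the linked LinkedIn post
share_of_footprint = 0.10  # "10% of the average French individual carbon footprint"

implied_avg = claude_code_tonnes / share_of_footprint
print(f"implied average footprint: {implied_avg} tonnes CO2/person/year")
```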
16 GW of power deals announced by 5 hyperscalers in 2026
More evidence that Big Tech is locking us into a new generation of fossil fuel infrastructure and destabilizing our climate: the 5 largest hyperscalers have announced 16 GW of power deals this year, and only 3 GW have been for clean power.
Doing a basic calculation using a chatbot instead of a calculator uses between 2.5 million and 62 million times more energy. And using chatbots for calculations is *extremely common* - the companies have marketed these systems as general purpose, everything-apps!
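A quick sketch of what that ratio implies, assuming the ~1.32 Wh per chatbot calculation figure from the earlier post; the calculator energies below are just the per-operation values implied by the stated 2.5M–62M range, not independent measurements:

```python
# Back-of-envelope: per-operation calculator energy implied by the
# claimed 2.5 million to 62 million times ratio, given ~1.32 Wh per
# chatbot calculation (figure from the earlier post in this thread).
CHATBOT_WH = 1.32
ratio_low, ratio_high = 2.5e6, 62e6

calc_high_wh = CHATBOT_WH / ratio_low   # calculator energy at the low end of the ratio
calc_low_wh = CHATBOT_WH / ratio_high   # calculator energy at the high end of the ratio

print(f"implied calculator energy: {calc_low_wh:.2e} to {calc_high_wh:.2e} Wh per operation")
```

That puts a single calculator operation somewhere in the tens of nanowatt-hours to fractions of a microwatt-hour, which is why the multiplier is so enormous.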
Meanwhile the European federation of teachers' unions is calling yours out. See below a useful book by our union to hold yours to a higher standard.
@etui.bsky.social @etuce.bsky.social
@aob.nl @ucu.org.uk
PDF for whole book here: www.csee-etuce.org/en/item/7245...
bsky.app/profile/oliv...
Here’s another example. I don’t know how to code, but I used AI to make a bathroom pass app. I explained to ChatGPT that I wanted a pass system in which a student scanned a code and received an email pass, and that I needed a spreadsheet at the end of the day that told me when and where students had gone. I asked ChatGPT to write it for Google’s Apps Script, so I was able to create the app without any conceptual knowledge of what I was doing. This does raise some issues concerning accuracy and especially long-term maintenance. I am trying to be more intentional about what AI generates. I appreciate that AI lets me build things I couldn’t have otherwise; I just want to be thoughtful about how I use it.
Honeychile if the issues raised for you by vibecoding a digital surveillance app for children's visits to the bathroom have to do with "long term maintenance", you could be making a hell of a lot more money at Palantir.
Pretty persistently frustrating that enviro opposition to data centres gets clumsily dismissed as "NIMBY" when, as you can see here, it's well-evidenced and packed to the brim with real-world examples of material harm.
And GP goes further than most in pointing out the end-goals of the system:
AI-driven manipulation comes in three forms, each tested in our research:
• Deepfake videos
• AI-generated misinformation articles
• Personality-targeted political ads
We ran multiple preregistered experiments to see if warnings protect people.
Spoiler: They largely don't.
2/10
A tweet reading 'big data is a secular inductivist cult with the belief that a critical mass of empirical information leads to a theoretical chain reaction'
I always loved this quote by Jan de Leeuw (from the before times over yonder)
Some first-rate science writing: For this story, @jdrakephd.bsky.social carefully read our recent paper and then we spent a very fun 90 minutes or so talking on Zoom. His article gets right to the heart of our model, explains it clearly, and then explores why it will matter in the future.
This is one of the studies I'm most excited about right now. Wish I could watch the launch, but I'll be traveling. But YOU should check it out.
Mentioned Lana's study as a great example of taking other humans seriously in this piece.
peoples-things.ghost.io/why-do-peopl...
A new seasonal fire byelaw for the Cairngorms National Park will come into force on Wednesday. From 1 April to 30 September each year, campfires and barbecues will not be permitted in the National Park.
www.walkhighlands.co.uk/news/cairngo...
Jon Hartley (@Jon_Hartley_ on X): "Another update to our Generative AI US adoption time series results from our paper 'The Labor Market Effects of Generative Artificial Intelligence': we find LLM adoption at work in the US fell over the past quarter (while still up substantially from a couple years ago)."
[Chart: Fraction of U.S. Labor Force Using Generative AI At Work, 0–100%, May 2022 to Oct 2026, comparing the Pew Survey (ChatGPT use), the Bick, Blandin, Deming Gen AI Survey, and the Hartley, Jolevski, Melo, Moore Gen AI Survey; the public release of ChatGPT is marked.]
I like how every study that tries to prove AI is being adopted at scale is like “jobs that AI might be able to do a small amount of are sort of affected” and every other study on AI use is “adoption is low” and “it doesn’t really work reliably or in a way with measurable outcomes”
"The complete and utter failure of the metaverse is a reminder not just of the fact that the future Silicon Valley is force feeding us is not inevitable, but that quite often these oligarchs quite simply cannot relate to real people."
www.404media.co/rip-metavers...
I’ve spent the last couple of weeks talking to people about environmentally sustainable approaches to AI development, and the absolute raging cognitive dissonance of the huge under-investment in sustainable approaches compared to this kind of all-in hyperscaler data centre bonanza is wild