Didn't think I would be reposting Pope Leo today, but here you go
Posts by Olivier Driessens
Microsoft and other US tech companies successfully lobbied the EU to hide the environmental toll of their data centers, Investigate Europe reports in collaboration with Tech Policy Press and other media partners.
“The researchers found that 40% of workers had encountered workslop within a month, and then spent an average of 3.4 hours a month dealing with it – which the study estimates adds up to $8.1m in lost productivity for a 10,000-person organization.”
www.theguardian.com/technology/2...
Screenshot of article entitled "A breakup letter with media studies" by Nabil Echchaibi
"Unsettling a field, a theory, an epistemology seems so insignificant when genocide has become ambient, when the field we operate in has chosen not to reply, to defer their outrage to a safer future, to wait for things to blow over."
doi.org/10.1093/ccc/...
Very powerful piece
@icahdq.bsky.social
Wikipedia now has higher standards than all universities
"AI cannot coexist with education. It can only degrade it."
Everything old is new again.
Except this isn't that old, and we should really listen to the lessons of the past 25 years.
Most of the solutions to these problems aren't going to be technical. They are going to be normative and cultural and political, just like the social media questions were.
A California jury just found Meta and YouTube negligent for designing their products to be addictive to children: www.washingtonpost.com/technology/2...
The Trump administration has agreed to pay TotalEnergies almost $1bn to pull out of offshore wind in the US and invest instead in oil and gas production.
Taxpayers’ money to ensure continued demand for oil and gas.
Complete madness.
www.ft.com/content/ae51...
"ChatGPT, one of the most widely used models, covered distinctive content in 54% of responses but almost never credited the originating newsroom." www.niemanlab.org/2026/03/chat...
kind of seems like the people saying data center energy use is overstated were full of shit
Seeing the same burial image on social media, others turned to X’s AI assistant Grok to check its veracity. Like Gemini, Grok will breezily assure you the photo is not from Iran at all – although it lands on a different date, disaster and location. The image is “from Rorotan Cemetery in Jakarta, Indonesia – a July 2021 stock photo of Covid mass burials. Not Minab,” it says. In both cases, the AI answers sound sure: they don’t equivocate, and even provide “sources” for the original image, should you choose to check them. Follow the thread to examine those, however, and you’ll begin to hit dead ends: either the image doesn’t appear at all, or the link provided is to a news report that doesn’t exist. For all their impression of clarity and precision, the AIs are simply wrong.

The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location, and it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage – none of which, experts say, show signs of tampering or digital manipulation.

The “factchecks” by Gemini and Grok are just one example of a tidal wave of AI-generated slop – hallucinated facts, nonsense analysis and faked images – that is engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied – as well as heralding alarming weaknesses as people increasingly rely on AI summaries for news and information.
To everyone out there who defends and encourages reliance on generative AI: I want you to explain to me how software systems that do this are not just defensible but something good and to be encouraged. Go on. Explain it to me, right now.
www.theguardian.com/global-devel...
link to chapter: link.springer.com/chapter/10.1...
Feel free to ask for the PDF if you cannot access the full text.
Flyer for The Need to Rename Tech, with discount code
Happy to have a chapter in this fabulous book co-edited by Crystal Chokshi and Robin Mansell. 15 chapters argue how and why to rename tech, including the smart city, ChatGPT, and, in my case, predictive technologies.
Beautifully illustrated by Doan Truong.
link.springer.com/book/10.1007...
🌐🏫 #AI is rapidly transforming #HigherEducation, but who governs it?
📚 At last week's joint ETUI-ETUCE book launch, scholars, researchers and trade unionists explored how artificial intelligence is reshaping universities.
🔗 Access the new book here: etui.org/437
@etuce.bsky.social
@magpische.bsky.social
yesss shove it in my veins
Chapter 11
Strategies for organising against AI in higher education
Robert Ovetz and Lindsay Weinberg
> the future of AI in higher education is not a foregone conclusion. Academic workers can organise to contest, refuse and ban extractive AI.
Although AI consumes a lot of water and energy, its biggest environmental impact is that it reproduces values that got us into the climate crisis in the first place: extraction, productivity, and efficiency.
I spoke to @paulschuetze.bsky.social about his research on this!
Why Language Models Hallucinate. A paper released by OpenAI in September of 2025.
Back in September, OpenAI released a paper showing that ChatGPT will always make things up.
Not sometimes. Not until the next update. Always. It's how the system fundamentally works. Which means there is no "fix."
Tired of AI hype posts? You might like my sober assessment of whether AI-generated summaries are suitable for studying and research. Spoiler alert, they are not.
The text is primarily aimed at students and researchers, but it has much broader relevance, so share freely: www.tue.nl/en/our-unive...
My university might be OK with it de-skilling students but I am certainly not going to let it de-skill me
“Rejecting or resisting a commercial technology designed to attempt a mass wealth transfer and to erode public institutions is a valid political position.”
It's U.S. foreign policy that the whole world must serve up our data to the AI giants that are now an arm of the U.S. government.
www.reuters.com/sustainabili...
The launch of every piece of AI-driven surveillance/security/criminal justice software basically goes like this:
AI COMPANY: this is an error free, bias free, state of the art, highly intelligent software system
Two Weeks Later: …aaaand it’s racist