tandfonline.com/doi/full/10.... What's up with AI and energy? In this article we explore AI's growing hunger for energy and the deepening intertwining of the energy industry with AI infrastructure investment through power purchase agreements, especially for nuclear power
Posts by Fabian Ferrari
What if “sovereignty” does not capture how power works in AI value chains?
Join us in Berlin for "Sovereignty Exhaustion: New Vocabularies for the Geopolitics of Global AI Value Chains", co-organized with @schneiss.bsky.social and @grohmannrafael.bsky.social.
More here: tinyurl.com/y4d7e7xy
I'm pleased to share my contribution to @politicalgeography.bsky.social's AI forum: "The Latent Subject: AI, Recognition, and the Politics of Latent Space": www.sciencedirect.com/science/arti...
So happy to have @profgillian.bsky.social (alongside other great speakers) joining this workshop on ‘Techno-geographies of AI’ in Milan on Friday 27 March. Co-organised with @ludovico-rella.bsky.social and Darío Negueruela del Castillo.
@fabianlferrari.bsky.social
In early 2024, researchers were already heavily using AI for work - Survey of 816 verified authors via Semantic Scholar - 81% of researchers reported using LLMs in their workflow - Top uses: information seeking & editing - Rare for data tasks: 69–73% never use LLMs for data cleaning or generation
The measurement problem: LLM content has risen sharply in both review and non-review papers. Review papers do have a higher prevalence rate, but non-review LLM papers outnumber review papers 6x. CS.CY (Computers & Society) faces potential 50% cuts, while CS.CV (Computer Vision) would face only 3%
Interdisciplinary researchers — who move between cultures and write in the “borderlandsˮ — are experts at adapting their writing. LLMs currently are not.
Private information can appear in unlikely prompts
I gave a short talk at Cornell yesterday on my science-of-science work investigating how AI is being used by researchers and how we should go about crafting policies in response.
Blanket policies are hard, privacy is important, we need more measurement.
Slides: drive.google.com/file/d/1gNTK...
Abstract of article: “ABSTRACT Governing Artificial Intelligence (AI) is difficult, in part, because AI systems never stand still in any one place. They are usually made by private companies, hidden within proprietary infrastructures, spanning jurisdictions, behaving in ways that are difficult to predict, and talked about in messy discourses of hype and panic. I suggest here that all this dynamism and uncertainty could be tackled by understanding AI and its governance as multi-scalar phenomena. Drawing on DiCaglio's idea of a 'scalar view,' defining AI as a scalar media technology, and tracing journalism's encounters with Generative AI as scalar collisions - across practices, organizations, data, audiences, and engineering - I argue that AI governance is 'scale work', and that multi-scalar governance offers new ways to understand Generative AI and its stakes.”
New paper!
In @icsjournal.bsky.social I argue that governing #AI means “scale work” — the labour of stabilizing AI *across* relationships that are usually tackled in isolation.
I use journalism’s GenAI encounters as a case study, connecting siloed AI collisions
www.tandfonline.com/eprint/T7WWF...
Join us online this Thursday!
Very glad to take part in the "Humanities in Times of Geopolitical Turmoil" seminar series organised by @utrechtuniversity.bsky.social and @fabianlferrari.bsky.social.
📆 Follow the thing AI: 20/11/2025: 15:30 - 16:30
@oii.ox.ac.uk
cdh.uu.nl/event/cdh-on...
Happy to be part of this amazing speaker series organized by Utrecht University, especially @fabianlferrari.bsky.social . I'll talk about Latin American Critical AI studies. I'll be in such great company with @nsrnicek.bsky.social and @anavaldi.bsky.social
cdh.uu.nl/event/cdh-on...
What are the lessons of social media governance for generative AI governance?
Check out the third article of our @icsjournal.bsky.social special issue by @pmnapoli.bsky.social and Suher Adi.
www.tandfonline.com/doi/abs/10.1...
Have a new piece on social media's lessons for the governance of generative AI in Information, Communication & Society, co-authored with Suher Adi.
www.tandfonline.com/eprint/MBACG...
If OpenAI shifts its policies, why and how do other platforms follow suit?
The second article of our special issue, written by Chris Chao Su and Ngai Keung Chan, is now online!
www.tandfonline.com/doi/full/10....
Probably the most surprising thing about this confrontation is that it took more than 180 days to happen
"In Silicon Valley, some investors ask whether the AI infrastructure boom will become what fibre optic cables were to the dotcom era."
www.ft.com/content/0e24...
Who decides what counts as theft when AI copies your style?
Check out the first paper of our special issue on generative AI governance co-edited with @joannekuai.bsky.social!
Elon Musk’s DOGE is tearing through the US government with disastrous consequences.
But beyond US borders, the extreme right is gearing up to push its own DOGE-inspired austerity campaigns in countries around the world.
It’s great to see this piece published in Platforms & Society:
journals.sagepub.com/doi/10.1177/...
It nicely brings critical platform scholarship into conversation with the literature on state capitalism and techno-colonialism, through a rich case study set in post-pandemic Greece.
A narrow regulatory focus on misinformation distracts from addressing structural problems in the AI industry.
My chapter in the FEPS Progressive Yearbook 2024.
bit.ly/ai-infrastru...
Together with Joanne Kuai, I'm editing a special issue on generative AI governance in Information, Communication & Society.
Deadline for abstracts: 15 February 2024
Details: bit.ly/generative-a...
Our new paper in New Media & Society: "Observe, inspect, modify: Three conditions for generative AI governance"
journals.sagepub.com/doi/10.1177/...
In the meantime, over the past year Freedom House found that at least 16 countries used generative AI to create content intended to mislead the public. The earliest tools were available only in English, limiting their use around the world. At the same time, Freedom House notes that investigators in this realm face the same problem the Slovakian fact-checkers did: tools for assessing the authenticity of content posted online are limited and often inaccurate. They believe the true number of countries experimenting with synthetic media is likely higher than 16.
At least 16 countries have already experimented with using generative AI to mislead their citizens: www.platformer.news/p/how-author...