I find a lot of discourse on AI and Tech Sovereignty to be very... circular. Great stuff from @chionwurah.bsky.social
Posts by Gaia Marcus
.@chionwurah.bsky.social on the need for coherence on tech 'sovereignty'.
"We need to understand what we can own, control, & lead by ourselves; what we can access that is in the hands of allies we trust; & how we manage what we must obtain from those we do not trust"
giftarticle.ft.com/giftarticle/...
Bloody hell. Researchers invented a disease, published two fake papers to see if LLMs would ingest them and kick them up as fact — and then it broke containment and all the major AIs bought in. Information pollution.
www.nature.com/articles/d41...
Deeply grateful to the UK AI and Robotics research community for this Leadership Award. So many wonderful researchers were honoured with me tonight - my own award would not be possible without my amazing @braiduk.bsky.social @technomoralfutures.bsky.social @edfuturesinstitute.bsky.social colleagues
Very thoroughly earned vallor (!) - @shannonvallor.bsky.social given the AI & Robotics Research Leadership Award!
Shannon Vallor's talk - ' The machine starts: taking stock of the AI and Robotics revolution '
It's a play on E.M. Forster's The Machine Stops - “THE MACHINE IS MUCH, BUT NOT EVERYTHING."
Shannon's call: WE NEED NEW AND BETTER AI/R STORIES:
Stories about AI and robots beyond GenAI chatbots and agents
Stories about humans using their own brains more with AI
Stories about AI and robots that tread lightly on our lives and minds
Stories about AI and robots that help mend social ties & trust
Stories about AI and robots that repair parts of our world & institutions
Stories about AI/R researchers and builders who ask and listen
Stories about people enabled to choose the kind of AI/R they want
Fabulous to hear from @adalovelaceinst.bsky.social board member @shannonvallor.bsky.social at tonight's Responsible AI Robotics and AI awards - I'm always so inspired by her morally crisp and targeted calls for AI that centres people, augmenting our agency and thinking, rather than supplanting it
TIL what the Acela is :)
And yes - very much agree!
I yet again would like to propose that we all carry (privacy-preserving, on-device, non-data-retaining) decibel monitors on our person. Especially when conducting - often sensitive - business meetings in open-plan and public places.
This sounds fun!
🧵 I recently spoke with Taylor Owen @theglobeandmail.com Machines Like Us podcast about an urgent question: How do you make a technology safe when the political will to govern it has evaporated? And what happens if we don't? www.theglobeandmail.com/podcasts/mac...
Modi offers his MANAV (or human) Vision for AI @ #AIImpactSummit
M: moral and ethical AI systems
A: accountable AI governance
N: national AI + data sovereignty
A: accessible and inclusive AI
V: AI should be valid and verifiable
What these principles mean in practice in India remains to be seen.
So excited about our new board appointments: Ed Humpherson, @mmitchell.bsky.social and @geomblog.bsky.social
With expertise ranging across AI research, computer science and public statistics, they are aligned with Ada's values, with a shared focus on the public interest, accountability, fairness and rigour
Governments regulate AI and deploy it across public uses. This AI Policy & Governance Working Group panel @ #AIImpactSummit examines accountability, procurement and safety when the state is both regulator and user. @gaiamarcus.bsky.social @ruchowdh.bsky.social @futureoflife.org impact.indiaai.gov.in
For people who care about that sort of thing, the membership of the National Data Library Expert Advisory Group has been published today - including me www.gov.uk/government/g...
You were very good and balanced, I thought
Listening to the BBC Today Programme - great to hear thoughtful discussion from parents on the consequences & pitfalls of a social media ban - & why we should be worried about adults' screen time, too (or maybe more). Very live to the question of where young people are supposed to go now, with the reduction of third spaces for them
if you’re passionate about AI accountability research and enjoy working in a vibrant lab with a multi-disciplinary team but not interested in doing traditional academic work, this position might be for you
I can choose my bank, I can choose my online supermarket, I can pick where I buy books and clothes and watch TV. I cannot choose whether or not to interact with the state; it is not the same as going shopping.
"One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation."
The Trump administration is engaged in norm destruction—breaking expectations about transparent governance and public oversight while installing new assumptions about how technological development should be directed. What it has advanced is not the absence of AI regulation but its rearrangement, often by caprice: intensive state intervention operating through industrial policy, trade restrictions, immigration controls, equity stakes in private firms (selected by the state), the redirection of research funding, and the strategic preemption of state authority. Many of these actions face legal challenge, and some may not survive judicial review. But the pattern itself—the systematic preference for executive discretion over deliberative process—reveals an approach to governance that will shape AI policy regardless of how individual cases are decided. This is not deregulation. Not in the least. It is hyper-regulation by other means.
In a new commentary in Science that I think I'll be referencing a lot on Tech Policy Press, Alondra Nelson (@alondra.bsky.social) says that while the Trump administration's approach to AI is widely understood as "deregulation," when you zoom out, that's not really what's going on. www.science.org/doi/10.1126/...
About the PhD: Audits and evaluation of AI systems - and the broader context that AI systems operate in - have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, done in a siloed and ad-hoc manner, with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as: What does it mean to represent "ground truth" in proxies, synthetic data, or computational simulation? How do we reliably measure abstract and complex phenomena? What are the epistemological or methodological implications of the quantification and measurement approaches we choose to employ? In particular, what underlying presuppositions, values, or perspectives do they entail? How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation. The candidate is also expected to integrate empirical (for example, analysis or evaluation of existing benchmarks) or practical (for example, executing evaluations of AI systems) components into the overall work.
are you displeased with today’s AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me
apply here: aial.ie/hiring/phd-a...
pls repost
I'd agree, but I'd say this is always the case. All technologies are a result of a series of decisions a series of people have made. In this case everything from the data models were trained on, to the capabilities that were prioritised, to (presumably?) fine-tuning, to releasing a tool without safeguards, to the decision to not have regulation that covers or partially covers entirely predictable harms.
I'd file having appropriate regulation and governance under "our ability to manage the risks" - regulation is essentially one of the tools for ensuring that those able to manage risks are held to do so. But 100% agree these aren't risks that can't be managed; they just aren't being.
Impossible on which axes? As in technically or politically or both?
🏆 This is a pivotal opportunity for the UK government to distinguish itself as a leader in effective AI governance, and build a regulatory system that prevents harms before they happen.
📖 Learn more about our polling on AI regulation here: Great (public) expectations | share.google/WlX20c8lmYRD...
Stat picture card - 89% of the UK public say it is important to regulate AI independently
📢 This isn’t an unpopular idea: nearly 9 in 10 people in the UK want independent AI regulation. Yet the current oversight of AI falls far behind that of other sectors (like aviation, pharmaceuticals and financial services), with no clear plans for improvement.
Ada graph of sector regulation, showing how absent AI regulation is compared to a range of other sectors: Aviation, Financial Services, Pharma, Food Safety. Foundation models/general-purpose AI lack real coverage beyond voluntary standards across: proactive risk monitoring, safety standards, independent standards, market entry authorisation, post-market monitoring, independent regulator, enforcement powers, accountability measures, transparency/reporting requirements, routes for redress
⚖️ We cannot stay ahead of these harms without robust AI regulation. Currently, we are chasing after individual tragedies and scandals, attempting to plug the gaps with existing laws and regulation (image).
This isn’t enough: we need to manage harms at the source, and not just manage the symptoms.
🤔Are these the AI futures you were hoping for?!
At Ada towers we've been doing a lot of reflecting on the recent Grok scandal.
It shows what happens when AI capabilities outpace our ability to manage their risks, & when people and societal impacts aren't front of mind for those developing tech.
🧵