
Posts by Gaia Marcus

I find a lot of discourse on AI and tech sovereignty to be very... circular. Great stuff from @chionwurah.bsky.social

2 weeks ago 0 0 0 0

.@chionwurah.bsky.social on the need for coherence on tech 'sovereignty'.

"We need to understand what we can own, control, & lead by ourselves; what we can access that is in the hands of allies we trust; & how we manage what we must obtain from those we do not trust"
giftarticle.ft.com/giftarticle/...

2 weeks ago 4 1 1 0
Scientists invented a fake disease. AI told people it was real
Bixonimania doesn’t exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness?

Bloody hell. Researchers invented a disease, published two fake papers to see if LLMs would ingest them and kick them up as fact — and then it broke containment and all the major AIs bought in. Information pollution.

www.nature.com/articles/d41...

2 weeks ago 2931 1496 50 159

Deeply grateful to the UK AI and Robotics research community for this Leadership Award. So many wonderful researchers were honoured with me tonight - my own award would not be possible without my amazing @braiduk.bsky.social @technomoralfutures.bsky.social @edfuturesinstitute.bsky.social colleagues

1 month ago 71 9 6 0

Very thoroughly earned vallor (!) - @shannonvallor.bsky.social given the AI & Robotics Research Leadership Award!

1 month ago 5 0 0 1
Shannon Vallor's talk - 'The machine starts: taking stock of the AI and Robotics revolution'

It's a play on E.M. Forster's The Machine Stops - “THE MACHINE IS MUCH, BUT NOT EVERYTHING."

Shannon's call: WE NEED NEW AND BETTER AI/R STORIES:

Stories about AI and robots beyond GenAI chatbots and agents

Stories about humans using their own brains more with AI

Stories about AI and robots that tread lightly on our lives and minds

Stories about AI and robots that help mend social ties & trust

Stories about AI and robots that repair parts of our world & institutions

Stories about AI/R researchers and builders who ask and listen

Stories about people enabled to choose the kind of AI/R they want


Fabulous to hear from @adalovelaceinst.bsky.social board member @shannonvallor.bsky.social at tonight's Responsible AI Robotics and AI awards - I'm always so inspired by her morally crisp and targeted calls for AI that centres people, augmenting our agency and thinking, rather than supplanting it

1 month ago 6 0 1 0

TIL what the Acela is :)

And yes - very much agree!

1 month ago 1 0 0 0

I yet again would like to propose that we all carry (privacy-preserving, on-device, non-data-retaining) decibel monitors on our person. Especially when conducting - often sensitive - business meetings in open-plan and public places.

1 month ago 1 0 1 0

This sounds fun!

1 month ago 1 0 0 0
When did common sense AI policy become radical? How do you make a technology safe when nobody seems particularly interested in regulating it – and what might happen if we don’t?

🧵 I recently spoke with Taylor Owen @theglobeandmail.com Machines Like Us podcast about an urgent question: How do you make a technology safe when the political will to govern it has evaporated? And what happens if we don't? www.theglobeandmail.com/podcasts/mac...

1 month ago 50 29 2 0

Modi offers his MANAV (or human) Vision for AI @ #AIImpactSummit

M: moral and ethical AI systems
A: accountable AI governance
N: national AI + data sovereignty
A: accessible and inclusive AI
V: AI should be valid and verifiable

What these principles mean in practice in India remains to be seen.

2 months ago 11 3 3 0

So excited about our new board appointments: Ed Humpherson, @mmitchell.bsky.social and @geomblog.bsky.social

With expertise ranging across AI research, computer science and public statistics, they are aligned with Ada's values, with a shared focus on the public interest, accountability, fairness and rigour

2 months ago 5 0 1 0

Governments regulate AI and deploy it across public uses. This AI Policy & Governance Working Group panel @ #AIImpactSummit examines accountability, procurement and safety when the state is both regulator and user. @gaiamarcus.bsky.social @ruchowdh.bsky.social @futureoflife.org impact.indiaai.gov.in

2 months ago 18 5 1 0
National Data Library Expert Advisory Group Information about the National Data Library (NDL) Expert Advisory Group including its role and members.

For people who care about that sort of thing, the membership of the National Data Library Expert Advisory Group has been published today - including me www.gov.uk/government/g...

2 months ago 40 9 3 2

You were very good and balanced, I thought

2 months ago 2 0 0 0

Listening to the BBC Today Programme - great to hear thoughtful discussion from parents on the consequences & pitfalls of the social media ban - & why we should be worried about adults' screen time, too (or maybe more). Very live to the question of where young people are supposed to go now, with the reduction of third spaces for them

3 months ago 1 0 0 0

if you’re passionate about AI accountability research and enjoy working in a vibrant lab with a multi-disciplinary team, but aren’t interested in doing traditional academic work, this position might be for you

3 months ago 44 40 1 0

I can choose my bank, I can choose my online supermarket, I can pick where I buy books and clothes and watch TV. I cannot choose whether or not to interact with the state; it is not the same as going shopping.

3 months ago 172 28 9 1
The mirage of AI deregulation One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation. In early December 2025, President Donald Tr...

"One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation."

3 months ago 33 11 1 0
The Trump administration is engaged in norm destruction—breaking expectations about transparent governance and public oversight while installing new assumptions about how technological development should be directed. What it has advanced is not the absence of AI regulation but its rearrangement, often by caprice: intensive state intervention operating through industrial policy, trade restrictions, immigration controls, equity stakes in private firms (selected by the state), the redirection of research funding, and the strategic preemption of state authority. Many of these actions face legal challenge, and some may not survive judicial review. But the pattern itself—the systematic preference for executive discretion over deliberative process—reveals an approach to governance that will shape AI policy regardless of how individual cases are decided. This is not deregulation. Not in the least. It is hyper-regulation by other means.


In a new commentary in Science that I think I'll be referencing a lot on Tech Policy Press, Alondra Nelson (@alondra.bsky.social) says that while the Trump administration's approach to AI is widely understood as "deregulation," when you zoom out, that's not really what's going on. www.science.org/doi/10.1126/...

3 months ago 50 29 3 4
About the PhD: 
Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, done in a siloed and ad-hoc manner, and with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent “ground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of quantification and measurement approaches we choose to employ? Particularly, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?
Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying that understanding to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.


are you displeased with today’s AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me

apply here: aial.ie/hiring/phd-a...

pls repost

3 months ago 190 139 6 12

...safeguards, to the decision to not have regulation that covers or partially covers entirely predictable harms.

3 months ago 0 0 0 0

I'd agree, but I'd say this is always the case. All technologies are the result of a series of decisions a series of people have made. In this case, everything from the data the models were trained on, to the capabilities that were prioritised, to (presumably?) fine-tuning, to releasing a tool without ..

3 months ago 0 0 1 0

I'd file having appropriate regulation and governance under "our ability to manage the risks" - regulation is essentially one of the tools for ensuring that those able to manage risks are held to do so. But 100% agree these aren't risks that can't be managed; they just aren't being.

3 months ago 0 0 0 0

Impossible on which axes? As in technically or politically or both?

3 months ago 0 0 1 0

🏆 This is a pivotal opportunity for the UK government to distinguish itself as a leader in effective AI governance, and build a regulatory system that prevents harms before they happen.

📖 Learn more about our polling on AI regulation here: Great (public) expectations | share.google/WlX20c8lmYRD...

3 months ago 3 0 0 0
Stat picture card - 89% of the UK public say it is important to regulate AI independently


📢 This isn’t an unpopular idea: nearly 9 in 10 people in the UK want independent AI regulation. Yet the current oversight of AI falls far behind that of other sectors (like aviation, pharmaceuticals and financial services), with no clear plans for improvement.

3 months ago 1 0 1 0
Ada graph of sector regulation, showing how absent AI regulation is compared to a range of other sectors.

Sectors compared: Aviation, Financial Services, Pharma, Food Safety.

Foundation models/general-purpose AI lack real coverage beyond voluntary standards across: proactive risk monitoring, safety standards, independent standards, market entry authorisation, post-market monitoring, an independent regulator, enforcement powers, accountability measures, transparency/reporting requirements, and routes for redress.

⚖️ We cannot stay ahead of these harms without robust AI regulation. Currently, we are chasing after individual tragedies and scandals, attempting to plug the gaps with existing laws and regulation (image).

This isn’t enough: we need to manage harms at the source, and not just manage the symptoms.

3 months ago 2 3 2 0

🤔Are these the AI futures you were hoping for?!

At Ada towers we've been doing a lot of reflecting on the recent Grok scandal.

It shows what happens when AI capabilities outpace our ability to manage their risks, & when people and societal impacts aren't front of mind for those developing tech.

🧵

3 months ago 2 3 2 0

🚂🚂Mind the gap?! The public has (great!) expectations...and doesn't think AI should be seen as being A-Exceptional.🚂🚂

Our nationally representative polling shows a growing divide between public expectations and discomfort with the status quo, and the government's lack of action

lnkd.in/e638XGXU

🧵

4 months ago 10 2 1 1