Posts by Dr. Eric J. W. Orlowski
13/ In other words: we need to (re)inject methodology before method.
12/ This is why much AI talk today puts the cart before the horse: instead of frantically building *whatever*, far more emphasis must be placed on what is being built, why it is being built, how it is being built, and to what end. Not 'let's build something and then see how'.
11/ I’m increasingly convinced “AI governance” should be talked about as a process you run, not a 'thing' you ship.
10/ Rule of thumb: if it can’t learn from incidents and update itself, it’s not governance. It’s a snapshot you’ll keep pointing at while the situation drifts.
9/ So the artefacts—model cards, risk registers, audits—should be receipts. Useful receipts! But still receipts, not the meal.
8/ And yes, AI isn’t a stable target. Even if the model never changes, the world around it does: new users, new edge cases, new incentives, new abuses, new politics.
7/ The practical bits look like: when reviews get triggered, who has veto power, how decisions get recorded (including the uncomfortable trade-offs), what gets monitored, and what happens when something breaks.
6/ Inside organisations: if “governance” can’t actually slow down or stop a deployment, it’s basically a vibes document with footnotes.
5/ At the national level: strategies and laws are the headline. The real story is the plumbing—who can act, when they act, what data they see, what gets enforced, what gets revised after things go sideways.
4/ Principles are fine, but they’re not governance. They’re just the aspirations. Governance is what happens when those aspirations meet deadlines, incentives, uncertainty, and “oh no, that’s not what users are doing with it”.
3/ But governance isn’t an object. It’s more like… upkeep. Ongoing work. The boring (important) stuff you keep doing because reality keeps changing.
2/ One thing that keeps bugging me: we talk about “AI governance” like it’s a thing you can finish. A framework. A document. A checklist. Done ✅
1/ I’m sitting in the NUS Centre for International Law’s conference in Singapore right now, and some of the talks so far have really made me think about AI governance and what it means in practice.
Thanks I hate it.
11/ And more conversations like this!
10/ There’s a long road ahead, but this is the work that matters.
More intentionality.
More grounding in lived realities.
More humility about the limits of the machine.
Fundamentally, this is a human challenge, not a purely technical one. Not all challenges can be engineered away.
9/ Cultural alignment isn’t a feature to toggle.
It’s a socio-technical commitment.
And it will only work if we treat it as such: collaboratively, reflexively, and with humility about what AI cannot know.
8/ – foreground methodological rigour
– centre local cultural contexts
– involve social scientists + communities early
– admit the limits of current architectures
– and design for specific use cases, not mythical universals.
7/ This is why intentionality isn’t optional.
If we want meaningful cultural alignment, we need to build processes that:
6/ Most cultures, especially low-resource and oral ones, rely on:
gesture, tone, ritual, interaction, shared history, silence, embodiment…
None of that appears in typical training data.
These are things that simply can’t be scraped.
5/ Another point from the panel (and one I’ve written about as well):
LLMs see an extremely narrow window into human culture.
They learn mainly from written text, which is a tiny slice of how cultures actually transmit meaning.
4/ If we want culturally aligned AI, we have to design for it on purpose, not hope it emerges from scale, benchmarks, or clever prompting.
3/ In my own research, I often describe culture as fractal.
Zoom in or zoom out, the complexity stays.
Multiple layers, overlapping identities, situational norms.
It’s lived, embodied, contextual.
2/ My main point was: intentionality matters.
A lot of AI work still treats culture as something you can “vibe-code” into models by scraping more text. But culture doesn’t work like that, not in any society I’ve ever studied.
1/ Spoke on a panel last week about cultural alignment for low-resource languages, with some brilliant colleagues and sharp moderation by Simon Chesterman.
The discussion reminded me how much anthropology still has to offer AI.
Goodbye, Melbourne 👋🏻
On a panel in Melbourne today discussing #AI cultural alignment and how best to approach it.
Happy to see so many folks in the audience. Not just for my own ego, but because this is an important question for AI #justice moving forward.
Will post a more detailed update (and better pic) soon!
Well, yes, but it's not quite as clear cut as that.
Either way, what I think many aren’t considering is that undermining your own regulation and policies because of external pressure is itself a losing move.
And that’s regardless of whether you think the EU is under- or over-regulated.
The EU backing down on the AI Act is of course instrumentally bad, inasmuch as removing safeguards (even imperfect ones) increases the risk of harm.
But it is also tantamount to ceding some of the EU’s sovereignty, that is, its right to enact and enforce its own laws.
What a mess.
Had the pleasure of visiting #Laos and hosting a workshop with their government and stakeholders.
Laos has a very ambitious yet self-aware agenda for how to use & govern #AI’s emergent capabilities to support its society and economy.
We will be writing a report on this shortly. Eyes peeled!