
Posts by Dr. Eric J. W. Orlowski

13/ In other words: we need to (re)inject methodology before method.

4 months ago

12/ This is why much AI-talk today is putting the cart before the horse: instead of frantically building *whatever*, far more emphasis must be placed on what is being built, why it is being built, how it is being built, and to what end. Not 'let's build something and then see how'.

4 months ago

11/ I’m increasingly convinced “AI governance” should be talked about as a process you run, not a 'thing' you ship.

4 months ago

10/ Rule of thumb: if it can’t learn from incidents and update itself, it’s not governance. It’s a snapshot you’ll keep pointing at while the situation drifts.

4 months ago

9/ So the artefacts—model cards, risk registers, audits—should be receipts. Useful receipts! But still receipts, not the meal.

4 months ago

8/ And yes, AI isn’t a stable target. Even if the model never changes, the world around it does: new users, new edge cases, new incentives, new abuses, new politics.

4 months ago

7/ The practical bits look like: when reviews get triggered, who has veto power, how decisions get recorded (including the uncomfortable trade-offs), what gets monitored, and what happens when something breaks.

4 months ago

6/ Inside organisations: if “governance” can’t actually slow down or stop a deployment, it’s basically a vibes document with footnotes.

4 months ago

5/ At the national level: strategies and laws are the headline. The real story is the plumbing—who can act, when they act, what data they see, what gets enforced, what gets revised after things go sideways.

4 months ago

4/ Principles are fine, but they’re not governance. They’re just the aspirations. Governance is what happens when those aspirations meet deadlines, incentives, uncertainty, and “oh no, that’s not what users are doing with it”.

4 months ago

3/ But governance isn’t an object. It’s more like… upkeep. Ongoing work. The boring (important) stuff you keep doing because reality keeps changing.

4 months ago

2/ One thing that keeps bugging me: we talk about “AI governance” like it’s a thing you can finish. A framework. A document. A checklist. Done ✅

4 months ago

1/ I’m sitting in the NUS Centre for International Law’s conference in Singapore right now, and some of the talks so far have really made me think about AI governance and what it means in practice.

4 months ago

Thanks, I hate it.

4 months ago

11/
And more conversations like this!

4 months ago

10/
There’s a long road ahead, but this is the work that matters.

More intentionality.

More grounding in lived realities.

More humility about the limits of the machine.

Fundamentally this is a human challenge, not purely a technical one. Not all challenges can be engineered away.

4 months ago

9/
Cultural alignment isn’t a feature to toggle.

It’s a socio-technical commitment.
And it will only work if we treat it as such: collaboratively, reflexively, and with humility about what AI cannot know.

4 months ago

8/
– foreground methodological rigour
– centre local cultural contexts
– involve social scientists + communities early
– admit the limits of current architectures
– and design for specific use cases, not mythical universals.

4 months ago

7/
This is why intentionality isn’t optional.

If we want meaningful cultural alignment, we need to build processes that:

4 months ago

6/
Most cultures, especially low-resource and oral ones, rely on:
gesture, tone, ritual, interaction, shared history, silence, embodiment…
None of that appears in typical training data.

These are all things that can't be scraped.

4 months ago

5/
Another point from the panel (and one I’ve written about as well):
LLMs see an extremely narrow window into human culture.

They learn mainly from written text, which is a tiny slice of how cultures actually transmit meaning.

4 months ago

4/
If we want culturally aligned AI, we have to design for it on purpose, not hope it emerges from scale, benchmarks, or clever prompting.

4 months ago

3/
In my own research, I often describe culture as fractal.

Zoom in or zoom out, the complexity stays.

Multiple layers, overlapping identities, situational norms.

It’s lived, embodied, contextual.

4 months ago

2/
My main point was: intentionality matters.

A lot of AI work still treats culture as something you can “vibe-code” into models by scraping more text. But culture doesn’t work like that; not in any society I’ve ever studied.

4 months ago
Post image

1/
Spoke on a panel last week about cultural alignment for low-resource languages, with some brilliant colleagues and sharp moderation by Simon Chesterman.

The discussion reminded me how much anthropology still has to offer AI.

4 months ago
Post image

Goodbye, Melbourne 👋🏻

4 months ago
Post image

On a panel in Melbourne today discussing #AI cultural alignment and how to best approach it.

Happy to see so many folks in the audience. Not just for my own ego, but because this is an important question for AI #justice moving forward.

Will post a more detailed update (and better pic) soon!

4 months ago

Well, yes, but it's not quite as clear cut as that.

Either way, what I think many aren't considering is that undermining your own regulation and policies because of external pressure is itself a losing move.

And that's aside from whether you think the EU is under- or over-regulated.

5 months ago

The EU backing down on the AI Act is of course instrumentally bad, inasmuch as removing safeguards (even imperfect ones) increases the risk of harm.

But it is also tantamount to resigning some of the EU's sovereignty, that is, its right to enact and enforce its own laws.

What a mess.

5 months ago
Post image

Had the pleasure of visiting #Laos and hosting a workshop with their government and stakeholders.

Laos has a very ambitious yet self-aware agenda for how to use & govern #AI's emergent capabilities to support its society and economy.

We will be writing a report on this shortly. Eyes peeled!

5 months ago