
Posts by Mathias Verraes

Three classic 20th century moves: Pushing employees to be more productive, measuring the wrong thing, and optimising for the metric instead of the goal

6 days ago 3 1 1 0

Came here to say exactly that

1 week ago 0 0 0 0

You sound surprised

1 week ago 1 0 1 0

I wonder who first injected the George Box quote into the DDD community (where it has become its defining meme). Was it there from the beginning? Is it in the blue book?

2 weeks ago 1 1 3 0

Totally forgot about France is bacon!

1 week ago 1 0 0 0

Of course if you find no other evidence, my ego will happily accept the honour of being the first to make the connection 🤣

1 week ago 1 0 2 0

So my first hypothesis would be that enough people knew the quote for it to organically become part of DDD, without necessarily a clear first actor.

1 week ago 1 0 1 0

I also remember Eric Evans mentioning in conversation something about not aiming for an elegant model, and instead looking for a useful one. This was before DDDEU as well.

1 week ago 1 0 1 0

I think I did come up with "all models are wrong but some harmful", and definitely the bit about how the original quote should not be read as an excuse to accept your first model, but a call to action to find more useful ones.

1 week ago 1 0 1 0

I was probably already using it in my workshops in 2014, 2015... I don't remember where I got it originally, and I don't know if I'm the one who popularised it. My impression was that it's something many people know about anyway, also outside of software. Ctd.

1 week ago 1 0 1 0
Maybe you should have bought an electric car: The Iran War is illustrating the cost of anti-EV nonsense.

As fossil fuel prices have shot up, it’s a good time to consider going electric. I’ve had an electric car for a decade and a heat pump for a year, and I’m very happy with both — simply all-around better technology than burning stuff. www.noahpinion.blog/p/maybe-you-...

3 weeks ago 73 18 10 2

Appreciate having second thoughts. It means you’re already two thoughts ahead of almost everybody else.

3 weeks ago 5 1 0 0
Trump lies on the couch at a psychiatrist's office. The psych, with a notebook, asks Trump if the Iranian negotiators are in the room with them.

"And are those Iranian negotiators here in the room with us?"

3 weeks ago 1 0 0 0
How the Iran war has sent shocks rippling across the globe: From restaurant closures in the Philippines and petrol rationing in Sri Lanka, to Asian food production crises due to fertiliser shortages, the effects of the US-Israeli war on Iran reverberate around...

www.theguardian.com/world/2026/m...

1 month ago 1 1 0 0

I need centrists to get fucking 💯 on-board fighting fascism.

No more “but…”

Get fucking on-board.

1 month ago 41 9 0 0
nigel farage tweet:

The Bank of England is replacing Winston Churchill with a picture of a beaver on our bank notes.
This is the definition of woke.

i, for one, am delighted that we finally have a definition of “woke”

1 month ago 8880 1562 423 319

Happy international women's day! May our daughters grow up in a world where we don't need an international women's day.

1 month ago 5 0 0 0

How to get Epstein out of the headlines?
1̶.̶ ̶B̶l̶a̶m̶e̶ ̶d̶e̶m̶s̶
̶2̶.̶ ̶C̶a̶l̶l̶ ̶i̶t̶ ̶a̶ ̶h̶o̶a̶x̶
3. Invade a country

1 month ago 2 0 0 0

Thanks, that seems to be the one, based on some quick googling. A company is using this in their candidate assessment. Testing for autism without disclosing it, and using a made-up name, seems unethical and possibly illegal.

1 month ago 0 0 1 0

However, I can't find anything about it, so I suspect the official name is something else. Does this ring a bell?
I want to find out if it's proper science.

1 month ago 0 0 1 0

Has anyone heard of "Perceptive Collaboration Index (PCI)"? It's supposed to be a test where the subject has to judge the emotions of people based on pictures of their eyes, and it scores their ability to work in teams. (ctd)

1 month ago 0 0 1 0

It only took us 10 years to get Martin Fowler to come to DDD Europe 🤩 Next June in Antwerp
dddeurope.com

1 month ago 14 0 1 0

☝️ LAST DAYS for Early Bird sales! After tomorrow, tickets to Domain-Driven Design Europe 2026 will cost more.
Don't wait another moment: buff.ly/J6TUj57

1 month ago 1 1 0 0

One of them is futile

1 month ago 1 0 1 0

Ah too bad! I wasn't involved with the selection this year, but we had 600 submissions

2 months ago 1 0 2 0

Splitting a Domain Across Multiple Bounded Contexts by Mathias Verraes @mathiasverraes.bsky.social

verraes.net/2021/06/spli...

Design and Reality : Essays on Software Design by Rebecca Wirfs-Brock, Mathias Verraes

leanpub.com/design-and-r...

2 months ago 2 1 0 1

I do not fear the rise of superintelligence.

I do, however, fear the rise of billionaires, organizations, and world powers who seek to use computing to maximize their power, influence, and control.

2 months ago 243 58 7 2

"Stay out of politics" is almost always an authoritarian statement. It attempts to shut down public debate (a cornerstone of democracy) in favour of the person making the demand.

2 months ago 8 2 0 0
About the PhD: 
Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, done in a siloed and ad-hoc manner, and with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate that is passionate about exploring what a conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent “ground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of quantification and measurement approaches we choose to employ? Particularly, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?
    Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices with the aim of applying it to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.


are you displeased with today’s AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me

apply here: aial.ie/hiring/phd-a...

pls repost

3 months ago 190 139 6 12

This! And if we try three more things and none of them work, it's not waste, it's validation

2 months ago 3 0 0 0