2/ 3. Automate the trivial; use AI to assist judgment, not replace it.
4. Break complex problems into smaller, well-defined tasks.
5. Where judgment is involved, expect to iterate.
Curious how this lines up with others’ experience.
What would you add, change, or argue against?
Posts by Kayla Lewis 🧠
1/ I’ve been trying to get clearer on when AI is useful—and when it quietly leads you astray.
As a working model (very much for now):
1. Treat outputs as unverified candidates, not truths.
2. Apply AI to bounded, well-structured problems.
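To make point 1 concrete, here is a minimal sketch in Python of treating output as a candidate that must pass checks before it is accepted. `generate` is a hypothetical stand-in for whatever model call you use, and the date-extraction task is an invented example; the shape of the loop is the point.

```python
# Sketch: accept a model output only after it passes explicit checks.
# `generate` is a hypothetical stand-in for a real model API call.
from datetime import datetime

def generate(prompt: str) -> str:
    # A real implementation would call a model here.
    return "2024-03-15"

def parses_as_date(text: str) -> bool:
    """Check: accept only candidates that actually parse as ISO dates."""
    try:
        datetime.strptime(text.strip(), "%Y-%m-%d")
        return True
    except ValueError:
        return False

def accept_candidate(prompt: str, checks, max_tries: int = 3):
    """Return the first output passing every check; None means escalate to a human."""
    for _ in range(max_tries):
        candidate = generate(prompt)
        if all(check(candidate) for check in checks):
            return candidate
    return None  # unverified candidates never get promoted to "truth"

print(accept_candidate("Extract the ISO date from: ...", [parses_as_date]))
```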
SMART goals assume you know what you want.
Specificity can help—but it can also paint you into a corner.
So I try WILD goals:
W — Worth a try (plausible)
I — Incomplete (under-specified)
L — Low cost (easy to try)
D — Discovering (for learning, not perfection)
Start WILD, then go SMART.
Second Phoenix and Pepper post! Added a cost model and discovered our flashiest ship was our least profitable. It was sailing 0.006% full: 200 denarii a day to move a package that would fit in a satchel.
The fix: Fill the rest of the hold with pepper!
www.thedecisionblog.com/revenue-isnt...
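For concreteness, a toy version of the kind of cost model the post describes. Only the 200 denarii/day and 0.006% figures come from the post; the hold capacity and package weight are invented to make the arithmetic reproduce them.

```python
# Toy cost model: daily ship cost spread over cargo actually carried.
# Hold size and package weight are invented; cost/day is from the post.
daily_cost_denarii = 200        # quoted operating cost per day
hold_capacity_kg = 50_000       # hypothetical hold size
cargo_carried_kg = 3            # one satchel-sized package

utilization = cargo_carried_kg / hold_capacity_kg
cost_per_kg = daily_cost_denarii / cargo_carried_kg
print(f"utilization: {utilization:.4%}")          # 0.0060% -- the post's 0.006%
print(f"cost per kg: {cost_per_kg:.1f} denarii")  # ~66.7

# The fix: fill the remaining hold with pepper, spreading the same daily cost.
full_cost_per_kg = daily_cost_denarii / hold_capacity_kg
print(f"cost per kg at full hold: {full_cost_per_kg:.3f} denarii")  # 0.004
```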
I started a fake Roman trading company to learn data engineering. Basilisk venom, suspicious wizards, and a lot of garum. My first blog post about it is up! www.thedecisionblog.com/a%20fictiona...
Most evaluation systems assume their metrics work.
They don’t actually know.
Good inputs → good outputs → metric gets the credit.
Without a counterfactual, that tells you almost nothing.
Better systems use multiple independent signals and check what actually predicts outcomes over time.
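A minimal sketch of that last step, with entirely invented data: instead of assuming the headline metric works, score each independent signal by how well it correlated with outcomes that actually materialized. (Uses `statistics.correlation`, available in Python 3.10+.)

```python
# Sketch: rank signals by how well they predicted realized outcomes.
# Signal names and numbers are invented for illustration.
from statistics import correlation

history = {
    "reviewer_score":   [0.9, 0.4, 0.7, 0.2, 0.8],
    "automated_metric": [0.8, 0.7, 0.6, 0.7, 0.9],
    "user_reports":     [0.7, 0.3, 0.6, 0.1, 0.9],
}
outcomes = [1.0, 0.2, 0.6, 0.1, 0.9]  # what actually happened later

# Rank signals by predictive power rather than crediting whichever metric is official.
for name, signal in sorted(history.items(),
                           key=lambda kv: -correlation(kv[1], outcomes)):
    print(f"{name}: r = {correlation(signal, outcomes):+.2f}")
```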
One pattern I keep seeing: strategy gets replaced by broadly agreeable aspirations—things almost everyone would endorse.
If it doesn’t reduce the number of viable paths forward, it’s probably not functioning as a strategy.
Alone, reading Pliny's Natural History :)
That is an awesome tattoo! I just can't even
holy wow i didn’t know it looked so awesome!