Posts by Michael Gray

Remember that feeling as a junior dev, learning the ropes and wondering what's possible? Best feeling of your career, IMO. I never thought I'd have it again, but Claude, and LLMs in general, are giving me that feeling back, and it's exciting. I'm wondering what's possible all over again.
I've been having a whole heap of fun with Claude and Opus 4.5. Really impressive stuff. The feedback loop has never been faster.
The Staff+ role is a fascinating place to be. Sometimes you feel accountable for everything yet nothing at the same time.
I would really love to see some reputable studies on the use of AI in software engineering I could read and reference.
Does anyone have any steers on where I should look?
I often find this role is misunderstood and, in my experience, that's one of the reasons "management" question the AAP. The reality is all of those decisions would have happened anyway, just without the transparency and the appropriate conversations. Take your pick; I know which I'd prefer.
Just a small amount of facilitation training would go a huge way in a lot of orgs.
GenAI is a lot like an organisation with low psychological safety.
It will try its best to please you, it won't challenge you, and when it can't please you with the truth it will make something up just to satisfy your demands.
Yes, I am on a train with too much time on my hands.
All very boring, isn't it.
100% this
We've got this habit, especially in work and leadership, of turning complex, messy, human stuff into clean, objective metrics. It makes things feel manageable, like we're in control.

But in simplifying it all, we lose the nuance that actually matters. We stop thinking deeply, because the numbers look "fine."

"Data-driven decisions" sound smart, but too often the phrase just means letting incomplete or poorly framed data make the decision for us. We forget data is only ever a piece of the picture: useful, but never the whole story.

The real danger? Once a metric's in place, it builds inertia. It shapes how we talk, what we value, and what we ignore. Even when it's no longer useful, we keep following it.

It's easier to follow a familiar metric than to re-engage with the messy, complex reality it once tried to represent.

We need to get better at working with complexity, not flattening it just to feel in control and "productive". That's where real judgment, and leadership, live.

Anything less isn't simplification, it's avoidance. And in my book, that's piss-poor leadership.
“Vibe coding”…
Phrases like this just make me want to get out of tech. My god.
A bit dramatic, I know, but I don't love the direction our industry is headed at the moment. Short-termism in full effect. It's going to be an interesting few years, that's for sure.
Just spent way too long on the phone with EE renewing broadband. The classic story: new-customer deals vs. renewal pricing games. But what really stood out? How long it took them to even find my account. Turns out I'm on their "legacy" system; it took a good 30 minutes of my time.

It's a pattern I see all the time. A company merges, or a new system is built. "All new customers will be on the new platform!" But what about the existing ones? Migration is always underestimated in complexity, cost, and risk. So it drags, or never happens.

The result? Companies end up supporting multiple systems indefinitely, missing out on the efficiencies they aimed for.

Maybe the answer isn't big-bang migrations but an evolution-over-revolution approach: gradually modernising and integrating systems instead of assuming a clean break will ever be realistic.

No easy fix, but it's a cycle that keeps repeating. Would love to hear from folks who've seen this done well.
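One common name for this gradual approach is the strangler-fig pattern: put a facade in front of both systems and route each account to whichever backend currently owns it, so migration can happen cohort by cohort rather than as a big-bang cutover. A minimal sketch, with all names and data purely hypothetical:

```python
# Hypothetical strangler-fig facade. Callers use one lookup() function;
# behind it, accounts are routed to the legacy or new system depending on
# whether they've been migrated yet.

MIGRATED: set[str] = {"acct-42"}  # cohort already moved to the new platform


def lookup_legacy(account_id: str) -> dict:
    # Stand-in for a call to the old system
    return {"id": account_id, "system": "legacy"}


def lookup_new(account_id: str) -> dict:
    # Stand-in for a call to the new platform
    return {"id": account_id, "system": "new"}


def lookup(account_id: str) -> dict:
    """Facade: callers never need to know which backend owns the account."""
    if account_id in MIGRATED:
        return lookup_new(account_id)
    return lookup_legacy(account_id)
```

Migrating another cohort is then just moving IDs into `MIGRATED` (in practice, a routing table or feature flag); once the legacy set is empty, the old system and the facade can both be retired.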
Psychological safety is NOT about lack of disagreement.
Psychological safety REQUIRES:
* disagreement and debate
* setting standards for behavior and performance, and enforcing them
* telling people things they don't want to hear
* courage, from the bottom up
* humility, from the top down
Thank you for your AI-generated response.
"Indeed, AI remains only as good as the data on which it was trained, and the increasing volume of data on the internet being generated by AI risks making it less, not more, reliable." - and herein lies the fundamental flaw with gen AI.
Business/product uncertainty directly increases the complexity of the systems we produce, both software and organisational.
I have no hard evidence this is true but I’m convinced it is.
I can relate. I still hate being told what to do, especially if I don't understand the why. 😅
With direct interaction, what if you reframed it from advising/instructing to being curious and asking questions that make them think more deeply? Would that change your perspective on direct interaction?
Why do you think you lean that way?
We should. The other aspect to this is that it's part of our role to ensure the system they work within isn't constraining, i.e. that they feel they can. If the conditions are wrong we can encourage them all we want, but they won't be able to.
Global vs local optimisations. The impossible balance 😅
Real change happens over time. It's about shifting mindsets, evolving ways of thinking, and guiding people on a journey.

But shaping the future means thinking beyond that, seeing what's possible before it's obvious.