Posts by allie lawsen

- Anthropic before deploying a new model
- xAI before releasing a new model
- Google before deploying a new model
- OpenAI before deploying a new model
I end by observing that current LLMs really lack ownership. Though it wasn't my initial motivation for the essay, my current guess is that ownership is one of the most AI-resistant skills you can develop.
Frame 4: Ownership as project managing poorly-scoped work. This is about turning fuzzy work into concrete, trackable actions while maintaining a clear view of the right end-state.
Frame 3: Ownership as justified autonomy & trust. Building this is not about having every answer, but recognising gaps in your knowledge and being well-calibrated on when to check in versus when to move forward.
Frame 2: Ownership as default intellectual labour. When something needs to happen, whether giving feedback or solving a problem, you proactively shoulder more of the thinking work that's needed to go from observation to action.
Frame 1: Ownership as manager leverage. It's about making interactions with your manager efficient and making yourself legible. Treat the manager time you get as the valuable resource it is, and get as much out of it as you can.
Ownership is clearly important for career progression, but it's hard to articulate what it looks like or how to improve it. In the post I describe four different ways of thinking about ownership, what it looks like to do well, and how to improve.
I could have titled my new blogpost "The one skill that makes or breaks your career." I didn't, because that really isn't my style. But I do think the post contains a lot of the best general career advice I can give. 🧵 below
lawsen.substack.com/p/four-and-a...
Interesting...
Proxy-generator is here in case you're still using the other place: substack-proxy.glitch.me
If only this was the AI that had been asked for tariff setting advice.
What deep research queries have you tried and been disappointed by? Thinking about writing a post on how to use it well and it would be good to have interesting examples.
Reply with the prompt and model you used.
Great prompt engineers are good prompt engineers who mutter "skill issue" to themselves whenever they get a bad response, before editing the prompt and trying again.
Good prompt engineers:
- give specific examples
- provide relevant context
- know how to specify exactly what they want
- lean into the strengths of LLMs
- use different models for different situations
- ask LLMs for help improving their prompts
What's the difference between good and *great* prompt engineering?
🧵 ↓
Because a whistleblowing function that won't get used is no function at all. Read the full post: open.substack.com/pub/lawsen/... 7/7
• Clear reporting procedures that they *already* know
• A process that feels normal rather than exceptional
• Protected channels that they and others can justifiably trust. 6/7
Imagine working in an AI company at crunch time. The pressure to stay quiet is going to feel enormous when your evidence isn't conclusive and raising concerns could delay an exciting launch. At minimum, I'd want someone in that situation to have: 5/7
Without this clarity, you end up with an implicit norm of "if you trust your colleague, why would you mention it?" But this means:
• People who are friends with colleagues will self-censor
• The *felt* seriousness of sharing information becomes much higher 4/7
As a teacher, I experienced a system where reporting concerns was frictionless and expected. Contact details were on everyone's lanyard, and the process was regularly reinforced. This made reporting feel like a normal part of the job. 3/7
In my new post, I argue that whistleblowing systems must be universally known and psychologically easy to use, not just technically available, if we're going to rely on them. 2/7
What does it take for people to actually, reliably use a whistleblowing function? 1/7
What Claude projects are you using? Any clever setups I should know about?
The full post includes example instructions for these projects, and guidance on what context to add. Adding *lots* of context is, I think, where most of the value lies. lawsen.substack.com/p/my-curren...
Grant Writeup Feedback: Helps me improve my grant evaluations and gives ideas for helping my team improve theirs. Focuses on clarity and understanding rather than the decisions themselves (I'm still making those).
Custom Coach: My troubleshooter for when I'm stuck. Gives feedback on management (from transcribed meetings), internal memos, etc. Instructions emphasise being frank and disagreeable rather than reassuring - I want the truth!
Prompt Generator: Helps structure detailed prompts for more advanced reasoning models. Uses a specific format covering goals, output format, warnings, and context.
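The four-part format mentioned above (goals, output format, warnings, context) could be sketched as a simple template. The section names come from the post; the function name, ordering, and wording below are my own assumptions, not the project's actual instructions:

```python
# Hypothetical sketch of a four-section prompt template. The section
# names (goals, output format, warnings, context) follow the post; the
# structure and wording here are assumptions for illustration.

SECTIONS = ["Goal", "Output format", "Warnings", "Context"]

def build_prompt(goal: str, output_format: str, warnings: str, context: str) -> str:
    """Assemble a structured prompt for a reasoning model."""
    parts = {
        "Goal": goal,
        "Output format": output_format,
        "Warnings": warnings,
        "Context": context,
    }
    # Emit each section under its own heading, in a fixed order.
    return "\n\n".join(f"## {name}\n{parts[name]}" for name in SECTIONS)

prompt = build_prompt(
    goal="Summarise the attached grant writeup in three bullet points.",
    output_format="Markdown bullets, max 25 words each.",
    warnings="Do not speculate beyond what the writeup states.",
    context="The writeup evaluates a proposed RCT on teacher training.",
)
print(prompt)
```

Putting context last mirrors common advice for long prompts: state the task and constraints up front, then attach the bulky reference material.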
x.com/lxrjl/statu...
RFP Draft Helper: Exactly what it sounds like - helps draft and edit RFPs. I've loaded it with published examples and internal strategy docs so it matches Open Phil's style and aligns with our team's goals.