Pre-built Starlark schemas for the major Crossplane providers are live:
-> Kubernetes
-> AWS
-> GCP
-> Azure RM
-> Azure AD
One load() call away.
buff.ly/Z0ZvmZT
#crossplane #opensource
Posts by Working Wombat 🦫
Shipped starlark-gen: a CLI that turns K8s and Crossplane CRDs into typed Starlark schemas.
Wrong field name? Caught at construction time.
Wrong type? Caught.
Missing required field? Caught.
Load them with a single load() call.
buff.ly/W8Ey69N
#crossplane #opensource
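The construction-time checks above can be sketched in plain Python. This is an illustrative sketch only, not starlark-gen's actual output (which is Starlark); `make_schema` and the `Account` field set are made up for the example.

```python
# Hypothetical sketch of what a generated typed schema enforces at
# construction time: unknown fields, wrong types, and missing required
# fields all fail before anything is applied to a cluster.
def make_schema(name, fields):
    """fields: {field_name: (expected_type, required)}"""
    def construct(**kwargs):
        for key, value in kwargs.items():
            if key not in fields:
                raise ValueError("%s: unknown field %r" % (name, key))
            expected, _ = fields[key]
            if not isinstance(value, expected):
                raise TypeError("%s.%s: expected %s, got %s"
                                % (name, key, expected.__name__,
                                   type(value).__name__))
        for key, (_, required) in fields.items():
            if required and key not in kwargs:
                raise ValueError("%s: missing required field %r" % (name, key))
        return dict(kwargs)
    return construct

# Illustrative schema; the real generated Account has many more fields.
Account = make_schema("Account", {"location": (str, True)})
account = Account(location="westeurope")   # ok
# Account(locatoin="westeurope")  -> ValueError: unknown field
# Account(location=42)            -> TypeError: expected str
```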
function-starlark is live.
Write Crossplane compositions in Starlark: a hermetic, Python-like language.
-> Schema validation catches misconfigs before apply
-> depends_on auto-generates Usage resources
-> ~20MB memory footprint
buff.ly/UBvz8ae
#crossplane #opensource
Writing Crossplane compositions without IDE support is like coding in Notepad.
Built a VS Code extension for function-starlark:
-> Autocomplete for all builtins
-> Schema-aware completions from generated types
-> Format-on-save
#crossplane #builtinpublic #opensource
🔮 Next up: starlark-gen
A CLI that reads OpenAPI specs and CRDs, then generates typed Starlark schemas automatically.
load("upbound-azure:v1/storage.star", "Account")
account = Account(location="westeurope")
Typo? Caught. Wrong type? Caught. Before kubectl apply.
#crossplane #platformengineering
🛡️ New in function-starlark: schema validation.
NetworkRules = schema("NetworkRules",
    action=field(type="string", enum=["Allow", "Deny"]),
)
Wrong types, missing fields, typos with did-you-mean hints: all caught before Resource() runs. Fully opt-in.
#crossplane #platformengineering
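The did-you-mean behavior can be approximated in a few lines with fuzzy matching. A Python sketch, not function-starlark's actual implementation; `check_fields` and the field names are invented for illustration.

```python
# Sketch of did-you-mean hints for unknown schema fields, using
# difflib's built-in fuzzy matching.
import difflib

def check_fields(schema_name, given, known):
    for key in given:
        if key not in known:
            close = difflib.get_close_matches(key, known, n=1)
            hint = " (did you mean %r?)" % close[0] if close else ""
            raise ValueError("%s: unknown field %r%s"
                             % (schema_name, key, hint))

# Hypothetical field set for the NetworkRules schema above.
known = ["action", "bypass", "ip_rules"]
check_fields("NetworkRules", ["action"], known)   # passes
# check_fields("NetworkRules", ["acton"], known)
# -> ValueError: NetworkRules: unknown field 'acton' (did you mean 'action'?)
```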
🛠️ function-starlark progress:
-> Full feature parity with function-kcl
-> depends_on with creation sequencing
-> OCI module distribution
-> Metrics built in
-> Bytecode caching for sub-ms execution
From sidequest to something real. Still in dev, but the bones are solid.
#crossplane #opensource
What if Crossplane compositions read like Python?
region = get(oxr, "spec.region", "us-east-1")
Resource("bucket", body, depends_on=[db])
No new DSL. No Go template spaghetti. No 500MB runtime.
Starlark: Python syntax, Go speed, hermetic sandbox.
#crossplane #starlark
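The get() helper above is a dotted-path lookup with a default. A plausible Python equivalent, assuming nested-dict semantics (the real builtin may differ):

```python
# Walk a dotted path through nested dicts; return the default on any miss.
def get(obj, path, default=None):
    for part in path.split("."):
        if not isinstance(obj, dict) or part not in obj:
            return default
        obj = obj[part]
    return obj

oxr = {"spec": {"region": "eu-west-1"}}
get(oxr, "spec.region", "us-east-1")   # -> "eu-west-1"
get(oxr, "spec.size", "small")         # -> "small" (falls back to default)
```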
Crossplane Issue #2072 asked for resource dependency ordering. Closed as not_planned.
function-starlark just solves it:
Resource("app", body, depends_on=[db])
Deploys in order. Deletes in reverse. Usage resources created automatically. No manual state tracking.
#crossplane #platformengineering
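The ordering depends_on implies is a topological sort: deploy in dependency order, delete in reverse. A minimal Python sketch of the idea; names and structure are illustrative, not function-starlark's real API, and the graph is assumed acyclic.

```python
# Depth-first topological sort: every resource comes after its dependencies.
def deploy_order(resources):
    """resources: {name: [names it depends on]} -> safe creation order."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in resources[name]:   # assumes no dependency cycles
            visit(dep)
        order.append(name)

    for name in sorted(resources):
        visit(name)
    return order

# Illustrative graph: app depends on db, dns depends on app.
resources = {"db": [], "app": ["db"], "dns": ["app"]}
creation = deploy_order(resources)        # ["db", "app", "dns"]
deletion = list(reversed(creation))       # ["dns", "app", "db"]
```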
Sidequest continues:
⚡ function-starlark benchmark:
-> 7.4x faster than function-kcl at 50 resources
-> 7.4 MiB idle memory
-> 82 MB image
Starlark compiles to bytecode once, writes directly to protobuf. No YAML roundtrip. No heavy runtime.
#crossplane #kubernetes #platformengineering
Needed to remove internal data from git history before open-sourcing a repo.
Usually this is tedious.
Found git-filter-repo. One command:
git filter-repo --invert-paths --path secrets/
Rewrote 350 commits in 0.67 seconds. Done.
buff.ly/mLgiSxe
#git #opensource #devtools
🐿️ Sidequest alert.
Building a custom Crossplane composition function that lets you write your logic in Starlark.
The gap I see? Nothing out there combines Python-like simplicity with Go template speed.
So I'm making one.
#Crossplane #Kubernetes #BuildInPublic
Still a big BMAD fan. But GSD just fits differently.
For MVPs and prototyping? GSD moves faster. Uncertainties? Handled as they come up during discussion phases. Less friction.
BMAD shines when you know exactly what you're building and want it done right.
#DevWorkflow #BuildInPublic
Yes, that's a good point. Also, "eat your own dog food" later helps a lot in surfacing inconsistencies.
Building the same thing twice. Once with heavy planning, once without.
Early finding: the unplanned version looks cleaner and feels more intuitive.
The planned version has better reports but had Inngest complications and messier UX.
But still too early to call.
#BuildInPublic #DevWorkflow
BMAD's architecture phase flagged that Supabase Edge Functions cap at ~5min.
My pipeline needs up to 10 minutes.
Without that catch, I'd have discovered this mid-sprint with code built around the wrong pattern.
That's what 2 extra hours buys you. Maybe.
#BuildInPublic #DevWorkflow
One thing GSD does right: a manual verification phase with acceptance tests.
After each execution, the AI tells you what to verify.
With AI generating code this fast, it's easy to just accept output without looking.
This forces you to actually check what got built.
#SpecDrivenDev
Honest take on Get Shit Done framework:
It felt really good. Low ceremony. Point to the PRD, it organizes requirements, builds a roadmap, and you're coding.
Each phase is just: discuss, plan, execute. Done. No extra ceremony between you and working code.
#BuildInPublic #SpecDrivenDev
The planning phase difference:
GSD: PRD to requirements + roadmap, done. Moving.
BMAD: PRD to architecture doc, UX specs. >1h longer.
>1h doesn't sound like much. The question is whether skipping it costs more.
#BuildInPublic #SpecDrivenDev
Same PRD, two approaches:
GSD - straight to requirements and roadmap. Start building.
BMAD - adds architecture decisions and UX design on top.
Same app. Different planning depth. What does the extra investment change?
buff.ly/r51qDYv
#BuildInPublic #SpecDrivenDev
Running an experiment: building the same app with two different AI dev frameworks.
One lightweight. One thorough.
Same idea, same stack, same developer.
Which planning depth actually pays off?
I'll share what I find.
#BuildInPublic #SpecDrivenDev
No company yet. No launch date. Just building.
This week on Thinkmob: freemium limits, upgrade prompts, premium waitlist, and account settings.
Curious: when did you make the jump from "building" to "actually shipping"? Company first or product first? 🤔
Same here. Currently doing it manually with Claude and a custom prompt; about 1 in 5 suggestions ends up usable, and even those need tweaking before posting.
No perfect answer yet. I use specs (BMAD) as source of truth + a model ensemble (Claude develops, Codex reviews). But for now, HITL and manual acceptance tests are the only real governor - and approval fatigue is real.
Building a side project that turns git commits into social media drafts automatically.
Watches what I ship, generates posts, lets me review them before anything goes live.
Social media isn't my thing, so I'm building a tool that does it for me. Solving it for myself first.
#BuildInPublic
So how do you actually protect the verification layer? Options:
-> CODEOWNERS on test files: human approval required
-> CI alerts when test or assertion count drops
-> Write tests first, freeze them, AI only touches implementation
What's your approach? Curious how others handle this.
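The second option above, alerting when assertion count drops, fits in a few lines of CI glue. A minimal Python sketch under my own assumptions; the regex, baseline mechanism, and sample test strings are all illustrative.

```python
# Count assertions in test source and fail the build if the number
# shrinks versus a recorded baseline.
import re

def count_assertions(source):
    # Crude heuristic: Python `assert` statements or JS-style `expect(` calls.
    return len(re.findall(r"\bassert\b|\bexpect\(", source))

def check_no_drop(current_source, baseline_count):
    current = count_assertions(current_source)
    if current < baseline_count:
        raise SystemExit("assertion count dropped: %d -> %d"
                         % (baseline_count, current))
    return current

# Illustrative test file contents.
tests = "assert add(1, 2) == 3\nassert add(0, 0) == 0\n"
check_no_drop(tests, baseline_count=2)    # ok
# check_no_drop("assert add(1, 2) == 3\n", 2) -> SystemExit
```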
This isn't theoretical.
DORA found that faster AI code gen leads to "downstream chaos" for most teams. Fixing AI-generated code takes 3x longer than human code.
Speed without verification isn't productivity. It's debt.
Spec-driven development tells AI what to build.
But who watches the watchmen?
Specs protect the input. Tests protect the output. But if AI can rewrite both... you need a defense layer specs don't cover.
#SpecDrivenDev #AI #BuildInPublic
Building web apps with AI? Unit tests + Playwright is your safety net.
Unit tests = contract for your logic.
Playwright = contract for your UI.
AI changes something? Both catch it instantly. Run in CI. Every commit. Non-negotiable. 🛡️
#SpecDrivenDev #AI
The antidote to dark flow? Spec-driven development.
Define what you're building with AI beforehand. AI drafts the spec, you refine it, AI builds against it.
No drift. No slot machine vibes. A feedback loop with guardrails 🧭
#SpecDrivenDev #AI #BuildInPublic