Metrics I care about for promptctl: "did someone run it twice?" and "did they set a custom memory dir?" Usage > stars. Building in public, one release at a time.
Posts by Oleg Koval
You can try promptctl at prompt-ctl.com without installing anything: paste a prompt, hit enhance, see the result. Sign in with GitHub or Google for 10 free tries.
Still free. Still no sign-up required to read docs or install the CLI.
Built promptctl because I was tired of pasting "you are an expert X, do Y, output Z" into every chat. Now it's `promptctl enhance --template=review`. Templates live in ~/.promptctl. Same idea, 10x less copy-paste.
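A minimal sketch of that workflow, assuming a plain-text template file (the `--template=review` flag and the `~/.promptctl` directory are from the post; the file name and format are guesses):

```shell
# Create a hypothetical "review" template (file name/format assumed)
mkdir -p ~/.promptctl
cat > ~/.promptctl/review.md <<'EOF'
You are an expert code reviewer. Review the input for bugs,
style issues, and missing tests. Output a numbered list of findings.
EOF

# Then, instead of re-pasting the boilerplate into every chat:
# promptctl enhance --template=review   (requires the CLI installed)
```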
Why a single binary for promptctl? No npm install, no pip, no venv. You're in the terminal to fix a prompt — you run one command. Friction kills adoption. One binary, one install, done.
Most LLM cost problems aren't about model choice — they're about prompt structure. Unstructured prompts cost 2-3x in follow-up calls. We built a CLI that structures them. ~67% fewer tokens. Same quality.
prompt-ctl.com
Okay so here’s my actual Bluesky-is-dying hypothesis:
The entire web is dying. Users aren’t going from Bluesky to another site (X/Insta/Threads/TikTok). Users are going to chatbots.
I know traffic to news sites has cratered (like 90%). My hunch is traffic to all the social platforms is down too.
A black and white soccer ball with handwritten numbers resting on a crumpled red plastic tarp, urban documentary photography.
Random things left behind in the city can form such visually pleasing, quiet compositions. Finding unexpected color and texture on the streets of The Hague.
Fujifilm X-T5
Fujinon 23mm f/1.4
#xt5 #fujifilmxt5 #fujinon23mm14
Lovely colors and composition!
Swans chilling in The Hague park
Finally warm in The Hague 🦢
I released a small open source tool I’ve been using internally: dcli.
It targets two common pain points in multi-repository environments.
What it solves:
1. Docker housekeeping
Clean and restart Docker Compose services without remembering flags or commands.
2. Git branch resets
Batch reset multiple repositories to develop or acceptance in a single command.
Install via Homebrew:
brew tap oleg-koval/dcli
brew install dcli
Source and docs: github.com/oleg-koval/d...
If you work with microservices or monorepos, feedback is useful.
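For the branch-reset side, roughly the plain-git routine such a tool automates (dcli's actual subcommands and flags are in its README; the repo names here are placeholders, and the throwaway repos are created locally just so the loop has something to act on):

```shell
set -e

# Create two throwaway repos standing in for real services (demo only)
for repo in service-a service-b; do
  git init -q -b develop "$repo"
  git -C "$repo" -c user.email=me@example.com -c user.name=me \
      commit -q --allow-empty -m "init"
done

# The housekeeping step: put every repo back onto develop.
# In real use the reset target would be origin/develop after a fetch.
for repo in service-a service-b; do
  git -C "$repo" checkout -q develop
  git -C "$repo" reset -q --hard develop
done
```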
Indeed! :)
So far no luck; nobody is using it yet, so I'll need to think about that :)
Recently found this 4 GB piece of art, the Walkman NWZ-B183F.
I own more than 10 TB of music on my NAS, so I decided to give AI a chance to be my music selector.
#claude Skill: Fill a portable music player with a curated, DJ-balanced random selection from your music library
github.com/oleg-koval/f...
I created this skill, and it actually did a very good job.
Now I have another idea, which I will open-source soon.
I’ll start tracking:
- % who run twice
- % who set memory dir
- time from first run → memory dir
If memory dir correlates with retention, I’ll optimize onboarding around it.
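As a toy sketch of how those numbers could be computed (the events.csv format and event names are invented for illustration; promptctl's actual telemetry, if any, isn't described here):

```shell
# Hypothetical per-user event log; format and event names are assumptions
printf '%s\n' \
  'alice,run' 'alice,run' 'alice,set_memory_dir' \
  'bob,run' \
  'carol,run' 'carol,run' > events.csv

# % who ran twice, and % who set a custom memory dir
awk -F, '
  $2 == "run"            { runs[$1]++ }
  $2 == "set_memory_dir" { mem[$1] = 1 }
  { seen[$1] = 1 }
  END {
    for (u in seen) {
      total++
      if (runs[u] >= 2) twice++
      if (u in mem)     memdir++
    }
    printf "ran twice: %d/%d, set memory dir: %d/%d\n", twice, total, memdir, total
  }' events.csv
# prints: ran twice: 2/3, set memory dir: 1/3
```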
Exactly. “Run twice” is the floor, not the goal.
Custom memory dir is a stronger signal. It shows intent and commitment, not just curiosity.