Quick ask, then back to your feed:
@VetsWhoCode is 100% donor funded. Every veteran we train is free to them and not free to us.
If you've gotten value from anything I've posted this month — a framework, a hot take, a reminder to ship — consider sending us $10. Thank you.
Posts by Jerome Hardaway
Hiring manager: "Tell me about a time you used an LLM in production."
Wrong answer: "I built a chatbot demo."
Right answer: "I built X. Users complained about Y. I added evals to catch it. Bug rate dropped from Z% to W%. Here's the repo."
Specifics. Failures. Fixes. Numbers. That's what wins offers.
Rule we live by inside @VetsWhoCode:
Ship something every Friday. Anything. A script. A blog post. A pull request. A demo video. A failed experiment with notes.
52 things per year. 156 in three years. By then you don't need to apply for jobs. They find you.
If you're reading this in 2026:
The window where "I just learned this last year" is an acceptable answer in an AI engineering interview is closing. Not closed. Closing.
Every month you wait, the bar moves up. Every month you ship, your odds compound. The math is brutal and it's also fair. Get to it.
If a veteran asked me where to get serious about AI engineering in 2026, my answer wouldn't be a book or a Coursera certificate.
It would be @VetsWhoCode. Free. Mentored by working engineers from Google, Microsoft, and GitHub. A decade of placements behind it.
vetswhocode.io
The smallest useful AI project you can ship this weekend:
A script that takes a folder of your own PDFs, lets you ask questions about them in the terminal, and answers with citations.
~150 lines of Python. Touches embeddings, vector search, retrieval, and prompting. That's a real RAG system.
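The retrieval core of that weekend project can fit in a few functions. Here's a minimal sketch using plain word overlap instead of real embeddings — swap `score` for cosine similarity over an embedding model when you build the real thing (the function names here are illustrative, not any library's API):

```python
import re

def chunk(text, size=300):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, chunk_text):
    """Crude relevance: count shared words between question and chunk.
    In production, replace with embedding similarity."""
    q = set(re.findall(r"\w+", question.lower()))
    c = set(re.findall(r"\w+", chunk_text.lower()))
    return len(q & c)

def retrieve(question, chunks, k=3):
    """Return the top-k (index, chunk) pairs; the index doubles as a citation."""
    ranked = sorted(enumerate(chunks),
                    key=lambda ic: score(question, ic[1]),
                    reverse=True)
    return ranked[:k]
```

Chunking, scoring, ranking — that's the whole loop. Everything else is plumbing around it.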
"Why learn to code if AI will do it?"
Because the people directing the AI need to understand what it's producing. Someone has to architect the system, review the diff, catch the subtle bug, and own the outcome.
AI raised the floor on what one engineer can build. It also raised the ceiling.
Underrated career move: comment thoughtfully on 3 AI engineers' posts every day for 30 days. Real comments. Real questions. Not "great post."
By day 30, you'll have warm relationships with 90 people in the industry. That's how jobs actually get filled.
What does an AI engineer actually do all day in 2026?
– 30% writing code (Python, mostly glue between APIs and data)
– 20% writing and running evals
– 20% in meetings about prompts, costs, and edge cases
– 15% reading logs from yesterday's failures
– 10% talking to users
– 5% reading papers
Veterans, listen:
The single biggest unlock for transitioning into AI engineering is building in public. One project. One repo. One thread of progress posts.
Recruiters don't read resumes anymore. They read your GitHub and your timeline. Make sure both tell a story.
"How do I stop the LLM from hallucinating?"
You don't. You contain it.
– Ground answers in retrieved documents (RAG).
– Cite sources in the output.
– Add a verification step that checks claims against the source.
– Refuse to answer when confidence is low.
– Log everything so you can audit later.
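The "refuse when confidence is low" step can start this simple. A hypothetical sketch: check what fraction of the answer's words actually appear in the retrieved source, and refuse below a threshold. Real systems use NLI models or LLM-as-judge checks; the names and threshold here are illustrative:

```python
def grounded_enough(answer, source, threshold=0.5):
    """Crude grounding proxy: fraction of answer words found in the source."""
    src = set(source.lower().split())
    ans = answer.lower().split()
    if not ans:
        return False
    hits = sum(1 for w in ans if w in src)
    return hits / len(ans) >= threshold

def answer_or_refuse(answer, source):
    """Return the answer only if it's sufficiently grounded; otherwise refuse."""
    if grounded_enough(answer, source):
        return answer
    return "I can't verify that from the provided documents."
```

Twenty lines of containment beats zero lines of hoping.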
Every AI engineer I know — including the ones at top labs — has felt like a fraud at some point this year. The field moves too fast for anyone to feel "caught up."
The cure isn't catching up. It's shipping.
Build one thing this week. You'll learn more than reading 100 papers.
Three projects every aspiring AI engineer should ship before applying to jobs:
A RAG app over a real dataset you care about. (Not the docs of a tool. A real one.)
An eval harness for that RAG app, with at least 50 test cases.
A simple agent that can take an action in the real world.
Real engineering judgment:
If a regex solves it, use a regex.
If a SQL query solves it, use a SQL query.
If a deterministic function solves it, write the function.
Reaching for an LLM when you don't need one is how you get slow, expensive, unreliable software. Use the right tool.
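Concrete example: pulling email addresses out of a log. Deterministic, instant, free — no tokens billed, no hallucinated addresses:

```python
import re

def extract_emails(text):
    """One regex. No LLM call, no latency, no nondeterminism."""
    return re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
```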
The mistake when learning AI at scale:
You build a slick demo on GPT-4-class models without ever looking at the bill. Then it ships. Then accounting sees the invoice.
Token economics is part of the job. Know your input/output costs per call. Cache aggressively. Use the smallest model that does the job.
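The math is simple enough that there's no excuse not to do it before shipping. A sketch — the per-million-token prices below are placeholders, check your provider's current rate card:

```python
def cost_per_call(input_tokens, output_tokens,
                  in_price_per_m=2.50, out_price_per_m=10.00):
    """Dollar cost of one LLM call. Prices are illustrative placeholders."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# 4k-token prompt, 500-token answer, 100k calls a month:
monthly = cost_per_call(4000, 500) * 100_000
```

Run that before accounting does.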
When do you use what?
Prompt engineering: the answer lives in the model already. You just need to ask the right way.
RAG: the answer lives in YOUR documents. The model has to read them first.
Fine-tuning: you need the model to consistently follow a specific style, format, or domain language.
Here's something I tell every veteran walking into @VetsWhoCode:
The military trained you to test things until they break. Pre-flight checks. Pre-mission rehearsals. After-action reviews. That mindset is exactly what AI engineering is starving for right now.
Nobody talks about evals enough.
If you can't measure whether your LLM app is getting better or worse, you're not engineering. You're vibing.
First eval doesn't have to be fancy. 20 example inputs, expected outputs, a script that scores them. Done. Now you have a feedback loop.
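That first eval script can literally be this. Exact-match scoring is the simplest possible scorer — real evals often need fuzzy matching or an LLM judge, but this gets the loop started (names here are illustrative):

```python
def run_evals(app, cases):
    """app: the function under test. cases: list of (input, expected) pairs.
    Returns (pass_rate, failures) so you can see exactly what broke."""
    passed = 0
    failures = []
    for prompt, expected in cases:
        got = app(prompt)
        if got == expected:
            passed += 1
        else:
            failures.append((prompt, expected, got))
    return passed / len(cases), failures
```

Run it on every change. The number goes up or it doesn't. That's the feedback loop.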
RAG in plain English:
Step 1: take your documents.
Step 2: turn them into searchable chunks.
Step 3: when a user asks a question, find the relevant chunks first.
Step 4: hand those chunks + the question to the LLM.
That's it. That's the trick. Now go build one.
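All four steps, in one function. The `llm` parameter is a stand-in for your model client, and the chunking and search here are the simplest possible versions — paragraph splits and word overlap instead of embeddings:

```python
def rag_answer(question, documents, llm, k=2):
    # Steps 1-2: take your documents, turn them into searchable chunks.
    chunks = [c for doc in documents for c in doc.split("\n\n")]
    # Step 3: find the relevant chunks first (word overlap stands in
    # for vector search in this sketch).
    q_words = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    context = "\n---\n".join(ranked[:k])
    # Step 4: hand those chunks + the question to the LLM.
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```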
The bar for "AI engineer" in 2026 is higher than "senior backend dev" was in 2021.
That's not gatekeeping. That's reality.
The good news: the ramp is also shorter than it's ever been. The Hashflag Stack is 17 weeks — and it's built for veterans with real lives. We've watched it work.
What's the one thing you wish someone told you before you started learning AI engineering?
I'll start: "You don't need a PhD. You need a deployed app, a GitHub repo, and a story about why you built it."
Veterans — drop yours. We're building the VWC curriculum out loud.
The 17-week ramp we use at @VetsWhoCode — The Hashflag Stack:
Phase 1: Foundations. Terminal, Git, JS, Python — by hand, no AI.
Phase 2: Software Engineering. Next.js, FastAPI, real apps.
Phase 3: AI Engineering. LangChain, RAG, evals.
Phase 4: Production Mastery. Ship, monitor, scale.
The fastest way to actually understand what's happening inside the LLMs you're building on?
Build with them every day for 17 weeks under mentors who have real-world experience.
That's the @VetsWhoCode AI Engineering track. No theory-only work. You build, you break, you ship. vetswhocode.io
Veterans have an unfair advantage in AI engineering and nobody talks about it:
– You're trained to operate under ambiguity
– You think in systems and SOPs
– You ship under pressure
– You don't quit when something breaks
Those are the exact traits that separate AI engineers from tinkerers.
The biggest mistake aspiring AI engineers make:
They learn 12 frameworks before they ship 1 thing.
Pick a problem you actually have. Solve it with an LLM. Deploy it. Break it. Fix it.
That single loop teaches more than a year of tutorials.
Hot take: if your entire AI skill set is "I'm good at prompting," you're not an AI engineer. You're a power user.
AI engineers ship systems. That means RAG pipelines, evals, guardrails, retries, observability.
At @VetsWhoCode, that's where we start training. Not at the prompt box.
I run @VetsWhoCode. I've watched hundreds of veterans transition into tech.
Clean code still matters. It's the floor.
The ceiling in 2026 is the engineer who writes clean code AND can take a messy business problem, hand part of it to an LLM, and ship a working answer by Friday.
Both. Always both.
What it actually takes to be an AI Engineer in 2026:
Strong Python fundamentals
Comfort with APIs and HTTP
Tokens, embeddings, context windows
RAG and vector search
Evals — knowing when your model is wrong
Notice "prompt engineering" isn't on the list. It's table stakes.
Did you know @VetsWhoCode teaches AI Governance?
NotebookLM is a great tool to build learning resources like this Fine-Tuning Playbook, where we walk through the process of customizing foundation models for specific tasks.