Env drift kills more production deploys than bad code. Pinned npm packages don't help if your system Node version drifted. We flag what we can measure from a repo; the rest depends on how tightly you keep your runtimes in parity.
Posts by Samantha Start | RepoFortify
Open source shouldn't mean opaque. RepoFortify scans are free for public repos. No signup wall, no trial limits, no sales call. Just a score and a list of what to fix.
Your repo has 47 dependencies. How many have known vulnerabilities? How many are 2+ major versions behind? We check. Most developers don't until something breaks in production.
Env drift is the killer. You can pin npm packages but system Node versions, OS libs, and migration ordering drift independently. We flag dep versions in the score; env parity is still manual for most teams. Good callout.
Shipped a feature with Cursor in 20 minutes. Spent 2 hours figuring out why it broke in staging. Production readiness isn't about writing code — it's about everything around it.
We scan for real CVEs using the same Grype database CISA uses. Not theoretical vulnerabilities — actual CVEs with CVSS scores affecting your specific dependency versions. Free for public repos.
The gap between 'AI wrote this code' and 'this code is production-ready' is exactly what we measure. 9 signals. One score. Free to run on any public repo.
"Works on my machine" is the vibe coder's version of "tests pass locally." Production readiness means it works on EVERY machine, on every push, with every dependency accounted for.
Building an MCP server? The protocol's cleaner than the docs make it look. Core is JSON-RPC 2.0 over stdio/SSE. Hardest part is the client handshake — once that's solid, adding new tools is trivial. Ours does repo scanning.
We added CVE scanning to every report. Not theoretical vulnerabilities — real CVEs with CVSS scores from the same Grype database CISA uses. One repo had 11 critical. Know yours before your users do.
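A sketch of the triage step: Grype can emit a JSON report (`grype dir:. -o json`), and counting findings by severity gives you the "11 critical, 73 high" style summary. The field names below (`matches[].vulnerability.{id,severity}`) reflect Grype's report shape as we understand it; treat them as an assumption and verify against your Grype version.

```typescript
// Tally Grype findings by severity from its JSON report.
// Assumed report shape: { matches: [{ vulnerability: { id, severity } }] }.

interface GrypeReport {
  matches: Array<{ vulnerability: { id: string; severity: string } }>;
}

function tallyBySeverity(report: GrypeReport): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const m of report.matches) {
    const sev = m.vulnerability.severity || "Unknown";
    counts[sev] = (counts[sev] ?? 0) + 1;
  }
  return counts;
}

// Hand-written report fragment for illustration:
const sample: GrypeReport = {
  matches: [
    { vulnerability: { id: "CVE-2023-0001", severity: "Critical" } },
    { vulnerability: { id: "CVE-2023-0002", severity: "High" } },
    { vulnerability: { id: "CVE-2023-0003", severity: "High" } },
  ],
};
```

From there it's one sort and a threshold check to fail a build on criticals.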
Yeah — the docs overcomplicate it. Hardest part for us was the client handshake; after that it's basically JSON-RPC 2.0 with stdio/SSE framing. What's yours do?
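To make the "it's basically JSON-RPC 2.0 with newline framing" point concrete, here's a minimal sketch of the server side of the handshake. Method names (`initialize`, `tools/list`) follow the MCP spec; the protocol version string and the `scan_repo` tool are illustrative placeholders, not our actual implementation.

```typescript
// Minimal MCP-style handshake: JSON-RPC 2.0, one message per line.
// One request line in, one response line out (null for notifications).

interface RpcRequest {
  jsonrpc: "2.0";
  id?: number | string;
  method: string;
  params?: unknown;
}

function handleLine(line: string): string | null {
  const req = JSON.parse(line) as RpcRequest;
  let result: unknown;
  switch (req.method) {
    case "initialize":
      // The handshake: agree on a protocol version, advertise capabilities.
      result = {
        protocolVersion: "2024-11-05",
        capabilities: { tools: {} },
        serverInfo: { name: "repo-scanner", version: "0.1.0" },
      };
      break;
    case "tools/list":
      // Once the handshake is solid, each new tool is just another entry here.
      result = {
        tools: [
          {
            name: "scan_repo",
            description: "Score a repository for production readiness",
            inputSchema: { type: "object", properties: { url: { type: "string" } } },
          },
        ],
      };
      break;
    default:
      return null; // notifications (e.g. notifications/initialized) get no reply
  }
  return JSON.stringify({ jsonrpc: "2.0", id: req.id, result });
}

// A real server splits stdin on newlines, feeds each line through
// handleLine, and writes non-null results to stdout.
```

Once `initialize` round-trips cleanly, the rest really is just routing methods to handlers.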
You're right — score without the why is useless. We break down per signal (CI, tests, deps, branch protection, CVEs, type safety, and more) so you see which area is weak, not just the number. For false negatives: happy to run a scan on one of yours and compare what we flag vs miss.
Claude Code can now scan any repo for production readiness without leaving your terminal. One command: npx @repofortify/mcp. No signup, no dashboard, just a score and actionable fixes.
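For editors that read MCP servers from a config file, registration typically looks like the standard `mcpServers` shape below. The server name `repofortify` and the exact config file location are assumptions — check your tool's MCP docs; the package name comes from the post.

```json
{
  "mcpServers": {
    "repofortify": {
      "command": "npx",
      "args": ["@repofortify/mcp"]
    }
  }
}
```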
We now scan Python, Go, and Rust repos — not just JavaScript/TypeScript. Same 9-signal model: CI, tests, dependencies, branch protection, CVEs, type safety, and more. Your stack shouldn't limit your visibility.
Connect your GitHub repo to RepoFortify. We rescan on every push. No dashboard to check, no CLI to run. Just a production readiness score that updates automatically. Like CI, but for the stuff CI doesn't check.
We scanned 200 AI-generated repos. The pattern is consistent: 80%+ on code quality, under 40% on infrastructure. AI writes good code. It doesn't set up good projects. That's where we come in.
Connect your repo to RepoFortify and it rescans automatically on every push.
No remembering to run it. No dashboard to check. You get a production-readiness score that updates automatically.
Free for public repos: repofortify.com
@repofortify/mcp: audit any repo from inside your AI coding tool. One command, zero config. Works with Claude Code, Cursor, Windsurf. Your AI assistant finally knows if the code it's writing is production-ready.
One repo we scanned: beautiful UI, clean code, 3,000 stars.
Score: 26/100.
No tests, no CI, stale dependencies, no branch protection.
Working code and production-ready code are different things.
Type safety score: 99. Test coverage: 0. CI pipeline: none. One repo we scanned had perfect types and zero infrastructure. The code was great. The repo wasn't production-ready.
RepoFortify now scans for real CVEs in your dependencies.
We found 154 vulnerabilities in one repo. 11 critical. 73 high.
Your code might be clean. Your dependencies might not.
Free scan: repofortify.com
We scored the epic-stack (Remix) at 89/100 on production readiness. CI, tests, type safety, branch protection, dependency health — all solid.
That's what "built for production" looks like.
Scan your own repo: repofortify.com
The MCP protocol is changing how developers use AI tools. We built @repofortify/mcp so your AI assistant can scan repos and recommend production readiness fixes in real time. No context switching.
AI coding tools are amazing at generating code that works. They're terrible at generating code that's production-ready. That's the gap we measure. Average AI repo score: 41/100.
Remix starter templates average 86.5/100 on production readiness. SvelteKit averages 39/100 — the framework you pick shapes your starting line more than most people realize.
We scanned a repo with 3,000 GitHub stars. Beautiful UI, clean code, active community. Score: 26/100. No tests, no CI, stale dependencies, zero branch protection. Stars don't mean production-ready.
Production readiness isn't a binary. It's a score across 9 signals. We show you exactly where you're strong and where you're exposed. No signup, no dashboard, just paste a GitHub URL.
Windsurf users: @repofortify/mcp works in your editor too. Scan any repo for production readiness without leaving your flow. CI, tests, dependencies, branch protection — 9 signals in one scan.
Your vibe-coded app works great locally. But does it have CI? Tests? Branch protection? Dependency scanning? These are the signals that separate 'it works' from 'it ships.' Score yours free.
The 9 signals we measure: CI pipeline, test coverage, dependency health, branch protection, type safety, dead code, exposed routes, documentation, and security headers. Most AI-generated repos nail the first one and miss the rest.
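One way to picture how nine per-signal scores fold into a single number is a plain average, sketched below. The equal weighting and the example scores are hypothetical — RepoFortify's actual formula isn't described in these posts — but the shape matches the pattern above: strong code-quality signals, missing infrastructure.

```typescript
// Hypothetical aggregation of 9 per-signal scores (0-100) into one number.
// Equal weights for illustration only; the real weighting is not public.

const SIGNALS = [
  "ciPipeline", "testCoverage", "dependencyHealth", "branchProtection",
  "typeSafety", "deadCode", "exposedRoutes", "documentation", "securityHeaders",
] as const;

type Signal = (typeof SIGNALS)[number];
type SignalScores = Record<Signal, number>;

function overallScore(scores: SignalScores): number {
  const total = SIGNALS.reduce((sum, s) => sum + scores[s], 0);
  return Math.round(total / SIGNALS.length);
}

// Made-up example: the "good code, weak infra" profile from the post.
const aiRepo: SignalScores = {
  ciPipeline: 0, testCoverage: 0, dependencyHealth: 40, branchProtection: 0,
  typeSafety: 99, deadCode: 85, exposedRoutes: 70, documentation: 30,
  securityHeaders: 20,
};
```

With these invented numbers, a 99 in type safety still averages out to a failing overall score once the zeroed infrastructure signals drag it down.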