A big CLAUDE.md is a code smell.
Split your instructions: code-style.md, testing.md, security.md.
Import them from root.
One concern per file. Easy to update. Easy to maintain.
Posts by Alex Rusin
The compounding effect is real.
20 minutes of setup → Claude never misuses your test framework, never puts DB queries in controllers, never logs sensitive data.
Every session. Forever.
That's not a small win. That's hundreds of saved corrections over a year.
youtu.be/5PeXlVWkz3g
A few rules that make this system actually work:
✅ One concern per rule file
✅ Bullet points, not paragraphs
✅ Never put secrets in CLAUDE.md
✅ Audit /memory every few weeks
✅ Use subdirectory CLAUDE.md for monorepos
/memory is your personal layer.
Tell Claude: "remember I prefer short imperative commit messages."
It saves locally. Persists across sessions. You can view, edit, or delete anything it's stored.
Instructions = team. Memory = you.
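A session using this might look something like the sketch below (prompts are examples; exact UI output varies):

```
You: remember I prefer short imperative commit messages
Claude: Saved to your user memory.

You: /memory
Claude: (opens your stored memory files to view, edit, or delete)
```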
The team superpower: commit those rule files to Git.
Every dev who clones the repo gets the same Claude behavior. Automatically.
No setup. No onboarding docs. No "why is Claude doing that for you but not for me?"
For bigger codebases, don't stuff everything into one file.
Split it:
→ .claude/code-style.md
→ .claude/testing.md
→ .claude/security.md
Root CLAUDE.md imports them all. One concern per file. Clean. Scalable.
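A root CLAUDE.md that pulls in the split files can use Claude Code's @path import syntax; the file names below match the post:

```markdown
# CLAUDE.md

Project rules live in focused files, one concern each:

@.claude/code-style.md
@.claude/testing.md
@.claude/security.md
```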
CLAUDE.md β one file at your repo root.
Write your stack, your conventions, your architecture rules. Once.
Claude reads it automatically at the start of every session. No prompting. No repeating. It just knows.
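A minimal CLAUDE.md sketch; the stack and rules here are placeholders, so swap in your own:

```markdown
# CLAUDE.md

## Stack
- Node 20, TypeScript, Express, PostgreSQL

## Conventions
- Tests: Vitest only, never Jest
- DB queries live in repository classes, never in controllers
- Never log tokens, passwords, or PII

## Architecture
- Controllers -> services -> repositories; no layer skipping
```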
If you use Claude Code and you're still re-explaining your tech stack every session, you're missing the feature that changes everything.
Your AI shouldn't need onboarding every morning. 💡 Claude Code's instruction files persist across every session. No prompting required.
Set up CLAUDE.md today. Stop repeating yourself tomorrow.
youtu.be/5PeXlVWkz3g
I made a full step-by-step video on this, covering all 5 types with a live demo.
Watch it here: youtu.be/6-QsbsHAN4g
Copilot that actually understands your project. Responses that match your team's standards. Less back-and-forth. More shipping. 🚀
Best practices:
✅ Be specific, not vague
✅ Use bullet points
✅ Don't include sensitive data
✅ Update as your project evolves
✅ Avoid conflicts between files
There are 5 types of custom instructions:
β’ Repository-level
β’ Path-specific
β’ Personal (per user)
β’ Organization-wide
β’ Local (your machine only)
Each one serves a different purpose.
It gets more powerful.
Path-specific instructions let you set different rules for different file types.
TypeScript files → one set of rules. Python files → another.
All automatic. 🎯
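In VS Code, path-specific rules typically live in a file like .github/instructions/typescript.instructions.md (the name and rules here are hypothetical) with an applyTo glob in its front matter:

```markdown
---
applyTo: "**/*.ts"
---

Use strict TypeScript: no `any`; prefer `unknown` plus narrowing.
```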
The simplest way to start:
Create .github/copilot-instructions.md in your repo.
Write your stack, conventions, test frameworks.
Commit it.
Done. ✅
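A starter .github/copilot-instructions.md can be as small as this (the stack details are placeholders):

```markdown
# Copilot instructions

- Stack: React + TypeScript, Node 20 backend
- Style: functional components, named exports
- Tests: Vitest + Testing Library; colocate *.test.tsx with source
```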
GitHub Copilot Custom Instructions let you give Copilot persistent context, automatically injected into every request.
No more "I use TypeScript" or "follow our team's style guide" every single time.
If you use GitHub Copilot and keep typing the same context in every chat...
There's a better way. 🧵
Are you getting the most out of GitHub Copilot?
youtu.be/6-QsbsHAN4g
That is correct. SQS + Lambda is also a very powerful setup for asynchronous microservices.
Full tutorial (under 10 mins) is live on YouTube.
We go from zero → fully running MCP server → GitHub Copilot fetching live NASA images via GraphQL.
Watch here: youtu.be/48OYp9JqoJQ
Then I asked: "Search for two pictures of Mars."
Copilot picked the right tool. Ran the query. Returned the images.
I opened them in the browser. Two Mars photos. Instantly.
This is what AI-native developer tooling actually looks like in 2026.
The demo that blew my mind:
I asked Copilot: "Show me today's NASA image of the day."
It found the right tool. Called the GraphQL API. Returned the image URL. Opened it in my browser.
All without me writing a single line of integration code.
Here's the setup in plain terms:
→ Install Rover CLI
→ Scaffold an Apollo MCP Server
→ Connect REST endpoints via Apollo Connectors
→ Save operations as MCP tools in Apollo Studio
→ Add mcp.json to VS Code
→ Done.
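The mcp.json step might look roughly like this in .vscode/mcp.json. The "servers" shape is VS Code's MCP config format, but the server name, command, and argument below are placeholders; check the Apollo docs for the exact binary and flags:

```json
{
  "servers": {
    "apollo-mcp": {
      "type": "stdio",
      "command": "apollo-mcp-server",
      "args": ["config.yaml"]
    }
  }
}
```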
Why GraphQL?
Because GraphQL defines the relationships between your data explicitly.
Your AI doesn't have to guess how to connect information. It just knows.
That makes LLM behavior more predictable, safer, and easier to debug.
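For example, a schema declares how entities connect, so a client (or an LLM picking tools) never has to infer the joins. The types here are illustrative:

```graphql
type Mission {
  id: ID!
  name: String!
  images: [Image!]!   # the relationship is declared, not guessed
}

type Image {
  url: String!
  title: String
}

type Query {
  mission(id: ID!): Mission
}
```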
Apollo MCP Server takes this further.
It wraps your GraphQL API and exposes your operations as MCP tools β automatically.
AI clients like Copilot, Claude, and Cursor can discover and use your tools without any extra configuration.
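A saved operation exposed as an MCP tool might look like this sketch; the operation and field names are made up to match the spirit of the NASA demo:

```graphql
# Saved in Apollo Studio as an MCP tool; the AI client supplies $count
query MarsPictures($count: Int!) {
  marsPictures(limit: $count) {
    url
    title
  }
}
```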