Posts by Matthieu Napoli
I've been deep into genealogy research for the past year. As I collect more info, I want to render each family in my tree as a nice diagram.
I built a free website that builds family trees and exports them as images: family-tree-maker.mnapoli.fr
It also works with Claude/ChatGPT via MCP
Updates to my /address-pr-review skill:
- fixes are automatically pushed to the PR
- threads are marked as resolved (unless Claude disagrees: it will explain why)
Btw this skill is awesome for letting one Claude review and another Claude fix, highly recommend
github.com/mnapoli/ski...
Configuring Bref to deploy S3 buckets for Laravel storage got much simpler.
S3 is simple, but large file uploads straight to S3 require:
- enabling S3 ACLs (a deprecated feature, but Laravel still uses it)
- enabling CORS
The new settings make it much simpler, here's a before/after.
Yeahhh! Take care of yourself
I wrote a detailed blog post: mnapoli.fr/running-gith...
[Image: bar graph comparing CI times for GitHub-hosted versus self-hosted Mac Mini runners, highlighting significant speed improvements in tests.]
I moved my GitHub Actions to runners on my Mac Mini, my CI just got muuuch faster.
That works for me as a solo dev: I have 4 runners (could probably add 4 more). For teams I wouldn't recommend it tbh; I'd look into Depot CI or Runs-On, that'd make much more sense.
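For reference, pointing a workflow at self-hosted runners is a one-line change in GitHub Actions: `self-hosted`, `macOS`, and `ARM64` are the standard labels GitHub assigns to a self-hosted Mac runner. The job steps below are illustrative, not my actual workflow.

```yaml
# .github/workflows/ci.yml (sketch)
jobs:
  tests:
    # was: runs-on: ubuntu-latest
    runs-on: [self-hosted, macOS, ARM64]
    steps:
      - uses: actions/checkout@v4
      - run: composer install && vendor/bin/phpunit
```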
good thing I posted, I'm changing to this:
Instead of bloating environment variables, I've created an `envMap()` helper in Laravel config.
This is for values that:
- are not secrets
- have hardcoded values depending on the environment
Thoughts?
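I haven't shown the implementation here, but a minimal sketch of what an `envMap()` helper could look like (the signature, the `APP_ENV` lookup, and the usage values are all assumptions for illustration):

```php
<?php
// Hypothetical sketch of an envMap() helper for Laravel config files.
// The real implementation may differ.
if (! function_exists('envMap')) {
    function envMap(array $map, mixed $default = null): mixed
    {
        // Pick the hardcoded value matching the current environment
        return $map[env('APP_ENV', 'production')] ?? $default;
    }
}

// Example usage in config/queue.php (values are illustrative):
return [
    'default' => envMap([
        'local'      => 'sync',
        'production' => 'sqs',
    ]),
];
```

The point is keeping non-secret, per-environment values out of `.env` files, so the environment variables only carry secrets and truly deployment-specific settings.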
When you create a private network (VPC) with Bref Cloud, you can now copy the IP addresses used for outgoing traffic.
e.g. if you need to configure an API to only accept traffic from your app's IP address
Handing Bref on a floppy disk to a GitHub stand at the AWS Summit Paris
with all the supply chain attacks, I'm pushing code to GitHub the secure way
Released exspec 0.3:
- switched from Playwright MCP to CLI -> faster, fewer tokens
- Claude runs the specs as tests, but sometimes a spec is unclear -> Claude now suggests improvements for confusing specs
- more efficient, more logs, more reliable
github.com/mnapoli/exs...
🤣
Thanks @mnapoli.bsky.social 🤩
Supply chain attacks are evolving, stay cautious
I think I've got something wrong somewhere, but I can't pinpoint it for sure 🤔
thanks! if you try it let me know how it goes
I'm still careful and I want to scale up testing the idea, that's why I'm sharing publicly. Try it out in your projects and report back!
- It uses the website UI and nothing else, like a human QA tester -> it cannot edit files, touch the database, or run code
- It reports failure with plenty of details, making it super easy to create issues from failures
- AI is very resilient: it's able to figure things out and recover (e.g. it needs to create a user but the email is already taken -> no problem, it tries again with a different email address)
I am cautious about letting AI "interpret" how to execute each step. But so far I'm pleasantly surprised:
- Claude seems to respect the steps (and not fake it)
To be clear, it comes on top of unit tests, it doesn't replace them. It's great for covering happy paths and important business logic. It helps increase confidence, and also helps specifying new features.
`npx exspec` runs Claude Code under the hood (with your existing Pro/Max subscription) with Playwright, all built-in. So using exspec is as easy as running PHPUnit/Pest/Vitest/… tests, nothing to install.
2. serves as acceptance criteria (Claude/Codex runs them after implementing)
3. serves as regression testing: run the full test suite every night
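Point 3 maps naturally onto a scheduled CI job. A minimal sketch using standard GitHub Actions cron syntax (the file name and schedule are assumptions, and this presumes Claude credentials are available in CI):

```yaml
# .github/workflows/nightly-specs.yml (sketch)
name: Nightly specs
on:
  schedule:
    - cron: '0 3 * * *'  # every night at 03:00 UTC
jobs:
  specs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx exspec  # run the full plain-text spec suite
```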
Write plain text tests, i.e. revive Cucumber/Behat (the "Gherkin" language), BUT without the implementation step. AI runs the tests in a browser, like a human QA tester would.
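For example, a spec could look like this (a hypothetical sketch — exspec's exact file conventions may differ), with no step definitions behind it; the AI interprets each line in the browser:

```gherkin
# signup.feature — hypothetical plain-text spec, no step implementations
Feature: User signup
  Scenario: A visitor creates an account
    Given I am on the homepage
    When I click "Sign up"
    And I fill in the signup form with a new email and a password
    Then I see a confirmation that my account was created
```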
I'm creating exspec → github.com/mnapoli/exspec
Been trying it out for 3 weeks:
1. serves as specs for new features
`npx exspec` CLI with feature tests written in the Gherkin language
[experimental] trying something new with testing, looking for feedback
Problem: coding accelerates, I can't keep up with tests. Either I review everything and we're not moving faster, or I don't and I don't trust the tests.
Idea: executable specs
This is built on Tauri, so it's super lightweight and fast to open.
(and don't @ me about vim or whatever terminal editor, I'm not _that_ old 😂)