What did I miss?
And if you've been through the Chrome Store review process, what's your number one tip?
5/ Your privacy policy needs to be bulletproof.
Reviewers scrutinize this. Be explicit.
What data do you collect?
Where does it go?
How long do you keep it?
Is any of it shared with third parties?
If you can't answer all four clearly, your policy isn't ready.
Be short, specific, and honest.
4/ Testing across Chrome profiles matters.
Reviewers can test fresh installs, incognito mode, and different permission levels. Edge cases you might have missed.
Not just the working path you've been running locally.
So test like a reviewer, not like a user. Try breaking it.
3/ Your web store screenshots and description are your first impression.
Reviewers see these before they even run your extension.
If it looks unclear, they're already skeptical.
Show real use cases. Make it obvious what your extension does.
Clarity = faster approval and more installs.
2/ Your manifest and permissions matter more than your code.
Reviewers aren't auditing your logic, they're auditing your permissions.
Bad code = slower performance.
Bad manifest = rejection.
Every unnecessary permission is a bad signal.
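One way to act on this before submitting: a quick self-audit of the permissions you request. A minimal sketch in Python; the RISKY set and the example manifest are illustrative assumptions, not Chrome's official policy list.

```python
# Sketch: flag the permissions a reviewer is likely to question.
# The RISKY set below is an illustrative assumption, not an official list.

RISKY = {"tabs", "history", "cookies", "webRequest", "<all_urls>"}

def audit_manifest(manifest: dict) -> list[str]:
    """Return requested permissions worth justifying or dropping."""
    requested = manifest.get("permissions", []) + manifest.get("host_permissions", [])
    return sorted(p for p in requested if p in RISKY)

# Hypothetical manifest for illustration only.
manifest = {
    "manifest_version": 3,
    "name": "Example Extension",
    "permissions": ["storage", "tabs"],
    "host_permissions": ["<all_urls>"],
}

print(audit_manifest(manifest))  # every hit is a permission to justify or drop
```

If the audit flags something you can't justify in one sentence, remove it before a reviewer asks.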
1/ The Chrome Web Store review process is slower than you think.
Reviews rarely happen in 24 hours.
If your extension touches user data in any way, it could be 3+ days before you're approved.
Plan the week around your release, not just the day of submission.
5 things I wish I knew before submitting an extension to the Chrome Store:
#buildinpublic #indiedev #webdev
Exactly, I ban what I already know to be common AI traits.
Then I let the user expand on that with any specific constraints they feel give them better output.
I agree a perfect persona can still sound like a chatbot. But in my case, the user has a personalisation page, and that raw data is fed into the prompt when generating an idea for the user.
The persona is only 1 element of the overall prompt.
I'm building all of this into The Daily Ship.
A tool that generates post ideas in your voice using your profile, git commits and chosen trends.
Implementing these features genuinely changed the outputs for the better.
I find myself saving post ideas before I've even finished building the app.
5/ Per generation overrides
Your user's profile should stay constant.
But each generation might need a different customization.
Profile = who they are
Per generation settings = what they want right now
This way users can generate completely different outputs without touching their profile.
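As a sketch, the merge can be as simple as layering overrides on top of the stored profile. The field names here are hypothetical, not from a specific API:

```python
# Sketch: the stored profile never changes; per-generation overrides
# win only for this one generation. Field names are hypothetical.

def generation_settings(profile: dict, overrides: dict) -> dict:
    return {**profile, **overrides}

profile = {"tone": "casual", "length": "short", "audience": "indie devs"}
settings = generation_settings(profile, {"tone": "technical"})

print(settings["tone"])  # prints "technical": the override applies here
print(profile["tone"])   # prints "casual": the stored profile is untouched
```

The dict spread keeps the override purely local: nothing is written back, so the next generation starts from the clean profile again.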
4/ Negative constraints
Telling the AI what NOT to do is more important than telling it what to do.
AI has default writing habits. If you don't ban them, they show up every time.
Let users define what they don't want, then manually ban the most common ones you find yourself.
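A minimal sketch of combining a hand-picked default ban list with user-defined constraints. The default phrases below are placeholders for whatever tells you've found, not a definitive list:

```python
# Sketch: merge default bans with user-defined ones, deduplicated.
# DEFAULT_BANS is illustrative, not a definitive list of AI tells.

DEFAULT_BANS = ["delve", "game-changer", "in today's fast-paced world"]

def negative_constraints(user_bans: list[str]) -> str:
    bans = DEFAULT_BANS + [b for b in user_bans if b not in DEFAULT_BANS]
    lines = "\n".join(f"- {b}" for b in bans)
    return f"Never use these words or phrases:\n{lines}"

print(negative_constraints(["synergy"]))
```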
3/ Anti-repetition
AI has no memory between generations.
Without context, it repeats itself.
So I feed the user's recent outputs back into each prompt with instructions to:
• use different opening words
• vary the sentence structure
• contrast the tone and angle
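The feedback loop above can be sketched as a small prompt block. The wording and the limit of three recent outputs are my assumptions for illustration:

```python
# Sketch: feed the last few outputs back in with anti-repetition rules.

def anti_repetition_block(recent_outputs: list[str], limit: int = 3) -> str:
    recent = "\n".join(f"- {o}" for o in recent_outputs[-limit:])
    return (
        "Recent outputs (do not repeat these):\n"
        f"{recent}\n"
        "Use different opening words, vary the sentence structure, "
        "and contrast the tone and angle."
    )

block = anti_repetition_block(["post about auth", "post about SEO"])
print(block)
```

Capping the list matters: dumping every past output back in would crowd out the rest of the prompt.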
2/ Compiled summaries
Don't dump raw user data into prompts.
Instead, use AI to compile detailed inputs into something structured.
In the prompt, the raw data reinforces the summary, but the summary is the source of truth.
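A sketch of that two-step shape. Here `llm` is a stand-in for whatever completion call you actually use, and the prompt wording is illustrative:

```python
# Sketch: compile raw profile data into a summary once, then build the
# prompt section with the summary as the source of truth.
# `llm` is a placeholder for your real completion call.

def compile_summary(raw_profile: dict, llm) -> str:
    return llm(f"Condense this writing profile into 5 short bullets:\n{raw_profile}")

def profile_section(summary: str, raw_profile: dict) -> str:
    return (
        f"Profile summary (source of truth):\n{summary}\n\n"
        f"Raw details (reference only):\n{raw_profile}"
    )

# Example with a fake model call standing in for the real one:
fake_llm = lambda prompt: "- writes short, direct posts"
raw = {"style": "short", "topics": ["auth", "SEO"]}
section = profile_section(compile_summary(raw, fake_llm), raw)
```

The summary is compiled once and cached; only the cheap `profile_section` assembly runs per generation.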
1/ Prompt section ordering
LLMs focus on the start and end of a prompt.
The middle can get diluted.
So I structure every prompt the same way:
Top โ who the user is and how they write.
Middle โ constraints and context.
Bottom โ the task to complete.
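The top/middle/bottom ordering can be sketched as a single join. The section contents are illustrative; the point is only the order they are assembled in:

```python
# Sketch of the prompt ordering above. Contents are illustrative.

def build_prompt(identity: str, constraints: str, context: str, task: str) -> str:
    return "\n\n".join([
        identity,     # top: who the user is and how they write
        constraints,  # middle: rules the model must follow
        context,      # middle: supporting context
        task,         # bottom: the task, the last thing the model reads
    ])

prompt = build_prompt(
    identity="You write as a solo dev: short sentences, plain words.",
    constraints="Never use hashtags mid-sentence.",
    context="Recent work: shipped the onboarding flow.",
    task="Draft one post idea about today's progress.",
)
```

Keeping the task last exploits the same recency effect the tweet describes: it is the freshest thing in the model's context when generation starts.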
The best way to humanize AI outputs?
Give the AI a better identity to work from.
Here's what I found actually matters for making AI sound like you:
#buildinpublic #indiedev #AI
After a lot of trial and error, I've found this to be the best method. It's the exact process I use for my current project:
The Daily Ship
A tool to help devs post content consistently.
It uses your customized profile, git commits and chosen trends to generate daily expandable ideas.
6/ Audience validation
Before writing a line of code, I post about it somewhere.
A post explaining the idea and the problem it solves.
If nobody cares, that's data. If people are interested, that's also data.
Something about saying it out loud makes it real.
It feels harder to quit.
5/ Tech stack
I want to know what tool makes sense for each specific part and why.
Things like:
- Auth
- SEO optimization
- Database structure
- Payment handling
- Third-party integrations
This saves me from making decisions on autopilot and ending up with a bad setup halfway through.
4/ High-level plan
If the idea survives, give it some shape.
I use AI to hash out a basic plan.
How the app works, how it can be different, the core user flow, monetization methods.
Not in depth on anything yet, most of it will change later.
Just enough to understand what you're walking into.
3/ Competitor research
I use AI to find existing solutions and break down the differences and gaps in those solutions.
Then do my own digging on Product Hunt, Reddit, X, Indie Hackers, etc.
If the space is too crowded, I'd rather know now than 3 weeks into building.
2/ Get a rough overview
Once the AI knows how I'm thinking, I ask for a basic walkthrough of the idea.
I want to know if this idea actually makes sense.
Is the problem real?
Is the solution logical?
Is there a better way?
A quick sanity check before I invest more time.
1/ Prime the AI
Before I ask anything, I give the AI full context on how I'm thinking.
My stack, my goals, how I like to build, who the app is for and specifically what problem it's solving.
I've found skipping this creates large gaps in the AI's understanding of your project.
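The priming step can be sketched as a single context message assembled up front. Every field value below is a placeholder for your own project details:

```python
# Sketch: a priming message built from the context listed above.
# All field values are hypothetical placeholders.

def priming_message(stack: str, goals: str, style: str,
                    audience: str, problem: str) -> str:
    return (
        f"Stack: {stack}\n"
        f"Goals: {goals}\n"
        f"How I build: {style}\n"
        f"Who it's for: {audience}\n"
        f"Problem it solves: {problem}\n"
        "Ground every answer in this context."
    )

msg = priming_message(
    stack="Next.js + Postgres",
    goals="ship an MVP in 4 weeks",
    style="small vertical slices, no premature abstraction",
    audience="indie devs posting daily",
    problem="staying consistent with content",
)
```

Sending this once at the start of the conversation means every later question is answered against your constraints instead of generic defaults.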
The 6 steps I use with AI to go from idea to actual plan for a new project:
#buildinpublic #indiedev