I have a huge personal announcement to share next week. I'll be joining an amazing organization in a fantastic role. Stay tuned!
Posts by Tony Alicea
I talk about a framework for tackling this problem: researched "lenses" and designed "projections" that scale human judgment without asking reviewers to read everything line by line.
I believe this is one of the big design challenges of the AI age.
tonyalicea.dev/blog/the-eva...
[Image: white text on a blue background — "The Evaluability Gap: Designing for Human Review of AI Output"]
AI can produce faster than humans can evaluate. That gulf is widening, and it's causing burnout, apathy, and missed errors at scale.
This isn't a skill problem. It's a design problem.
In my new write-up, I'm calling it the evaluability gap. 👇
Hot take: sometimes it's better to do things yourself rather than have the AI do it.
Yes, AI is dramatically increasing output, and that's being called an increase in productivity. I argue that it is not.
Productivity was never "output quantity". It still isn't, no matter what AI overhype says.
LLMs are burnout machines. Make sure you have personal guiderails in place to protect yourself.
Your velocity does not need to match the endless token-generating probability engine.
“More features” was never the answer. Thus “more code” and “more merges” aren’t the answer either.
Figuring out the right problem to solve for your users, and using AI assistance to implement it quickly with a performant, usable, and tested design…now that’s an answer.
A student pointed out a YouTube JS course that is a ripoff of my JS course. Not “teaches similar content” but a clear ripoff, sometimes word for word.
Looks like the course is the foundation of their channel. This person now has 2 million followers. *sigh*
Saying "we use AI to rapidly implement feature requests" is another way of saying "we don't think hard about the features we build".
LLMs are definitely great tools that allow us to ship software faster.
Shipping good software full of good ideas...well, that's another matter entirely.
It was always a psychological phenomenon, not a technical one.
A self-identity based around "I am good at this" + a strong "I care what other people think of me" = gatekeeping in whatever direction the door swings.
Whichever reality lets you maintain your self-belief.
Developers are more productive, but more exhausted, than ever. Why?
I'm calling this phenomenon "single-mode burnout." 🧠 Read the full write-up in my new blog post. 👇
Anyone else get to a place sometimes where you're waiting on the AI, so you feel like you should be doing something else — but if you do, the context switching will degrade your flow?
There needs to be a name for this, like "vibe paralysis".
Having AI agents mimic Agile/Scrum ceremonies is like seeing the invention of the car and deciding the best thing to do is hitch your horse buggy to it.
It takes more skill, awareness, and self-control to build less software rather than more.
Solving only the right problems, paying down debt, caring about security, doing user research: all of these result in less software.
In the age of AI it’s easy to build more. It takes skill to build less.
Usability engineering and ergonomics are still skills. So is solving the right problem the right way. So are reliability, security, and performance.
None of these can be coded by just anyone.
Very cool! Can I recommend improving the semantics/accessibility of the HTML? Would be nice to encourage good practice.
What if teams could comment and compare vibe coded interactive prototypes, and the comment stored the state of the prototype when the comment was made?
Demo👇
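A minimal sketch of the idea above: a comment that stores a deep copy of the prototype's interactive state at the moment it was written, so reviewers can later restore exactly what the commenter saw. All names here (`PrototypeState`, `StatefulComment`, `createComment`) are illustrative assumptions, not taken from any real tool or from the demo.

```typescript
// Hypothetical sketch — a comment that snapshots prototype state.
type PrototypeState = Record<string, unknown>; // e.g. current screen, form values, toggles

interface StatefulComment {
  id: string;
  author: string;
  body: string;
  createdAt: string;        // ISO timestamp
  snapshot: PrototypeState; // state captured when the comment was made
}

let nextId = 0; // simple local id generator for the sketch

// Capture the live prototype state alongside the comment text.
function createComment(
  author: string,
  body: string,
  liveState: PrototypeState
): StatefulComment {
  return {
    id: String(++nextId),
    author,
    body,
    createdAt: new Date().toISOString(),
    // Deep copy so later interaction with the prototype doesn't mutate the snapshot.
    snapshot: structuredClone(liveState),
  };
}

// Restoring a comment's snapshot would re-render the prototype in that exact state,
// letting teams compare two comments by comparing their snapshots.
const comment = createComment("ana", "This toggle feels hidden", {
  screen: "settings",
  darkMode: true,
});
console.log(comment.snapshot.screen); // prints "settings"
```

The key design choice is that the snapshot is immutable and travels with the comment, so "what state was the prototype in?" never has to be reconstructed from memory.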
Properly educating the next generation of developers is one of the biggest challenges for our industry today.
Is your org teaching junior devs? What do you think is working? What isn’t?
I've seen multiple companies recently adopting a "teaching hospital" approach for their developers. New developers are "interns" (not juniors) and a developer educator (i.e. in-house mentor) is a position in the company.
The industry needs *more* of this approach.
Blog post: tonyalicea.dev/blog/trace-d...
GitHub: github.com/anthonypalic...
I'm pleased to present Trace, a markdown-based declarative modeling approach to spec writing for LLM consumption.
It's designed both for experienced devs and to give new devs a focus on *what* to learn (thinking in systems) in the age of AI.
Blog post and GitHub repo: 👇
You can teach an LLM the technique, and use it to iterate on your prototypes (Figma, vibe-coded, etc.).
Read the book free here: dontimitate.dev/normalui
Let me know what you think!
I'm making my book Normal UI free to read online! The book is the result of decades of UX work and thousands of usability tests.
It's a simple technique that any designer or developer can learn that improves the usability of software applications. 👇
AI can’t replace humans, period. You need humans to educate. But I think computers and AI can be an engaging canvas for good instructors to build learning experiences around.
Having students sit and just “learn” from chatting with AI is foolish though.
[Image: an AI computer chip with a graduation cap, next to the text "The Future of Self-Paced Online Education by Tony Alicea"]
What is the future of self-paced education in the age of AI? I've been experimenting with designing LLM-based learning experiences from the ground up, and wrote up my thinking on what I'm calling a Learning Surface.🌊