They work tirelessly to fill your feed with crap and to make it all feel inevitable and hopeless.
Posts by David Rice
This thread 💯. Not to mention they go out there and warn it could all go so badly, then profit off the rhetoric. They get fired or retire, then write books about how we're all screwed because they built it so well. Meanwhile, AI companies buy up a quarter of all Super Bowl ads. SF is covered in AI ads.
We talk a lot about keeping humans in the loop. Might also be worth keeping #HR in the loop.
peoplemanagingpeople.com/hr-strategy/...
A room full of HR leaders at Transform were asked who owns AI literacy in their org. The answers: L&D. IT. Everyone. And then, quietly — no one. That's a structural problem.
In the builder sessions the question was: how do we get organizations to adopt faster? In the HR sessions the question was: who is accountable when something goes wrong? Neither side has fully heard the other's question.
These events allowed me to move between the builders and the CHROs. The gap between them is the most important story in enterprise AI right now.
I spent time at four conferences this spring, all heavily focused on AI: two more technical, two through the people lens. #HumanX and #Transform were the two most notable. 🧵
Enterprise governance frameworks were built for AI that advises. Agentic AI acts, transfers, authenticates, and calls back without human review at each step. The builders know governance is behind. They're deploying anyway. New piece on what that means.
peoplemanagingpeople.com/hr-operation...
If you're an HR leader and you left HumanX thinking AI governance is someone else's problem, it isn't. It never was. It's just finally obvious. More disturbing: these conversations are still happening in silos, because if you're an HR leader, you probably weren't even here. More on that later tho.
Three days. A lot of frameworks. A lot of urgency. One thing that actually stuck: the companies moving thoughtfully are outpacing the ones moving fast. Speed plus clarity works. Speed plus chaos is just chaos with a demo.
Credo AI has catalogued 1,600 known AI risks. They have mitigations for 85% of them. Nobody asked about the other 15%.
The CEO of Zensai said he's never once seen a CHRO, CTO, and CFO in the same room together. Every major AI decision lives exactly at that intersection. Make of that what you will.
Best governance take of the day: only 40% of AI vendors are making it into production at Fortune 500 companies now. Last year was "let's try everything." This year is "can I actually trust this in front of my customers?" The vibe-coded startup era of enterprise sales is ending.
A founder told the room that 60% of the people in the workflows he automates are no longer needed after deployment. He was being honest more than cruel. The companies doing this well are redeploying that capacity into work that never got done before. The ones doing it badly are just cutting.
The CEO of Dataminr put it plainly: his platform is on the wall of the White House Situation Room. His position on AI autonomy? The machine builds the picture. The human pulls the trigger. He called it "autonomous intelligence vs. autonomous action." Write that one down for your PR campaigns.
Day 3 at #HumanX. Final day. Here's what's rattling around in my head. 🧵
Two quotes that feel a bit reactionary on the surface. But when you look at what's going on with resumes, it's a natural evolution, because the hiring process is changing so much. Trust is now the product you're after when hiring:
“The skills involved in #HR right now are drastically different than what it was 20 years ago. So it’s going to be about those who can adapt.”
“HR is gonna be focused on enterprise risk. That’s different than what most HR people have done traditionally, so they’re going to have to change quickly.”
By the end of the year, 1 in 4 candidates will be fake. That's how a session called Trust No Resume opened. The CEO of talent intelligence platform Phenom said they flagged 12,000 new fraud indicators in the last year. I'm left wondering how reliable that many indicators can be.
We don’t keep humans in the loop because it’s efficient. We keep them there because accountability, judgment, and consequence all require a conscience.
Speed is not a good enough reason to remove the one thing AI doesn’t have.
One of the things I’ve heard at #HumanX is AI governing AI. And it makes sense because humans simply can’t keep up with what #AI does.
But I also think keeping humans in the process with AI is a values problem, not a speed problem.
Good to remember we’re more than what we do for money, whether that's punching a clock, building something revolutionary, or reimagining work. We owe it to ourselves to claim parts of our lives just to live. As I’ve gotten older, I’ve gotten better at recognizing when my fight is done for the day.
Summary of where early-stage AI money is going in 2026:
→ SF (obviously)
→ Founders with good origin stories and no lumbar support
→ Companies that can double in value before your coffee gets cold
→ Autonomous boats (yes, really)
We live in the future. It's unevenly distributed and overfunded.
One panelist declared the "SaaSpocalypse" narrative is overdone.
Companies are reinventing themselves. Going AI-native. Scrapping everything and rebuilding.
So basically: your enterprise software vendor didn't fail. They needed to start over. Totally normal. Very reassuring.
The trait VCs actually want in founders, beyond the pitch deck:
Grit. Resilience. Never quitting.
Which tracks. Because if your valuation just doubled in 14 days and you still have negative gross margins, quitting would be the rational choice.
On valuations doubling in two weeks:
One fund invested at $10M. Two weeks later, same company: $25M.
This is called "price discovery."
In any other industry it's called "we should've read the term sheet more carefully."
Good news for the sartorially ambitious: Silicon Valley might be getting... fashionable?
The crowd voted mostly against it.
The hoodie industrial complex remains intact. Patagonia stock is safe.
Big theme: Founders need a "superhero origin story."
Not making this up. A VC literally asked: "What murdered your parents and made you Batman?"
The bar for Series A is apparently childhood trauma and tragedy.
Favorite quote of the session:
"If you're building a startup and you don't have back pain, you should go bigger."
Finally. A KPI I'm already nailing right now.
The real debate: should you back founders under 25 or over 35?
Under 25: AI-native, vibe-codes at 2am, no back pain.
Over 35: Knows what a P&L is.
One VC literally said young founders survive sleeping in 3-hour cycles to babysit their AI agents.
Sounds like a psychological condition.