This summer I will take over as Vice Dean in charge of Wharton's MBA for Executives program (WEMBA).
I agreed to the role because there's no better place to address the challenges - and opportunities - that AI and other developments pose for jobs, business broadly, and education.
Posts by Kevin Werbach
With input from a diverse group of experts, the Stablecoin Toolkit provides a foundation to understand what stablecoins are, how they function, and how they relate to other forms of money and payments. It offers guidance for policy-makers in evaluating the evolving sector.
whr.tn/stablecointo...
There are many reports on stablecoins from market participants and consultants. The Stablecoin Toolkit fills a gap by offering a balanced, academically informed perspective, and tackling fundamental questions about the nature and distinctiveness of stablecoins.
With the passage of the GENIUS Act and other rules globally, stablecoin activity has skyrocketed. Yet the landscape of stablecoins in use today is much broader than any of them contemplates. And the stablecoin ecosystem is broader still.
We are delighted to announce the release of the Wharton Stablecoin Toolkit: Financial and Market Dimensions.
This report provides a comprehensive overview of the stablecoin world, including the business ecosystem, categories of approaches, and use cases.
whr.tn/stablecointo...
AI has scaled fraud to a global industrial level. On The Road to Accountable AI, ID.me CEO Blake Hall explains why old verification methods are obsolete, and why we need a secure, user-controlled digital identity layer.
Listen at apple.co/accountable, or visit accountableai.net
Registration is open for the first Accountable AI Research Conference, at The Wharton School in Philadelphia on Feb. 6, 2026.
ai-analytics.wharton.upenn.edu/wharton-acco...
If you're an AI governance practitioner or researcher looking for insights on the future of responsible AI, please join us!
On The Road to Accountable AI, I spoke with Mitch Kapor @mkapor, legendary entrepreneur, digital policy visionary, and impact investor, about what we haven’t learned from past tech waves, how VCs can close societal gaps, and why responsible AI requires more than good intentions.
accountableai.net
Read "U.S.-China AI Cooperation Under Trump 2.0" by @kwerb.com: perryworldhouse.upenn.edu/news-and-ins...
On The Road to Accountable AI, I spoke with @bradrcarson of @americans4ri about why politics is inescapable for AI's future, the prospects for bipartisan agreement in the US, and how to navigate the challenges of AI policy.
Listen at apple.co/accountable, or your favorite podcast platform.
Oliver Patel, head of AI Governance at AstraZeneca, joins me this week on The Road to Accountable AI to share enterprise AI governance frameworks built from his experience.
Listen to the full episode at apple.co/accountable, or your favorite podcast platform.
Maybe we've been approaching AI ethics the wrong way?
On this week's episode of The Road to Accountable AI, Ravit Dotan discusses how she shifted her approach, and why AI governance practitioners should be more like chefs.
Listen now: apple.co/accountable or your favorite podcast platform.
Registration will open in a few weeks, after we have a chance to review the paper submissions.
We invite industry practitioners, regulators, and non-academic experts on AI accountability and related topics to join us at Wharton on February 6 for valuable interdisciplinary conversations.
Reminder that Monday is the submission deadline for the Accountable AI Research Conference, at the Wharton School in Philadelphia on Feb. 6.
For researchers on AI governance, responsibility, safety, ethics, or policy, don't miss this opportunity!
ai-analytics.wharton.upenn.edu/wharton-acco...
Also, if Character AI is taking the position that every word generated by the chatbot is actually the company speaking, vs. just indirect output of a tool which they designed, that might open up greater responsibility for firms.
Not to mention that libel, slander, and defamation involve speech by definition!
“AI governance is a team sport.”
On The Road to Accountable AI, Caroline Louveaux of Mastercard explains how trust, collaboration, and responsibility drive innovation in the AI era.
Listen now: apple.co/accountable
accountableai.net
Exceptional article by @kaiserkuo.bsky.social on how China's successful rise should force us to reconsider our assumptions about modernity.
I'm frustrated how few Americans who aren't China specialists are willing to entertain such questions.
www.theideasletter.org/essay/the-gr...
"AI governance can’t be top-down—it has to be networked.”
On the Road to Accountable AI, former Acting US Secretary of Commerce Cam Kerry explains why flexible oversight is essential for AI, and what AI policy can learn from the long battle over privacy regulation.
Listen now: accountableai.net
What does it take to turn the theoretical ideal of fair AI into something organizations can implement? This week on The Road to Accountable AI, I spoke with Derek Leben, Carnegie Mellon professor and author of the new book, AI Fairness.
Visit accountableai.net or your favorite podcast platform.
AI is spreading fast. Can governments keep up?
On today’s Road to Accountable AI podcast, Karine Perset, Acting Head of the OECD’s AI Division, explains how countries are working together to shape responsible AI.
Listen now:
accountableai.net or your favorite podcast platform.
🚨 Call for Papers! 🚨
The Wharton Accountable AI Lab is hosting the 1st Accountable AI Research Conference at @Wharton on Feb 6, 2026.
Focus: law, ethics, governance & policy shaping real-world AI practice.
Submissions due: Oct 27, 2025. Learn more:
ai-analytics.wharton.upenn.edu/wharton-acco...
A report by an MIT research center claiming that 95% of genAI deployments fail got a lot of attention this week, even possibly contributing to a drop in AI stock prices.
As I explain on LinkedIn, the report doesn't offer any real support for its conclusion.
www.linkedin.com/posts/kevinw...
I never thought about it before, but if truths are "self-evident," why do we need to declare that we hold that to be true?
Flashing red light for financial markets. With klaxons.
Crypto is a legit investment asset class. But if the thesis is financially engineering memecoins into traditional stock markets via SPACs, and the market is buying it, we're eventually headed for a crackup.
www.ft.com/content/50f9...
poetsandquants.com/2025/07/24/a...
Amen.
We're doing a lot to bring AI into the curriculum at @wharton. I'm rethinking my own courses from scratch. But this is a deep question about the future of business education, where we could benefit from engagement with our peers.
The Chinese ride-hailing giant Didi has been offering female riders the option of female drivers around the world for some time.
The more the US becomes like China, the harder it is for China to become like the US. That should concern us deeply.
The reality is that China has an ideology, and the US has an ideology. The argument that "American AI" should win is that our ideology is best for human flourishing. Claiming that our AI is "objective" and theirs is "ideologically biased" is just the mirror of their claims of "scientific socialism."
Interesting that the White House did a public comment process on the AI Action Plan, in which all the AI companies said their most important existential issue is copyright liability... and the final plan said exactly nothing about copyright.
Then again, probably better it didn't.