In our latest op-ed for Newsweek, @belongnomics.com
and I argue that framing AI governance in opposition to innovation is an unproductive exercise. Instead, we should be asking what we are innovating for -- and for whom? www.newsweek.com/stop-asking-...
Posts by Mina Narayanan
Stop asking whether to regulate AI. Start asking what it's for.
New op-ed in Newsweek with @minanrn.bsky.social: innovation vs. regulation is the wrong frame. The real question is what we're innovating for — and for whom.
www.newsweek.com/stop-asking-...
"Well-designed governance does not suppress innovation. Instead, it shapes the direction of innovation in socially beneficial ways..."
@belongnomics.com and @minanrn.bsky.social reframe the discussion on AI innovation and governance. More in @newsweek.com below:
www.newsweek.com/stop-asking-...
The White House just put out a new National Policy Framework for AI.
Check out this @csetgeorgetown.bsky.social piece by @minanrn.bsky.social, @jessicaji.bsky.social, and myself on its key components and how it fits into the administration's priorities!
cset.georgetown.edu/article/unpa...
The WH Nat'l AI Policy Framework extends efforts to preempt state laws while pushing federal action on topics including child safety. In our new @csetgeorgetown.bsky.social piece, we break down key features & potential impact cset.georgetown.edu/article/unpa...
By adopting our analytic approach, U.S. policymakers + researchers can move away from rhetorical debates about AI governance & better prepare the United States for a range of possible AI futures cset.georgetown.edu/publication/...
Our work also demonstrates that policymakers & researchers alike can leverage assumptions to more precisely understand disagreements & shared views among stakeholders
Our case study demonstrates that policymakers can take action in an uncertain & rapidly changing environment by addressing common assumptions across proposals
We apply these questions to 5 US AI governance proposals from academia, industry, civil society, & the state & federal government & find that most proposals view AI-enabling talent & AI processes/frameworks as important enablers of AI governance
Assumptions that are shared across proposals effectively enable the success of multiple proposals, whereas unique assumptions may indicate differing perspectives or areas where consensus-building could be challenging
Our approach involves deriving unique & shared assumptions across proposals by answering 3 questions:
Policymakers must move beyond rhetoric to govern AI. 🏛️ A new @csetgeorgetown.bsky.social report from @jessicaji.bsky.social, @vikramvenkatram.bsky.social, Ngor Luong, & myself presents an approach to help policymakers analyze assumptions about AI cset.georgetown.edu/publication/... 🧵
Check out my new @csetgeorgetown.bsky.social report, written alongside @minanrn.bsky.social, @jessicaji.bsky.social, and Ngor Luong!
cset.georgetown.edu/publication/...
Identifying assumptions can help policymakers make informed, flexible decisions about AI under uncertainty.
What’s taken shape in the four months since the release of the AI Action Plan? 🧵👇
In the latest @csetgeorgetown.bsky.social ETO AGORA roundup, four CSET experts dig into the Plan’s policy impact and what’s next for AI governance. eto.tech/blog/agora-a...
In other words, Congress is still in the early days of governing AI but so far seems more focused on understanding and harnessing AI’s potential than addressing its downsides. Make sure to take a deeper dive into our analysis here 🧵6/6 eto.tech/blog/ai-laws...
Fewer legislative docs directly tackle risks or undesirable consequences from AI (such as harm to infrastructure) than propose strategies such as government support, convening, or institution-building 🧵5/6
Very few enactments leverage performance requirements, pilots, new institutions, or other governance strategies that place concrete requirements on AI systems or represent investments in maturing or scaling up AI capabilities 🧵4/6
Most of Congress’s 147 enactments focus on commissioning studies of AI systems, assessing their impacts, providing support for AI-related activities, convening stakeholders, & developing additional AI-related governance docs 🧵3/6
We find that Congress has enacted many AI-related laws & provisions which are focused more on laying the groundwork to harness AI's potential – often in nat'l sec contexts – than placing concrete demands on AI systems or directly tackling their specific, undesirable consequences 🧵2/6
Check out the second @csetgeorgetown.bsky.social @emergingtechobs.bsky.social blog from @sonali-sr.bsky.social and myself where we explore the strategies, risks, and harms addressed by AI-related laws enacted by Congress between Jan 2020 and March 2025 🧵1/6 eto.tech/blog/ai-laws...
Shared some thoughts on the AI Action Plan's recs around shaping state-level AI activity last week -- essentially, the plan's attempt to pressure states to abandon AI restrictions risks hurting U.S. national security www.defenseone.com/technology/2...
Yesterday's new AI Action Plan has a lot worth discussing!
One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."
This could be cause for concern.
Stay tuned for the second blog, which examines the governance strategies, risk-related concepts, and harms covered by this legislation! 🧵3/3
We find that, contrary to conventional wisdom, Congress has enacted many AI-related laws and provisions — most of which apply to military and public safety contexts 🧵2/3
Check out the first blog in a 2 part series from @sonali-sr.bsky.social and myself where we use data from @csetgeorgetown.bsky.social @emergingtechobs.bsky.social AGORA to explore ✨AI-related legislation that was enacted by Congress between January 2020 and March 2025✨
eto.tech/blog/ai-laws... 🧵1/3
Check out the latest AGORA roundup from @emergingtechobs.bsky.social, which highlights some overlooked AI provisions in the Big Beautiful Bill!
The 10 yr moratorium on state AI laws will hurt U.S. nat'l security & innovation if enacted. In our piece in @thehill.com, @jessicaji.bsky.social, @vikramvenkatram.bsky.social, & I argue that states support the very infrastructure needed for a vibrant U.S. AI ecosystem
thehill.com/opinion/tech...
Banning state-level AI regulation is a bad idea!
One crucial reason is that states play a critical role in building AI governance infrastructure.
Check out this new op-ed by @jessicaji.bsky.social, myself, and @minanrn.bsky.social on this topic!
thehill.com/opinion/tech...
Amidst all the discussion about AI safety, how exactly do we figure out whether a model is safe?
There's no perfect method, but safety evaluations are the best tool we have.
That said, different evals answer different questions about a model!