✨New Report✨
Recommendations on AI are everywhere; practical implementation guidance is not.
Our latest report synthesizes 1,200+ resources into an actionable, structured guide to adopting AI systems.
Read more ⬇️
cset.georgetown.edu/publication/...
Posts by Vikram Venkatram
For more from CSET on this administration's efforts on preemption, check out these op-eds by the same authors!
cset.georgetown.edu/article/the-...
cset.georgetown.edu/article/stat...
Notably, there are some key overlaps between this document and draft AI legislation from Senator Marsha Blackburn's office, which was released just two days before the framework.
Given the administration's history of a light-touch regulatory stance on AI, the framework could represent a first step in negotiations, though the provisions on preemption are likely to remain controversial.
While the framework itself is non-binding, it is a call to action for federal AI legislation to address topics like child safety, intellectual property rights, and, as mentioned previously, preemption of state laws.
This document calls on Congress to take legislative action in line with the administration's AI policy goals.
It's the next step in a series of efforts by the White House to preempt state-level AI laws, including a Dec 2025 Executive Order (which called for a unified federal policy).
The White House just put out a new National Policy Framework for AI.
Check out this @csetgeorgetown.bsky.social piece by @minanrn.bsky.social, @jessicaji.bsky.social, and myself on its key components and how it fits into the administration's priorities!
cset.georgetown.edu/article/unpa...
This is an excellent piece discussing AI's impact on biorisk, offering a great distillation of key topics. Definitely worth a read!
(And I'm not surprised it's great, given that @csetgeorgetown.bsky.social's own @stephbatalis.bsky.social is quoted!)
www.transformernews.ai/p/ai-biorisk...
Furthermore, state-level AI laws can themselves support and enhance innovation (as we have argued previously)!
thehill.com/opinion/tech...
The new order could face similar major challenges: legal backlash, bipartisan resistance, and public distrust. Each of these could make AI innovation harder.
This order was the culmination of multiple attempts to impose a moratorium on state-level AI regulation. Prior efforts have faced surprising hurdles.
Late last year, the Trump administration put out an Executive Order aiming to preempt states' ability to regulate AI systems and set the stage to challenge the constitutionality of state AI laws.
Excited to share a new op-ed by @minanrn.bsky.social, @jessicaji.bsky.social, and myself for the National Interest!
The administration's new AI Executive Order, aiming to suppress state-level AI regulation, risks undermining the innovation it seeks to advance.
nationalinterest.org/blog/techlan...
With the right mix of evidence-based tools working together, we can create a flexible, layered, and effective safety net for biosecurity governance.
Waiting for the perfect policy before acting would take too long.
Instead, we should implement good safeguards as we go, taking care not to let the pursuit of perfect interventions prevent the adoption of well-designed, practical ones.
In it, we argue that the right way forward for biosecurity will involve using multiple tools from our toolbox of policy levers in tandem with one another.
Each biosecurity intervention targets a specific risk, and they're often most effective when narrowly scoped.
With all sorts of new biotechnologies expanding what's possible in the life sciences, what's the best approach for the next era of biosecurity?
Check out this new op-ed by @stephbatalis.bsky.social and myself for the 80th anniversary of @thebulletin.org!
thebulletin.org/premium/2025...
After analyzing these proposals, we argue:
1. Policymakers can use this approach to more precisely understand where proposal creators agree and disagree.
2. They can take action in an uncertain and rapidly changing environment by addressing common assumptions across governance proposals.
Policymakers can use these assumptions, some unique and some shared, to better understand what's possible and more effectively build AI governance infrastructure.
To show this in action, our report analyzes five AI governance proposals, from different kinds of organizations, as case studies.
We suggest breaking down AI governance proposals into their component parts. What do they aim to govern, and why? Who should do the work, and how?
Answering these questions will surface the foundational assumptions that make the proposals tick.
With AI tech continuing to develop, many relevant organizations have written proposals about how to govern AI.
With so many out there, how should policymakers and other interested parties understand and evaluate them?
This report proposes an analytical method to achieve that.
Check out my new @csetgeorgetown.bsky.social report, written alongside @minanrn.bsky.social, @jessicaji.bsky.social, and Ngor Luong!
cset.georgetown.edu/publication/...
Identifying assumptions can help policymakers make informed, flexible decisions about AI under uncertainty.
The plan also emphasizes the importance of scientific datasets, including biological ones, in line with @csetgeorgetown.bsky.social recommendations for the plan, which you can read here: cset.georgetown.edu/publication/..., and with other CSET work: cset.georgetown.edu/publication/....
Focusing on bio, one provision is a federal funding requirement for DNA synthesis screening, a useful tool in the toolbox for limiting biological risk.
Check out the piece by @stephbatalis.bsky.social and me breaking down the kinds of decisions screeners have to make: thebulletin.org/2025/04/how-...
More on the recent AI Action Plan! @csetgeorgetown.bsky.social work is very relevant.
Ultimately, though, a chilling effect on state-driven AI legislation could severely harm innovation by reducing foundational AI governance infrastructure.
How the Action Plan will be implemented remains to be seen, but the administration should be careful not to nip useful state regulation in the bud.
The plan does clarify that restrictions shouldn't interfere with prudent state laws that don't harm innovation.
And it's true that a complex thicket of onerous state laws governing AI could make it harder for AI companies to comply, harming innovation.
States are better positioned to pass these laws than the federal government in the current environment.
They can also serve as a sandbox for experimentation and debate, allowing for innovation in governance approaches. The best governance approaches can inspire other states to follow suit.
State laws provide a critical avenue for building governance infrastructure: things like workforce capacity, information-sharing regimes, standardized protocols, incident reporting, etc.
These help provide clarity for companies and are crucial for innovation.
A recent @thehill.com piece by @minanrn.bsky.social, @jessicaji.bsky.social, and myself introduces the topic of governance infrastructure.
It discusses the recently proposed ban on state AI regulation, which would have gone much further and, thankfully, did not pass.
thehill.com/opinion/tech...