If you want to help shape how the U.S. anticipates and governs advanced AI, and help build a safer future for everyone, I'd encourage you to take a look.
Apply by December 15. More info and application form available here:
fas.org/career/senio...
Posts by Ollie Stephenson
I’m hiring a Senior Manager for AI Safety & Security Policy at FAS.
You’ll help turn technical insights about frontier AI risks into real policy outcomes—spotting windows for impact, shaping proposals decision-makers can use, and working directly with researchers + government.
Want to learn about how AI is being integrated into military decision-making and connect with experts working on this issue?
Register for our reception on November 19th, held in partnership with @futureoflife.org
🔗 luma.com/2zkutj53
Anthropic says its AI won't help you build a nuclear weapon. Will it work? And can a chatbot even help build a nuke?
AI-powered nukes might not be coming next week, but "we don’t know where they’ll be in five years’ time … and it’s worth being prudent about that fact," @technolliegist.bsky.social tells @wired.com
Going to #Abundance2025? Say hi! I’ll be there with plenty of my @scientistsorg.bsky.social colleagues. Come to me with any questions about artificial intelligence policy, and if you’re curious about clean energy, R&D innovation, or anything else, I’ll direct you to the geniuses I work with.
AI doesn’t live in the cloud. It runs on land, water, and electricity. We're challenging the idea that AI is clean or green.
Vote now for "Who Pays for AI?" 🗳️ participate.sxsw.com/flow/sxsw/sx...
11/n At FAS we’ll keep working with scientists & policymakers to craft AI policy that serves everyone.
10/n 🔎 Bottom Line: To reap AI’s benefits we must trust it—we need more research, careful adoption & strong guardrails for high‑risk uses. The plan has bright spots but backslides on bias & climate and collides with deep staffing/funding cuts in government.
9/n Also disappointing: deleting climate‑change references. AI uses a lot of energy and we can’t manage what we don’t measure. Our AI & Energy Policy Sprint shows how to track AI’s footprint and use AI to fight climate change: fas.org/accelerator/...
8/n ❌ The Ugly:
AI bias is real & measurable. Yet the plan tells NIST to drop “diversity, equity & inclusion” from its AI Risk Management Framework and requires federal models be “free from ideological bias.” A lot depends on implementation, but this risks hiding real problems.
7/n Without national regs, state experiments are how we learn what responsible AI looks like. A regulatory Wild West won’t build public trust.
6/n ⚠️ The Bad:
Last month the Senate stripped a clause from OBBBA that would have restricted state AI rules. The plan tries again to block state guardrails even as Congress sets no federal standard.
5/n ➡️ Focused Research Organizations (FROs): They tackle narrow, high‑impact problems that are a poor fit for startups. FAS first championed FROs in 2020, and we think this is their first federal embrace. We've published a list of promising FRO ideas here: fas.org/initiative/f...
4/n ➡️ Security measures: Steps on cybersecurity, biosecurity, secure‑by‑design AI & incident response aim to stop harms before they freeze innovation.
3/n ➡️ Broad R&D agenda: Beyond interpretability, the plan backs research on robustness, controllability, new AI paradigms & an evaluation ecosystem.
2/n 🚀 The Good:
➡️ Interpretability: We need to see inside AI's black box. With FAS AI Fellow Matteo Pistillo, we've drafted a federal roadmap to advance AI interpretability: fas.org/publication/...
1/n When the Trump admin began drafting its AI Action Plan, we at the Federation of American Scientists (@scientistsorg.bsky.social) offered ideas to advance innovation, maintain safety and security, and support government institutions. Now that the plan is live, here’s my take:
Cover image for the event showing a robot hand hovering over a nuclear button.
☢️ Safeguarding Nuclear Command and Control in the Age of AI ☢️
I’ll be speaking Monday at 12pm ET on how AI might impact nuclear risks. See registration link below if you'd like to join!
🚨 Hiring: AI & Emerging Tech Manager @scientistsorg.bsky.social 🚨
Shape U.S. #AI policy—drive AI equity work, build S&T talent pipelines, tackle AI safety & energy projects.
💼 $70k–$87.5k | Hybrid DC (2-3 days in office).
Apply soon, ideally by May 5! → fas.org/career/ai-an...
Great opportunity to develop concrete policy ideas around AI, energy, and environment!
Text saying: “What DeepSeek has really done is capture public attention in a way that I haven’t really seen since maybe ChatGPT,” said Oliver Stephenson, the associate director for AI and emerging tech policy at the Federation of American Scientists. “That really boils through into how policymakers are paying attention, and that just shifts the entire ecosystem of Washington, D.C., and policymakers around the world to really focus again on this as a thing that they need to be paying attention to.”
As world leaders meet in Paris, I spoke with @politico.com about DeepSeek and its impact on AI policy discussions.
🚨 Within the next 60 days (now much less), the Trump Administration will review OMB Guidance M-24-10 & M-24-18, which lay out how the federal government should use, acquire, and manage AI.
How should we manage AI's growing resource consumption, and use AI to promote clean energy? @scientistsorg.bsky.social wants to hear your ideas! Find out more below.
ATTENTION NUKE NERDS: We’re hosting a one-week, in-person OSINT bootcamp to teach a new generation of open-source nuke investigators. If you’re an early- to mid-career nuclear weapons analyst, this bootcamp is calling for you.
Apply today ▶️ fas.org/osint-bootcamp-2025/
“What we're seeing is an impressive technical breakthrough built on top of Nvidia's product that gets better as you use more of Nvidia's product. ... That does not seem like a situation in which you're going to see less demand for Nvidia's product.”
time.com/7211646/is-d... @scientistsorg.bsky.social