One point I want to stress from my @csetgeorgetown.bsky.social colleagues' new report today on PLA AI procurement (cset.georgetown.edu/publication/...): Internal security is #1. Read about the AI systems China is buying to surveil its own population and to track its soldiers' activity online.
Posts by Emmy Probasco
OK, counterpoint alert: I've just spent a profitable 20 minutes chatting with GPT-4.5 in Catalan, where it has corrected my errors, explained things, and given me a quiz. It's certainly not "PhD-level," but it was *great*. It's the kind of use case these things are really for.
Dario Amodei has an important op-ed in the NYT today about the proposed 10-year moratorium on states regulating AI. The moratorium sounds reasonable at first - avoiding a patchwork of state laws - but it's actually quite dangerous given how fast AI is moving.
www.nytimes.com/2025/06/05/o... 1/6
Just got this from my mom. My work here is done.
Something to consider when trying to use an LLM to summarize intel reports?
For those looking to use LLMs to summarize scientific (in this case medical) papers: "LLM summaries were twice as likely to contain generalized conclusions compared to the original abstracts, indicating an algorithmic overgeneralization tendency."
pmc.ncbi.nlm.nih.gov/articles/PMC...
There are lots of shades of gray between a coach and enforcer, I acknowledge, but regardless we must ask: Should AI override humans in combat? Could it prevent war crimes like My Lai — or create new dangers?
Outside the U.S., enforcer-style AI may come sooner. China and Russia are exploring AI to make up for perceived weaknesses in junior leadership.
Alternatively, a leader could deploy an AI as an enforcer in a way that might prevent or stop the next My Lai. But doing so diminishes the agency of the operators in the field.
A leader could deploy AI as a coach to try to prevent a violation of the laws of war, but that coach could be ignored, and the leader might then be criticized for not taking more decisive action.
Choosing how to deploy AI (as a tool, coach, or enforcer) is fundamentally a leadership issue, and leaders should think about the ethical dimensions of their choices. For example:
But AI could also be deployed as an 🚓 enforcer: preventing or overriding human decisions when they run contrary to some specified goal.
AI can also be thought of as a 🏋️ coach, especially when it’s able to fuse information or interact in natural language to help an operator achieve a goal (for an easy comparison, think of an advanced nutrition or diet app on your phone).
The thought experiment makes us consider how AI can be deployed. Most of the time, AI is thought of as a 🛠 tool: a relatively limited assistant that can, for example, recognize objects and bring them to the attention of an operator.
Imagine a "Thompson drone" — flying overhead, spotting civilians, and warning troops in real time using voice alerts or text messages. This is an emerging technological possibility.
In 1968, U.S. soldiers massacred hundreds of civilians in My Lai, Vietnam. The killing was interrupted by helicopter pilot Hugh Thompson Jr.
If pilots are to be replaced by AI, we should ask: could AI on a drone do the same in a future war?
Wrote a thing on value-based choices when deploying AI. It's based on a military situation (a sci-fi version of the My Lai massacre) but I can't help but think this is a problem lots of leaders will face in more subtle, and less stressful, ways.
As military AI evolves, ethical dilemmas grow. Should drones merely inform soldiers, coach decisions, or enforce rules of war directly?
Read the new piece by Minji Jang & CSET's @emmyprobasco.bsky.social, out today in @warontherocks.bsky.social 👇
Bradford G. Smith’s case highlights key issues about the prospect that brain implants and AI will one day merge. www.technologyreview.com/2025/05/07/1...
Erwin Chemerinsky and @tribelaw.bsky.social: We Should All Be Very, Very Afraid
www.nytimes.com/2025/04/09/o...
I'm super excited to see our #CSET report on **AI-enabled military decision support systems** being released today!
Great work by @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell!
⭐️New Report⭐️
Using AI to make military decisions?
CSET’s @emmyprobasco.bsky.social, @hlntnr.bsky.social, Matthew Burtell, and @timrudner.bsky.social analyze the advantages and risks of AI for military decisionmaking. cset.georgetown.edu/publication/...
You can see their launch market analysis here. They will have another paper on the advanced tech market very soon! cset.georgetown.edu/publication/...
Commercial space has come a long way, thanks in large part to early government investments (DOD and NASA) as well as federal action to promote innovation and competition. All credit to Michael O'Conner and @kathleencurlee.bsky.social for this analysis: cset.georgetown.edu/publication/...
With all this new data from remote sensing, new space-data-analysis companies are also emerging.
And new companies are also seizing the opportunities presented by new sensing modalities:
It also helps that camera tech is improving for these remote sensing providers (thanks to government R&D!)
It’s not just launch that’s different: take a look at how many remote sensing companies have popped up since 2010. Helps that these guys have cheaper launch options: