(breaking from me) The Pentagon is discussing plans for generative AI companies to train military-specific versions of their models on classified data. It's a new development compared to models just answering classified questions. Details here: www.technologyreview.com/2026/03/17/1...
Posts by James O'Donnell
The comments suggest that generative AI is being added as a conversational chatbot layer to existing AI projects like Maven, which uses computer vision to analyze "big data" and select targets. Users previously needed to inspect and interpret data on the map. Now they can have a conversation instead
Humans would then be responsible for checking the results and recommendations. ChatGPT and Grok could, in theory, be the models used for this type of scenario in the future, as both companies recently reached agreements for their models to be used by the Pentagon in classified settings
A list of possible targets might be fed into gen AI. Then humans might ask the model to prioritize the targets while accounting for factors like where aircraft are currently located. The official described it as an example of how things might work but would not say whether it represents current use
We reported new details from a Defense official about the specific role chatbots may play in accelerating the Pentagon's search for targets, and how it all might connect to the military's other AI projects:
Homemade intelligence dashboards on Iran are spreading. But an abundance of information does not come with the accuracy or context required for real understanding. Intelligence agencies do this in-house; good journalism does the same work for the rest of us www.technologyreview.com/2026/03/09/1...
we’ve essentially ended up back where we started: allowing the Pentagon to use OpenAI's tech for any lawful use www.technologyreview.com/2026/03/02/1...
AI content can dupe us, shape our beliefs even when we catch the lie, and erode societal trust in the process. The tools we were sold as a cure for this crisis are failing miserably
www.technologyreview.com/2026/02/02/1...
Neither Civitai nor a16z responded to requests for comment. Study led by @matthewdeverna.com and Shalmoli Ghosh. Full story in @technologyreview.com here www.technologyreview.com/2026/01/30/1...
Grok's debacle was about deepfakes made on platforms, but this is about the demand for tools that let users fine-tune them elsewhere. 86% of deepfake requests were for instruction files called LoRAs that coach mainstream AI models into generating content they weren't trained to produce.
Users requested tools to generate images of public figures like Charli D’Amelio or Gracie Abrams, often linking to their social media accounts so their images could be scraped. Some wanted to generate the individual’s entire body, accurately capture tattoos, or change hair color. These requests cost $0.50 to $5.
News comes from a new study looking at people’s requests for content on the site, called “bounties.” Most bounties asked for animated content—but a significant portion were for deepfakes of real people, and 90% of these deepfake requests targeted women.
Civitai—an online marketplace for buying and selling AI-generated content, backed by a16z—is letting users buy custom instruction files for generating celebrity deepfakes. Some were designed to make pornographic images the site says are banned.
Inside the marketplace powering bespoke AI deepfakes of real women 🧵
A scoop: DHS is using AI video generators from Google and Adobe to make content shared with the public. It comes as tech workers have pressured their employers to denounce the agency's activities.
www.technologyreview.com/2026/01/29/1...
Securus can now fund this AI model with the fees inmates pay to make calls, after the company successfully lobbied for new regulations. www.technologyreview.com/2025/12/01/1...
The fight against child exploitation is entering an AI-versus-AI era. Use of detection models “ensures that investigative resources are focused on cases involving real victims, maximizing the program’s impact and safeguarding vulnerable individuals.”
www.technologyreview.com/2025/09/26/1...
Two takeaways from my new story:
1) Deciding what counts as a routine task, one where AI can assist without human judgment, is hard. 2) When judges make mistakes, they'll face less oversight, and fixing their errors will take longer.
www.technologyreview.com/2025/08/11/1...
In the White House's new AI action plan, there's just a single bullet point about the FTC, the agency best positioned to serve as an AI watchdog protecting consumers. But what it says represents a huge escalation of Trump's attacks on the agency.
www.technologyreview.com/2025/07/24/1...
Chatbots used to avoid giving medical advice without disclaimers. Now they're analyzing your mammograms and asking follow-ups. Welcome to the new unchecked, unverified, and unaccountable world of AI models playing doctor. www.technologyreview.com/2025/07/21/1...
AI companies have teamed up with teachers unions to get more AI in the classroom, even as the evidence that it helps students is shaky. In @technologyreview.com
Right now this is hard to measure because of the lack of open source models that do music and speech. But as soon as we can measure it, I'll be jumping on it
@caseycrownhart.bsky.social and I spent months reporting together and found the common understanding of AI's energy appetite to be full of holes. As Big Tech aims to restructure our energy grids around the needs of AI, we dive as deep as one can go.
www.technologyreview.com/2025/05/20/1...
right - there's a demo in the story, but I haven't found an independent evaluation judging its accuracy in real settings. As I identify where it's being used I'll report what I find.
Police and federal agencies have found a new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people using attributes like body size, gender, hair color and style, clothing, and accessories.
www.technologyreview.com/2025/05/12/1...
I've been following the rise of AI music generators for a long time; despite their millions of users and fans, they haven't hit the mainstream the way ChatGPT or image generators have. I'm convinced that's soon changing (I'm also not convinced that's a good thing). ter.li/JOdonnell
A fascinating democracy experiment in Kentucky saw good participation. There's still a debate, though: can a self-selecting group of residents ever really represent a city's ideas? Great to speak with @politicsprof.bsky.social @bethnoveck.bsky.social
www.technologyreview.com/2025/04/15/1...