AI: Promise & Peril
The big AI companies are simultaneously touting the promise and peril of their wares. So… which is it?
## So Good It’s Scary
In the last week, two of the leading AI companies announced the availability of new AI models that were so good at coding that they were basically afraid to release these models to the general public. It started with a press release from Anthropic announcing a new model named Claude Mythos. But unlike previous releases, Anthropic claimed that this one was so good at finding software vulnerabilities that they couldn’t, in good conscience, make it available to the public… yet. They instead formed Project Glasswing, which would make this powerful tool available to the good guys first. The idea is that the makers of **widely used** software could use the tool to find and _fix_ the bugs – because surely the bad guys would use Mythos to find and _exploit_ these bugs for nefarious purposes, if allowed to do so.
As is often the case these days with Anthropic and OpenAI (arguably the two top AI companies in the world), when one company puts out a major release, the other quickly follows suit. And so it was that OpenAI released ChatGPT 5.4 Cyber. The press release didn’t mention Mythos or Anthropic by name, but it was obviously a bit of a dig at Project Glasswing. And yet, it was basically taking the same position: this is so good that we’re going to control its release to keep it from being (ab)used by the bad guys.
This kind of alarmist messaging has been going on since the debut of ChatGPT 3.5, which kicked off this modern generative AI (genAI) race in late 2022. There was the famous Statement on AI Risk in mid-2023 and many offhand soundbite-worthy remarks from tech leaders, including Elon Musk saying (years ago) that AI will be more dangerous than nuclear weapons.
## Hype & Reality
So, which is it? Is genAI really that dangerous? Will it replace all human jobs? Will it enable cyber bad guys to hold the world hostage? Are these AI chatbots going to become sentient and take over the world? The answer to all four is: probably not. But genAI is already being used to justify layoffs, cybercriminals are using AI to craft much more convincing scams, and many people certainly believe that AI chatbots are sentient – to the point of falling in love with them – or worse. (You should watch the movie _Her_.)
There’s no doubt that this technology is highly disruptive, but it’s absolutely _not_ sentient. And like many disruptive technologies, it will replace some jobs, create others, and in most cases will just make people more productive in their current jobs. But we also need to realize that all this talk of danger and power, true or not, serves to promote the whole industry, garnering new users and attracting investment. It’s worth noting that both OpenAI and Anthropic are hoping to IPO in 2026.
## AI Superpower: Coding
However, there is one particular area where genAI is supremely well-suited: **coding** – as in writing, reading, and modifying software, and (yes) finding and exploiting its bugs. Modern AI chatbots, or Large Language Models (LLMs), are almost tailor-made for software because, unlike most _spoken_ languages, computer languages are unambiguous, tightly structured, and very well defined. There’s also a _massive amount_ of example code out there to learn from. Granted, not all of that code is _good_ code, but in the vast majority of cases, it’s _working_ code. Furthermore, you can test software to verify that you got it right – that is, that it actually works. That includes code that exploits vulnerabilities. These genAI tools can grade their own work and tweak it until it works.
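That “grade your own work” loop is easiest to see in miniature. Here’s a toy Python sketch – not any vendor’s actual pipeline, and all the function names and test cases are invented for illustration. An automated checker runs each candidate implementation against known test cases; a failing candidate gets sent back for another attempt, while a passing one is accepted:

```python
# Toy illustration of the generate-test-regenerate loop. The two "attempt"
# functions stand in for successive AI-generated solutions to "sort a list";
# a real system would ask the model to produce a new attempt after each failure.

def passes_tests(func):
    """Return True if the candidate behaves correctly on our test cases."""
    test_cases = [([3, 2, 1], [1, 2, 3]), ([], []), ([5], [5])]
    try:
        return all(func(list(inp)) == expected for inp, expected in test_cases)
    except Exception:
        return False  # a crash counts as a failing grade

# Attempt 1: a buggy "sort" that makes only a single pass of adjacent swaps.
def attempt_one(items):
    for i in range(len(items) - 1):
        if items[i] > items[i + 1]:
            items[i], items[i + 1] = items[i + 1], items[i]
    return items

# Attempt 2: a correct implementation.
def attempt_two(items):
    return sorted(items)

for name, candidate in [("attempt_one", attempt_one), ("attempt_two", attempt_two)]:
    verdict = "passes" if passes_tests(candidate) else "fails -> regenerate"
    print(name, verdict)
```

The key insight is that the checker is cheap and objective: the model doesn’t need a human in the loop to know whether its code works, so it can iterate as many times as it takes.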
I’ve written software for well over four decades and I’m here to tell you: the current AI models are astonishingly good at writing and analyzing software. This has caused no end of consternation among my colleagues (this is a great article that illustrates the point). I actually have no trouble believing that Mythos and ChatGPT Cyber _could_ be as good as their owners claim. But here’s the key point: even if they’re not that good yet, _they will be_ – and it won’t take long. I don’t mean years, either – we’re talking months. These tools are improving at a _remarkable_ pace. One reason for this is that the AI companies are _using_ AI to improve their products!
And so, despite the hype, I actually support the controlled release of these new AI tools – it’s rational and smart. These tools will also be used, feverishly, to improve the security of our existing software and the tools used to detect and prevent attacks. The real shift isn’t that attackers can do new things – it’s that they can do the same things at much greater scale and with much less technical skill. We can hope that when the dust settles, these tools will benefit the creators more than the attackers, but we should be prepared for the reverse. Bruce Schneier has a short, well-written write-up on all of this that’s worth a read.
## What Should I Do?
First of all, don’t freak out. AI will be used for good and ill alike. It will be disruptive. But it’s not going to doom our species – at least not the type of AI we’re talking about here. However, the next 3 to 12 months are going to be a bumpy ride. All software has bugs, many of which are vulnerable to attack over the internet. These bugs exist right now – but the number and skill of the ‘bad guys’ limit how many can be found and exploited successfully. GenAI is going to change that. We need to get old, unsupported (or practically unsupportable) devices off the internet, now. And we need to fix and update whatever is left ASAP. I’m talking about individuals like you and me, but also about critical infrastructure companies, financial institutions, and government agencies of all kinds.
To protect ourselves, we need to keep doing all the things we’ve already been doing – but **more urgently**. I’ve written articles on all of these already:
1. Reduce your attack surface
2. Delete online data where you can
3. Backup your important data
4. Avoid “agentic AI”, at least until it’s safer
This next recommendation may seem counterintuitive… but you should absolutely _use_ AI – so you can familiarize yourself with what it can – and can’t – do. I would play with the free versions of ChatGPT and Claude. (Google’s Gemini is good, too, but I find it hard to recommend Google for privacy reasons.) You don’t have to install the app – you can just use it in a web browser. But also try privacy-respecting chatbots like Proton’s Lumo (or others). These products are improving constantly, so I would make an effort to try new versions as they come out every few months.
I may write a whole article on this topic…