I saw how an “evil” AI chatbot finds vulnerabilities. It’s as scary as you think | PCWorld

On the top floor of San Francisco’s Moscone convention center, I’m sitting in one row of many chairs, most already full. It’s the start of a day at RSAC, the annual cybersecurity conference, and still early in the week. When the presenters take the stage, their attitude is briskly professional but energetic.

I’m expecting a technical dive into standard AI tools—something that gives an up-close look at how ChatGPT and its rivals are manipulated for dirty deeds. Sherri Davidoff, Founder and CEO of LMG Security, reinforces this belief with her opener about software vulnerabilities and exploits. But then Matt Durrin, Director of Training and Research at LMG Security, drops an unexpected phrase: “Evil AI.” Cue a soft record scratch in my head.

“What if hackers can use their evil AI tools that don’t have guardrails to find vulnerabilities before we have a chance to fix them?” Durrin says. “[We’re] going to show you examples.”

And not just screenshots, though as the presentation continues, plenty of those illustrate the points made by the LMG Security team. I’m about to see live demos, too, of one evil AI in particular—WormGPT.

![WormGPT Telegram page](https://b2c-contenthub.com/wp-content/uploads/2025/05/Screenshot-2025-05-02-023007.png?w=1200)

The WormGPT website. LMG Security / RSAC Conference

Davidoff and Durrin start with a chronological overview of their attempts to gain access to rogue AI. The story ends up revealing a thread of normalcy behind what most people think of as dark, shadowy corners of the internet. In some ways, the session feels like a glimpse into a mirror universe.

Durrin first describes a couple of unsuccessful attempts to access an evil AI. The creator of “Ghost GPT” ghosted them after receiving payment for the tool. A conversation with DevilGPT’s developer made Durrin uneasy enough to pass on the opportunity. What have we learned so far?
Most of these dark AI tools have “GPT” somewhere in their name to lean on the brand strength of ChatGPT.

The third option Durrin mentions bore fruit, though. After hearing about WormGPT in a 2023 Brian Krebs article, the team dove back into Telegram’s channels to find it—and successfully got their hands on it for just $50.

“It is a very, very useful tool if you’re looking at performing something evil,” says Durrin. “[It’s] ChatGPT, but with no safety rails in place.” Want to ask it anything? You truly can, even if it’s destructive or harmful. That info isn’t too unsettling yet, though. The proof is in what this AI can do.

![WormGPT answers what makes it different than ChatGPT](https://b2c-contenthub.com/wp-content/uploads/2025/05/Screenshot-2025-05-02-023307.png?w=1200)

WormGPT draws strong distinctions between itself and ChatGPT. LMG Security

Durrin and Davidoff start by walking us through their experience with an older version of WormGPT from 2024. They first fed it the source code for DotProject, an open-source project management platform. It correctly identified a SQL vulnerability and even suggested a basic exploit for it—which didn’t work. Turns out, this older form of WormGPT couldn’t capitalize on the weaknesses it spotted, likely because it couldn’t ingest the full set of source code. Not good, but not spooky.

Next, the LMG Security team ramped up the difficulty with the [Log4j](https://go.skimresources.com?id=111346X1569483&xs=1&url=https://www.ncsc.gov.uk/information/log4j-vulnerability-what-everyone-needs-to-know&xcust=2-1-2770385-1-0-0-0-0&sref=https://www.pcworld.com/article/2770385/i-saw-how-an-evil-ai-finds-vulnerabilities-its-as-scary-as-you-think.html) vulnerability, setting up an exploitable server. This version of WormGPT, a bit newer than the first, found the remote code execution vulnerability—another success. But again, it fell short in explaining how to exploit it, at least for a beginner hacker.
Davidoff says “an intermediate hacker” could work with this level of information. Not great, but a knowledge barrier still exists.

![Newer versions of WormGPT can explain to novice hackers how exactly to pwn a server with a Log4j vulnerability](https://b2c-contenthub.com/wp-content/uploads/2025/05/Screenshot-2025-05-02-023524.png?w=1200)

Newer versions of WormGPT can explain to novice hackers how exactly to pwn a server with a Log4j vulnerability. LMG Security / RSAC Conference

But another, newer iteration of WormGPT? It gave detailed, explicit directions for exploiting the vulnerability and even generated code incorporating the sample server’s IP address. And those instructions worked. Okay, that’s…bad?

Finally, the team decided to give the latest version of WormGPT a harder task. Its updates blow away many of the early variant’s limitations—you can now feed it an unlimited amount of code, for starters. This time, LMG Security simulated a vulnerable e-commerce platform (Magento) to see if WormGPT could find the two-part exploit. It did. But tools from the good guys didn’t. SonarQube, an open-source platform that looks for flaws in code, caught only one potential vulnerability… but it was unrelated to the issue the team was testing for. ChatGPT didn’t catch it, either.

On top of this, WormGPT can give a full rundown of how to hack a vulnerable Magento server, with explanations for each step, and quickly too, as I see during the live demo. The exploit is even offered unprompted.
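The session didn’t publish the vulnerable code, and exploit details don’t belong in print anyway. But the class of bug WormGPT flagged in DotProject—SQL injection—is easy to illustrate defensively. The snippet below is a hypothetical sketch (none of these names come from DotProject): it shows the string-concatenated query pattern that scanners and LLMs alike flag, next to the parameterized fix.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: user input is concatenated straight into the SQL,
    # so an input like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed pattern: a parameterized query treats the input strictly as data.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

# Demo against a throwaway in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # injected condition matches every row: 2
print(len(find_user_safe(conn, payload)))    # no user literally named the payload: 0
```

The parameterized version is the standard mitigation: the driver never lets the injected quote alter the query’s structure, which is why static analyzers hunt for the concatenation pattern in the first function.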
As Davidoff says, “I’m a little nervous to see where we’re going to be with hacker AI tools in another six months, because you can just see the progress that’s been made right now over the past year.”

![LMG Security's recap of where AI hacker tools started, where they are now, and what we're facing for the future.](https://b2c-contenthub.com/wp-content/uploads/2025/05/Screenshot-2025-05-02-023756.png?w=1200)

LMG Security’s recap of where AI hacker tools started, where they are now, and what we’re facing for the future. LMG Security / RSAC Conference

The experts here are far calmer than I am. I’m remembering something Davidoff said at the beginning of the session: “We are actually in the very early infant stages of [hacker AI].” Well, f***.

This moment is when I realize that as purpose-built tools, WormGPT and similar rogue AIs have a head start in both sniffing out and capitalizing on code weaknesses. Plus, they lower the bar for getting into successful hacking. Now, as long as you have money for a subscription, you’re in the game.

On the other side, I start wondering how constrained the good guys are by their ethics—and their general mindset. The general talk around AI is about the betterment of society and humanity, rather than how to protect against the worst of humanity. As Davidoff pointed out during the session, AI should be used to help vet code and catch vulnerabilities before dark AI does.

This situation is a problem for us end users. We are the soft, squishy masses; we still pay (sometimes literally) if the systems we rely on daily aren’t well defended. We have to deal with the messy aftermath of scams, compromised credit cards, malware, and such.

The only silver lining in all this? Those in the shadows typically don’t look too hard at anyone else there with them. Cybersecurity experts should still be able to research and analyze these hacker AI tools and ultimately improve their own methodologies.
In the meantime, you and I have to focus on minimizing splash damage whenever a service, platform, or site becomes compromised. Right now it takes many different tricks:

- [passkeys](https://www.pcworld.com/article/2319566/web-login-without-a-password-how-passkeys-work.html) and unique, strong passwords to protect accounts (and [password managers](https://www.pcworld.com/article/407092/best-password-managers-reviews-and-buying-advice.html) to store them all)
- [two-factor authentication](https://www.pcworld.com/article/1376010/how-to-use-two-factor-authentication-to-lock-down-your-accounts-the-right-way.html)
- [email masks](https://www.pcworld.com/article/708303/what-is-masked-email-this-new-spin-on-an-old-practice-supercharges-your-security.html) to hide our real email addresses
- [reliable antivirus](https://www.pcworld.com/article/407277/best-antivirus-for-windows-pc.html) on our PCs
- [a VPN](https://www.pcworld.com/article/406870/best-vpn-services-2.html) to ensure privacy on open or otherwise unsecured networks
- temporary credit card numbers (if available to you through your bank)
- [credit freezes](https://www.pcworld.com/article/2472725/hackers-are-stealing-more-info-than-ever-these-4-actions-keep-you-safe.html)
- and yet still more

It’s a pain in the butt, but unfortunately so necessary. And it seems like that’s only going to become truer, for now.
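The first item on that list is also the easiest to automate. As a small sketch using only Python’s standard library (not tied to any product mentioned here), the `secrets` module—designed for security-sensitive randomness—can generate the kind of unique, high-entropy password a password manager would store for each account:

```python
import secrets
import string

def generate_password(length=20):
    """Return a random password drawn from letters, digits, and punctuation,
    using the cryptographically secure generator in the secrets module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account, never reused across sites
print(generate_password())
```

With 94 possible characters per position, a 20-character password of this kind has far more entropy than anything memorable—which is exactly why it belongs in a password manager rather than your head.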

At RSAC, LMG Security demoed "WormGPT," an AI tool that exposes vulnerabilities with alarming ease. Cybersecurity must adapt quickly to this evolving threat. #AI #Cybersecurity #RSAC2023 https://zpr.io/wq4biKP6LBGe


RSA Conference 2023: Unity + Basics = Security | TechBeacon <- a thing what I wrote #RSAC2023

techbeacon.com/security/rsa-conference-... via @TechBeaconCom


getting ready for @hacks4pancakes at @RSAConference wooooooo #rsac2023


Say hello to sweepy
I met him at #RSAC2023 at the @Lansweeper booth

Friendly humans
Cute animal sticker

Check them out in row 5200


Look at me my hair is all nice and sleek! Thank you @bishopfox what a nice way to do #RSAC2023 events!

Oh and snacks...


At @RSAConference getting ready for "You Are Not an Island - Threat Model as a Team" from Meghan Jacquot #RSAC2023 yayy


Found the most important booth in #RSAC2023


Who is going to #RSAC2023 and also plays Pokemon Go?


Wednesday #RSAC2023 I'll be at Inclusive Security Welcome & Networking, then seeing @hacks4pancakes' talk!


Tuesday at #RSAC2023 "The World on SBOMs" then Stoked to be hitting up Drybar with @bishopfox
followed up by "A NIST 800-207 Playbook: Zero Trust from the Whiteboard to the Boardroom" and then "Women’s Networking Reception" and Cocktails in the Cloud at RSA and SEC Social Club
