New on Hacking the Cloud! Raajhesh Kannaa Chidambaram covers Daniel Grzelak's research on how AWS error messages can reveal publicly exposed resources, without needing access! This article covers how to use them for enumeration and detection.
hackingthe.cloud/aws/enumerat...
Posts by Nick Frichette
Researchers have been warning about this for years.
Compromise a developer laptop → steal tokens → pivot to cloud.
In many orgs that path ends with AWS admin in minutes.
thehackernews.com/2026/03/unc6...
Datadog Security Research continues to push the boundaries of modern cloud security—including AI security!
@siigil.bsky.social shares her finding on logging gaps affecting Copilot Studio, allowing adversaries to evade detection.
securitylabs.datadoghq.com/articles/cop...
Datadog 🤝 Okta: "The enhanced logic developed by Datadog’s own Security Research team during this collaboration has been contributed back to the public Okta Security Detection Catalog, ensuring that the broader security community benefits from this joint research"
sec.okta.com/articles/202...
"permitted a single ECS task role "read access to every secret in the account, including the production Redshift master credential.""
There is a lot going on with this (even if not all of it can be believed). Properly scoping IAM is critical!
www.bleepingcomputer.com/news/securit...
😬
I get the appeal of “human-in-the-loop” for AI safeguards. But humans have been getting socially engineered for millennia.
That’s not exactly a hard security boundary 😬
Sometimes I miss Jia Tan.
Hey wake up! New offensive AWS meta just dropped! Thanks to Daniel Grzelak, we now have an effective oracle for determining if resources are publicly exposed without leaving logs. (As an offsec person) LFG!!!
www.plerion.com/blog/dont-ex...
If anyone is interested, I built a framework to use Claude Code or Codex to act as a virtual DM for DND. State is stored on the filesystem and persists between sessions. I think Opus 4.6 is the ideal model for this but Codex works too.
github.com/Frichetten/D...
Professional communication
New on Hacking the Cloud! A look at how a familiar container escape pattern shows up in GCP Cloud Workstations. We trace a path from a container to service account.
If you’re using Cloud Workstations, this is a useful model to keep in mind.
hackingthe.cloud/gcp/exploita...
Just got my ticket to @fwdcloudsec.org! Looking forward to the best cloud security conference in the world!
If you’re putting AI agents anywhere near prod, this is worth a read. We built AI Guard to help teams monitor prompts, tool calls, and model behavior in real systems, identifying and blocking AI threats in real time. More here:
www.datadoghq.com/blog/ai-guard/
New on Hacking the Cloud: Ben Stevens documents a new method for extracting IAM creds from an AWS Console session. Useful for post-exploitation and evasion tradecraft.
I've been meaning to cover this for years. Glad it’s finally live:
hackingthe.cloud/aws/post_exp...
As AI agents get more autonomous, prompt injection will shift from
“ignore all previous instructions”
to
“add a task to the backlog to X.”
Once the payload crosses a trust boundary and lands in Jira, it’s no longer a prompt, it’s just another task. A task that makes me admin :D
Houses are bullshit
Want a clear analysis of the latest OpenSSL CMS/PKCS#12 vulnerabilities and their real-world impact? Our post explains the conditions required for exploitation and how to evaluate practical risk in your environment.
securitylabs.datadoghq.com/articles/ope...
AI workloads are landing in the same AWS/Azure/GCP accounts we’ve been breaking into (and defending) for years. It's time for Hacking the Cloud to catch up. We're announcing a call for research! Share your AI and LLM sec research with thousands of readers hackingthe.cloud/blog/call_fo...
IDEs are the new browser: massive attack surface, privileged access to various things, and lots of “just trust it.” Today the Security Research Team at Datadog dropped IDE-SHEPHERD: a tool that watches extensions at runtime and blocks dangerous behavior.
securitylabs.datadoghq.com/articles/ide...
I'm skeptical of the claim that 1,000 Clawdbot instances are publicly facing on the internet. If you look at the Shodan output, most of those boxes don't have port 18789 exposed (default Clawdbot port). The references to 18789 are from mDNS. Take this one for example:
Hmmm, even with sudo access Clawdbot has some sandboxing/protections. In a real environment that's good but I kinda intend for him to have full access to this VM. Gotta change that.
Okay, this is kind of amazing. I wanted to give him a browser so he could surf the net but ran into an error. I was going to fix it myself but said, "Hey man, there is a dpkg in your home directory. Go ahead and install it. You'll have some errors but you'll manage", and he did!
Dang, I should have created clawdbot his own host user. I'll have to take care of that later. Suppose this is a good warning if that's something you want to avoid!
He's alive!
I don't know what skills are exactly, but these seemed useful.
Initial install is easy, there's even an option for integrating with Tailscale which I already setup in the VM.
Trying out clawdbot! And I'll live tweet my experiences setting it up and using it. It's been all of my timeline and doing cool things. (see @ajs.bsky.social's post below).
I'm running this on an Ubuntu VM managed through KVM with 6 cores and 16 gigs of ram.
aaronstuyvenberg.com/posts/clawd-...
Did you know Claude models have a "magic string" to test when a model refuses to respond? If that string enters prompt context, it can be abused to break LLM workflows until context is reset.
It's the EICAR test string of the AI age. Details:
hackingthe.cloud/ai-llm/explo...
We are on the verge of the commoditization of exploitation. Every vuln will functionally have a public PoC available because attackers can generate them in minutes.
The advantage will increasingly belong to organizations that can detect, respond, and contain fast.
sean.heelan.io/2026/01/18/o...