Since I discuss risks and hype so much, there seems to be some confusion around my perspectives on AI. I’ve tried to document some of my thoughts here. I’m not a skeptic. I’m a critic. perilous.tech/my-perspecti...
Posts by Nathan Hamiel
Outrageous claims are made regarding AI and abundance. Claims so absurd that they have to be true, right? Well, other than the math not making sense, there are lessons we can learn about abundance from Napster. Because when things become free, they are worth nothing.
perilous.tech/what-napster...
Recently, there have been plenty of slot machine analogies applied to AI, but Jacques Ellul made this association back in the 1950s. He observed how humans participate less and less in technological creation and are reduced to mere catalysts.
My cat says no conference calls today!
I may still write about this zero-introspection situation, but this is an excellent article. One omission is that many people attribute to skill what should be attributed to chance. "You’re a 15 second sliding context window." 😆
www.theverge.com/tldr/897566/...
People are losing their jobs to AI, not because of its capabilities, but because of the mere idea of AI. Companies that AI-wash layoffs see short-term gains, but these may translate into long-term losses. Let the great AI washing begin. perilous.tech/the-great-ai...
Time is closing in. The Black Hat USA CFP closes this Friday. For the AI track, we are looking for offensive, defensive, and applied topics. Let's see what you got! Let me know if you have any questions. blackhat.com/call-for-pap...
Does someone need to take Zuck's phone away from him? No nanna, that's not a real Nigerian prince!😆
gizmodo.com/mark-zuckerb...
What we get is manipulation and unintended consequences. Our modern environment is stripping the very defenses we need to stay robust.
Seneca said that the excellence of mind cannot be borrowed or bought. However, that’s exactly what’s being pitched with generative AI. In the end, we don’t get wisdom or AGI from turning books into statistics. perilous.tech/transforming...
The next few years will require vigilance and the ability to envision trade-offs even when no evidence of trade-offs is apparent. These are essential skills in a world that prioritizes dehumanization. This starts with not confusing innovation with progress. perilous.tech/confusing-in...
Absolutely.
Kurzweil lays out this exact same setup in The Singularity Is Nearer. He talks about how having external cloud storage increases memory capacity, and there'll be no difference between brain and cloud processing.
I think the problem is that this is fairly unrealistic in practice. The hacking of your own brain chip setup (assuming it's possible) is something 99% of people on the planet wouldn't do. They'll take the fully working solution with the built-in cloud storage, so they can use the system anywhere.
We need to get much better at envisioning tradeoffs. A symbiosis with AI would mean that we would never know if a thought or memory we have is truly our own. It’s the end of private thoughts and the beginning of a whole new world of manipulation and unintended consequences.
Generative AI is one of the most manipulable technologies ever invented, and shoving it into systems creates an increased attack surface and unintended consequences. The future of warfare is gonna be lit, in some cases literally.
Pretending we’ve achieved AGI and ignoring all of the issues is not an effective control when slapping generative AI into high-risk, safety-critical use cases. While many point to reliability and human responsibility in military use, many aren’t addressing the security aspects.
I first met FX in the early 2000s. We had so many laughs, so many memories. Hell, during just one notorious hacker trip in 2009, there were enough memories to last a lifetime. He will be missed.
Also, I’ve previously written up some observations and guidance to think about when submitting to the AI track at Black Hat. perilous.tech/black-hat-ai...
The Black Hat USA call for papers is open. This will be our 6th year of having a dedicated AI track. If you have some interesting AI research, be it attacking, defending, or applying AI, we’d love to see it. Please let me know if you have any questions. blackhat.com/call-for-pap...
The biggest hot take of the past few weeks is that software is dead. But is it really? Seems there are some fundamental realities not being considered. Regardless of success, software vulnerabilities will be absolutely everywhere. Welcome to the new reality. perilous.tech/the-death-of...
This Clinejection write-up is great, and I learned some things about GitHub actions caching, too. We experienced the same during our research for our Black Hat USA 2025 talk on attacking AI-powered developer productivity tools. adnanthekhan.com/posts/clinej...
If there was a killer use case for this "powerful agentic experience," surely they'd be touting it. But instead we are sold the ability to do things we can already do, just with less security and privacy.
I'll be speaking at Applied Machine Learning Days in Switzerland next week on the topic of AI Secure By Design. I discuss our AI Actor-based threat analysis method to simplify threat identification and get to value quickly.
MoltMatch screenshot
Proof that dudes will engineer systems burning hotter than the sun to avoid actually talking to women. Women, who I imagine are flocking in droves to this site 😆 This is going great! The crypto aspect is the icing on the cake. The trajectory is clear.
Here we continue our technical write-ups of the exploitation of AI-powered developer productivity tools from Black Hat USA with Qodo. The takeaway here is that knowing prompt injection isn’t enough.
kudelskisecurity.com/research/qod...
Neil Postman quote
Literacy is our greatest weapon to remain robust and defend our humanity in this invasive, modern environment. Here, I recommend 7 books to create more robust humans. And yes, Huxley was right.
perilous.tech/7-books-for-...
Hmm... The previous term was terrifying. Where could we look to find something more palatable? I know, dystopian science fiction!!!
The lengths people won't go to get themselves owned. This has been happening since 2023 with AutoGPT, only now with deeper access. This isn't rocket science, if you give something insecure complete and unfettered access to your system and sensitive data, you're going to get owned.
Wow, I said the exact same thing back in 2024 from the stage at AgileDevOps USA. It included the specific number of 14B in losses as well. I was explaining the possibility that OpenAI could go out of business in a few years.