Posts by AbuMuslim (أبومسلِم)

You are in danger if you keep working at a slow pace and keep denying the change.
Two days ago, I was talking about how the industry is going to change, and that it is not necessarily a bad change. We work every day to solve problems, but when a solution starts to appear, people panic. I know it may not be perfect, but it will push organizations beyond mere compliance.
For anyone talking about how WE SHOULD PATCH QUICKLY: have you ever seen a real backlog?
One way or another, you will still be relying on OSS in your project. Once that gets compromised, you will be affected too.
People will gain nothing from this other than more gatekeeping and dragging us back to the freakin caves.
Closing the source of a product will not make you safe the way people think it will. Look closely and you will notice that adversaries still target closed-source products like Microsoft's in their campaigns.
This industry was built on open source, still runs on open source, and now we want to go closed source. Reshaping the world for the worse and dragging us back to the 60s and 70s, when gatekeeping was the norm.
Again, if you think cybersecurity is just about raining down vulnerabilities, you do not know anything about cybersecurity.
Finding 1,000 vulnerabilities does not make an organization safe. It means you now have 1,000 things to deal with while the business is still moving.
And that is a different conversation from how organizations actually get hacked.
People still do not get it. We are not denying the AI revolution, and we are not denying that LLMs are reshaping the industry.
What we are calling out is the nonsense: claims that never touch the ground, overhyped marketing, and the constant flexing about how many vulnerabilities LLMs can find.
If you think keeping it closed-source will help tackle APTs, you know nothing about APTs.
So the question is not whether LLMs look impressive in a demo. The question is whether they improve signal, coverage, or efficiency enough to justify the cost and operational overhead.
Because the reality is that most serious teams have already invested real time and effort in tuning their SAST, DAST, and fuzzing pipelines.
The other issue is cost.
What does it actually cost to run an ecosystem of agents? How much infrastructure am I paying for? How many tokens will I burn before I get something useful? How much time and effort will it take to tune that setup until it gives results I can trust?
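To put a rough shape on that question, here is a back-of-envelope sketch. Every single number in it is an assumption for illustration, not a benchmark; swap in your own.

```python
# Back-of-envelope cost model for an agent-based scanning setup.
# Every number here is an assumption for illustration, not a benchmark.

PRICE_PER_1M_INPUT = 3.00    # USD per 1M input tokens (assumed)
PRICE_PER_1M_OUTPUT = 15.00  # USD per 1M output tokens (assumed)

tokens_in_per_run = 200_000  # code, tool output, prior turns (assumed)
tokens_out_per_run = 20_000  # model responses and tool calls (assumed)
runs_per_target = 50         # agent iterations per target (assumed)
targets = 100                # repos/services in scope (assumed)

cost_per_run = (tokens_in_per_run / 1e6) * PRICE_PER_1M_INPUT \
             + (tokens_out_per_run / 1e6) * PRICE_PER_1M_OUTPUT
total = cost_per_run * runs_per_target * targets

print(f"~${cost_per_run:.2f} per run, ~${total:,.0f} across the scope")
# And that is before infrastructure, retries, and the analyst hours
# spent validating whatever the agents report.
```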
Saying that AI found thousands of vulnerabilities reminds me of someone running Semgrep with default settings on a target, getting 110 findings, and then quietly skipping the part where we ask which of those are real vulnerabilities and which of them are just noise.
What is the signal quality of the shiny LLM setup compared to the old-fashioned one? If I already have a standard fuzzing and SAST pipeline in place, why exactly should I go full LLM?
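To make the Semgrep analogy concrete, here is a minimal triage sketch over a `semgrep --json` report. The `findings.json` path is a placeholder, and the field layout assumes the standard Semgrep JSON output; treat it as an assumption if your version differs.

```python
import json
from collections import Counter

# Count raw Semgrep findings by severity, before anyone validates them.
# Assumes the standard `semgrep --json` layout: a top-level "results"
# list where each finding carries extra.severity.
with open("findings.json") as f:  # placeholder path
    report = json.load(f)

by_severity = Counter(r["extra"]["severity"] for r in report["results"])
total = sum(by_severity.values())

print(f"{total} raw findings: {dict(by_severity)}")
# The headline number is `total`. The number that matters is how many
# of these survive manual review as real, reachable vulnerabilities.
```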
Now back to LLMs.
Benchmarks and comparisons almost always miss the part that actually matters: the confusion matrix. What is the false positive rate? What is the validation burden?
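Here is the kind of arithmetic a benchmark should be forced to show. All counts below are invented placeholders, purely to demonstrate the calculation:

```python
# The confusion-matrix arithmetic benchmarks tend to skip.
# All counts are invented placeholders to show the calculation.

tp = 40   # findings confirmed real after review
fp = 960  # findings that turned out to be noise
fn = 25   # known real bugs the tool missed (requires ground truth)

precision = tp / (tp + fp)  # share of reported findings that are signal
recall = tp / (tp + fn)     # share of real bugs actually found

minutes_per_finding = 20    # assumed analyst triage time per finding
validation_hours = (tp + fp) * minutes_per_finding / 60

print(f"precision={precision:.1%}  recall={recall:.1%}  "
      f"triage cost={validation_hours:.0f} analyst-hours")
# "Found 1,000 issues" at 4% precision is 1,000 tickets, not 1,000 wins.
```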
Large-scale automated bug discovery is not new, and neither is the idea of using automation to scale vulnerability research.
Google, for instance, released the ClusterFuzz ecosystem for fuzzing and vulnerability discovery, and for the people obsessed with numbers, it has already found a massive number of bugs over the years.
More recently, Fuzz4All used LLMs directly in the fuzzing loop and reported real bug findings. Then Project Zero came out with Naptime, and later Big Sleep, which for the record was publicly credited with finding a real, impactful issue: an exploitable SQLite bug caught before it landed in an official release.
There are a few takes about Mythos that need to be grounded a bit.
Mythos is not the first LLM-based project to move in the direction of vulnerability research. Google had already introduced AI-Powered Fuzzing for OSS-Fuzz in 2023, using LLMs to generate fuzz targets.
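For anyone who has not written one, a fuzz target is a tiny harness. Below is a minimal sketch using Google's Atheris fuzzer for Python, where `target_lib` is a hypothetical stand-in for the library under test; the C/C++ targets OSS-Fuzz generates follow the same shape via `LLVMFuzzerTestOneInput`.

```python
import sys
import atheris

# Minimal fuzz target using Google's Atheris fuzzer for Python.
# `target_lib` is a hypothetical stand-in for the library under test.
with atheris.instrument_imports():
    import target_lib

def TestOneInput(data: bytes) -> None:
    try:
        target_lib.parse(data)  # the entry point being fuzzed (assumed)
    except ValueError:
        pass  # rejecting malformed input is fine; crashes are not

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```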
On my way to resign from a company that doesn't accept my beard and wants me to shave it.
HR literally told me that having a long beard is "bad hygiene". This kind of mindset still exists in 2026. The irony? The next slide is always about inclusion and diversity.
Unfortunately, this kind of behavior isn't something you can easily challenge legally in Egypt. It doesn't clearly violate any law, so you're left with no real protection.
To anyone dealing with this kind of discrimination: stay strong.
Social engineering is still one of the most impactful hacking techniques, and I do not see that changing anytime soon.
Every now and then, an incident happens that reminds you that the sharpest weapon a human has is the mind. Looking at this series of supply chain compromises, whether it is TeamPCP or the North Koreans, you cannot miss the frightening beauty of human intelligence.
Grok dropped the database?
Many orgs to this day don't have real security processes or playbooks for incidents. Most of it is paperwork to pass compliance, not actual security.
Boxes get checked, audits pass, and when something breaks everyone panics for a week, then things go back to normal like nothing happened.
What happened to isolation, token protection, canaries, SCA, and secure CI/CD pipelines? It is 2026 and we are still dealing with the exact same problems we had years ago.
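None of these controls are exotic. For example, pinning a vendored artifact to a hash recorded at review time, which lockfiles and pip's `--require-hashes` mode already do, fits in one function. The path and digest below are placeholders:

```python
import hashlib
import sys

# Refuse to use a downloaded dependency unless it matches the sha256
# recorded at review time. Path and digest below are placeholders.
PINNED = {
    "vendor/somelib-1.2.3.tar.gz":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def verify(path: str) -> None:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    if h.hexdigest() != PINNED[path]:
        sys.exit(f"{path}: hash mismatch, refusing to proceed")

for artifact in PINNED:
    verify(artifact)
```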
Not the first supply chain attack, but another lesson many still haven't learned.
And just to be clear, this is not about LiteLLM. Using a 3rd party doesn’t remove the responsibility from you. You are still accountable for what you build and how you secure it.
General-purpose agents are killing every offensive-agent startup.