The Register's reporting shows it wasn't just him — multiple Hungarian officials used "password123" variants for government email. The breach exposed credentials before an election, which raises questions about whether poor password hygiene was incompetence or something more deliberate.
The reports blame an unpatched API, but voter databases shouldn't expose unauthenticated endpoints at all. Cambridge Analytica showed what happens when campaign data architecture treats authentication as optional — and that was nearly a decade ago.
undercodetesting.com/trump-vance-...
According to multiple sources, this appears to be Harvard's second breach in recent months — the earlier one via an Oracle zero-day. When credential resets become routine response rather than exception, you're treating symptoms of a deeper authentication problem.
AI agents don't come with birth certificates.
They're spun up, assigned tasks, handed credentials, and left to do their work. Sometimes they're given shared API keys that outlive the projects they were built for. Sometimes…
www.linkedin.com/feed/update/urn:li:activ...
Finding out through a virus email that your data was breached months ago is exactly the transparency gap that makes breaches worse. People need time to protect themselves, not discover it after attackers are already using their data.
Doctors warning their own audiences not to trust them is where we are now. When identity becomes this unreliable, verification has to become structural — not a personal responsibility we outsource to readers.
Banking info, PayPal details, and crypto wallets in one breach is the full fraud toolkit. What's wild is how many platforms still treat financial data and login credentials as if they belong in the same database — every field you store is another recovery path for attackers.
This breach is from 2016. Credentials resurface because they still work — people reuse passwords across services for years. The threat isn't the age of the data, it's the longevity of the behaviour that makes it exploitable.
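One concrete defence against long-lived reused credentials is checking them against known-breach corpora without the password ever leaving the machine. A minimal sketch of the k-anonymity range query that Have I Been Pwned's Pwned Passwords API supports (the query URL in the comment is the real endpoint; the function name is illustrative):

```python
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 into the 5-char prefix sent to the
    range API and the suffix matched locally against the response."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts("password123")
# A real client would GET https://api.pwnedpasswords.com/range/{prefix}
# and scan the returned "SUFFIX:COUNT" lines for `suffix` locally,
# so the full hash (let alone the password) never leaves the machine.
print(prefix, suffix)
```

Because only a 5-character hash prefix is transmitted, the server learns nothing usable about which password was checked — which is what makes this safe to run against credentials you actually care about.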
We've spent 30 years training people to look for the signs of fraud: phishing, suspicious links, grammatical errors. Now the signs are gone. Deepfake technology has reached the threshold where visual and auditory cues are no longer reliable signals of authenticity. www.linkedin.com/feed/update/...
BPO contractors sitting outside your SSO, MFA, and audit logs is the access control gap nobody wants to admit exists — and attackers know it.
22 seconds from initial access to lateral movement means the attack is faster than your detection runbook. Ransomware groups now assume backups exist and actively hunt them — immutable storage isn't optional anymore.
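"Immutable" here means write-once retention the attacker can't undo even with stolen admin credentials. A sketch using S3 Object Lock as one example (the bucket name is a placeholder, and retention mode/period are illustrative choices):

```shell
# Create a bucket with Object Lock enabled — it cannot be switched on
# for an existing bucket, so this has to be a day-one decision.
aws s3api create-bucket \
  --bucket example-backup-bucket \
  --object-lock-enabled-for-bucket

# Default retention in COMPLIANCE mode: no principal, including the
# root account, can delete or overwrite backup objects for 30 days.
aws s3api put-object-lock-configuration \
  --bucket example-backup-bucket \
  --object-lock-configuration \
    '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
```

COMPLIANCE mode (rather than GOVERNANCE) is the point: a ransomware operator who has already taken over your cloud account still can't shorten the retention window.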
Connecting an MCP server to prod "just to test" is the 2026 version of FTPing the database backup to your laptop. Same impulse, same risk, different acronym.
The clever bit here isn't the fake Azure alert — it's that they're betting on alert fatigue. When legitimate monitoring tools cry wolf constantly, a phishing call feels like just another fire to put out.
Identity verification to stop bots sounds reasonable until you realise it also stops every whistleblower, activist, and abuse survivor who needs anonymity. Reddit's considering it anyway. The cure might be worse than the disease. www.engadget.com/social-media...
The vulnerability is wild, but the real story is how a 71k-star project shipped with zero auth on an RCE vector. This wasn't a subtle logic flaw — it was architectural negligence.
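The fix for this class of bug isn't exotic: anything that can execute code should be deny-by-default. A minimal sketch of that check (the function name, token scheme, and hard-coded secret are all illustrative, not the project's actual API):

```python
import hmac

# Hypothetical shared secret; a real deployment would issue per-client
# credentials from a secrets manager, never a hard-coded value.
API_TOKEN = "example-token-not-for-production"

def handle_exec_request(headers: dict, command: str) -> tuple[int, str]:
    """Deny-by-default: no valid bearer token, no code execution."""
    auth = headers.get("Authorization", "")
    expected = f"Bearer {API_TOKEN}"
    # Constant-time comparison avoids leaking the token via timing.
    if not hmac.compare_digest(auth, expected):
        return 401, "unauthorized"
    # Only now would the server consider running `command` — still
    # subject to allow-listing, sandboxing, and audit logging.
    return 200, f"accepted: {command}"

print(handle_exec_request({}, "ls"))
print(handle_exec_request({"Authorization": f"Bearer {API_TOKEN}"}, "ls"))
```

Ten lines of middleware. The architectural failure isn't that this is hard to write — it's that nobody made an unauthenticated request fail before shipping.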
People stop being vulnerable to this kind of account takeover when we verify humans rather than credentials that can be stolen or extorted. www.reuters.com/world/europe...
The "theatrical vote" social engineering angle is clever — makes the 8-digit code request seem legitimate. These campaigns work because they hijack trust in the medium itself, not just exploit technical flaws.
The scariest phishing attacks don't fake the infrastructure — they hijack it. Real case IDs, real Apple emails, properly signed. When attackers can trigger legitimate systems to do their work, your security intuition breaks down.
"If it saves billions" and "shown to have effective safeguards" are both statements doing a lot of work. Systems like this prioritise convenience over safety and leave our future selves to pick up the inevitable pieces. Identity as a system of control is always open to misuse.
TSA's rolling out facial recognition at 50+ airports for "faster" security. We're trading friction for surveillance infrastructure, and the pitch is always the same: convenience now, but what are we putting in place for later? www.tsa.gov/touchless-id
The pattern here isn't really "human error beats technical defenses." It's that we keep treating authentication like a technical problem when it's actually a trust distribution problem.
Okta didn't fail because someone clicked the wrong thing. They failed because they gave a third party the keys t…
The interesting thing about the DSA's verification requirements isn't the compliance burden — it's that marketplaces now carry liability for sellers they can't properly verify.
That shifts the economic calculation entirely. Pre-DSA, marketplaces optimized for friction-free onboarding because each…
The thing people miss about these massive breaches is that we keep treating identity verification as a centralization problem when it's actually a verification architecture problem.
When you centralize a billion identity records to "verify" people, you've just built the world's most valuable honey…
We spent decades teaching people to never click suspicious links, share credentials, or trust unexpected requests.
AI assistants just made all of that advice obsolete.
The problem isn't that AI can write convincing…
www.linkedin.com/feed/update/urn:li:activ...