Posts by Riley
Researchers warn that residential proxies used to route malicious traffic are a serious problem for IP reputation systems, since there is no clear way to distinguish attacker traffic from legitimate users on the same residential IPs.
Turboquant llama.cpp with 1-bit bonsai models is kind of insane. I just got an 8-billion-parameter 1-bit bonsai model running at 10 tokens per second with good answers on an 8GB M1 MacBook Pro, with enough free RAM left over for KDE, Vivaldi, Zed, and multiple terminals like it's nothing.
jesus christ i need a new computer and im not sure what to do about it. im not sure i can build these projects with what i got
Bradley's kid is a chip off the old block, and that apple landed right on the tree and stayed there.
I wish I posted hopeful things, I really do.
In something you don't see every day, Apple gave the FBI the real name and email address of one of its customers who was using Apple's 'Hide My Email' feature, which lets you generate random email addresses to protect your privacy. www.404media.co/apple-gives-...
It's a big reason I was "shutter the Olympics" before all of this happened. Which is now, of course, doubly so after this decision.
i have problems with the Olympics from a foundational perspective before we even get to trans rights. Have you ever dug into what it costs to be an Olympian? What MOST of them can make after all of those expenditures and loans? It's a cycle of debt and financial ruin when you deep-dive it.
so by trying to turn an 8GB 2020 M1 into an ai experiment lab i somehow ended up spending the last 6 hours getting Hyprland and Quickshell running.
this is how addiction starts
Transformer Shortage Threatens AI Chip Factories
#AI #Semiconductors #InfrastructureBottleneck #AusNews
thedailyperspective.org/article/2026-03-25-trans...
I think the bigger question for us isn't just "Can we trust the AI's constitution and safety guardrails?" It's at least as much "Can we trust absolutely every user in our org to be responsible when using an LLM?" For me, the answer to the first is "it depends on which LLM," and to the second, HELL NO.
📅 Delighted to announce that I'll be delivering the keynote at Cybercon Staffordshire on Weds 8 April, at the Wade Conference Centre, Stoke-on-Trent.
I'll be discussing how your AI workforce might actually be your biggest security risk.
Free tickets: www.grahamcluley.com/cybercon
This seems bad
An excerpt from the 2024 book Play Nice: The Rise, Fall, and Future of Blizzard Entertainment, telling the story of how author Andy Weir was fired from Blizzard
This past weekend, the new movie Project Hail Mary was a smash hit, bringing in nearly $141 million at the box office.
But many years ago, before he was writing novels adapted into mega-hit films, Andy Weir was fired from his dream job... at Blizzard Entertainment. Excerpt from my latest book:
This morning's Straylight Sentinel Intelligence Brief for all of my #cybersecurity and #infosec friends may have been late because someone insisted on watching @btsofficialtweets.bsky.social BTS The Comeback Live this morning. But I cannot confirm or deny that.
Hello #cybersecurity and #infosec people. Here's your edition of the Straylight Sentinel Intelligence Report and Podcast.
That moment you realize you just spent 15 hours yesterday building an Ubuntu server on your own time so you could then wipe that server and install a different operating system today, because reasons
Thanks to AI-driven exploit dev, no hardware or OS is "secure" by default anymore. We have to move toward friction-based defense. A 24-hour timer is a simple, effective tool to slow down an adversary that never sleeps.
From a vendor standpoint, this also addresses liability. If a user bypasses multiple warnings and waits out a day-long timer to install a malicious file, the OS provider has done its due diligence to prevent a catastrophe.
In this environment, time is the defender's only remaining lever. By forcing a 24-hour wait period for unsigned code, we break the "instant-pwn" cycle. It creates a window for automated Play Protect scans to catch a new signature before it can execute.
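To be clear, this isn't Google's actual implementation, and every name below is made up. It's just a toy sketch of the friction idea in that post: record when an unsigned package is first requested, and refuse installation until the hold window has elapsed.

```python
import time

HOLD_SECONDS = 24 * 60 * 60  # the 24-hour friction window


class InstallGate:
    """Toy model of a delay-based install gate for unsigned packages.

    Tracks when each unsigned package was first requested and only
    approves installation once the hold period has elapsed, giving
    automated scanners a window to flag the package first.
    """

    def __init__(self, hold_seconds: int = HOLD_SECONDS, clock=time.time):
        self.hold_seconds = hold_seconds
        self.clock = clock        # injectable clock, handy for testing
        self.first_seen = {}      # package id -> first request timestamp

    def request_install(self, package_id: str, signed: bool) -> bool:
        # Signed packages skip the friction window entirely.
        if signed:
            return True
        now = self.clock()
        # Remember the first time this package was requested.
        start = self.first_seen.setdefault(package_id, now)
        # Deny until the full hold period has passed since first sight.
        return (now - start) >= self.hold_seconds
```

The point of the design is that the gate itself does no detection at all; it only buys time for whatever scanning runs during the hold.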
The goal for these agents is often the deployment of bespoke malware via custom APKs. Whether a user is social-engineered into a manual install or an exploit chain triggers a remote download, the objective is the same: code execution.
These AI agents operate in "autonomy mode," acting as a 24/7 automated red team. They methodically develop, test, and deploy exploits, systematically cycling through every known disclosure to find a way into a target device.
We are currently tracking multiple APT (Advanced Persistent Threat) groups and state-sponsored actors using LLM-based coding agents. These entities ingest entire vulnerability databases from CISA and ENISA to feed their development pipelines.
Google’s decision to implement a 24-hour timer for non-signed APKs is a necessary response to a massive shift in the threat landscape. As a defender, I see "doing nothing" as a non-starter. Here is why this delay matters from a cybersecurity perspective.
There's no point