
Posts by Suha

A data model for Git (and other docs updates)

A data model for Git (and other docs updates) jvns.ca/blog/2026/01...

3 months ago 85 13 2 1

just noticed that more than 10,000 people are subscribed to Saturday Comics, where you get an email every week with a comic from the archives! I think we've been sending out weekly comics for almost 7 years?!?

wizardzines.com/saturday-com...

2 months ago 59 5 0 0
Preview
I want to see the claw
I respect quality software and the people who write it. And, I’ve invested years of my life in working on becoming one of these people (even if the journey...

"It has, with generative code, become harder and harder to strive towards the lions because the models produce code that is, quite literally, mid" - beautiful post by @vickiboykis.com

newsletter.vickiboykis.com/archive/i-wa...

5 months ago 30 5 2 1

Starting on this now! I haven't done a big programming thread in a while, so I'm going to try using this thread to post updates throughout the day: mute this if you don't want a flurry of hastily created prototypes filling your feed today! 😅

2 months ago 31 5 3 1

im seeing actual corporate blogs on substack and im like why would you do that to yourself

2 months ago 0 0 0 0
Hi! My name is Suha Sabi Hussain. I'm an AI/ML security engineer.

Feel free to reach out if you wanna chat AI/ML security. Contact info on my website: sshussain.me

7 months ago 1 0 0 0

It was wonderful to help AI/ML security at the company evolve from a summer internship project to an established practice. Not only did I get to work on impactful and interesting audits, research, and engineering projects, but I also got to learn from some truly brilliant people.

7 months ago 2 0 1 0

After a little over 5 years at Trail of Bits, I have decided to move on. I’m exceptionally excited about this new chapter. There’s so much more work to be done in securing AI/ML systems and I’m looking forward to what's ahead.

7 months ago 1 0 1 0
Preview
Weaponizing image scaling against production AI systems In this blog post, we’ll detail how attackers can exploit image scaling on Gemini CLI, Vertex AI Studio, Gemini’s web and API interfaces, Google Assistant, Genspark, and other production AI systems. W...

What if you sent a seemingly harmless image to an LLM and it suddenly exfiltrated your data? Check out our new blog post where we break AI systems by crafting images that reveal prompt injections when downscaled. We’re also releasing a tool to try this attack. blog.trailofbits.com/2025/08/21/w...

8 months ago 1 0 0 0
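The aliasing principle behind the image-scaling attack can be illustrated with a toy nearest-neighbor downscaler (my own sketch, not the released tool; the real attack targets the specific scaling algorithms each production system uses):

```python
# Toy illustration: with nearest-neighbor downscaling, only the pixels that
# fall on the sampling grid survive. A payload hidden at exactly those
# positions is nearly invisible at full resolution but dominates the
# downscaled image the model actually sees.

def downscale_nearest(img, factor):
    """Nearest-neighbor downscale: sample the center pixel of each block."""
    n = len(img)
    m = n // factor
    off = factor // 2
    return [[img[r * factor + off][c * factor + off] for c in range(m)]
            for r in range(m)]

# 8x8 "image": mostly 0 (background), with payload pixels (1) placed exactly
# at the positions the sampler will read.
img = [[0] * 8 for _ in range(8)]
for r in (2, 6):
    for c in (2, 6):
        img[r][c] = 1

small = downscale_nearest(img, 4)
# Only 4 of 64 pixels were touched, yet the downscaled image is all payload:
assert small == [[1, 1], [1, 1]]
```

In the actual attack the payload is rendered text (a prompt injection) rather than single pixels, and the pixel placement is tuned to the target's interpolation method.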

It delegates to the code execution agent via the orchestrator! Delegation goes from the web surfing agent to the orchestrator and then to the code execution agent. We should make that sentence less confusing! Earlier in the post, MAS hijacking is defined as prompt injection targeting MAS control flow.

8 months ago 0 0 0 0

So, we wrote a neural net library entirely in LaTeX...

1 year ago 84 15 3 3
Preview
Clio: Privacy-preserving insights into real-world AI use A blog post describing Anthropic’s new system, Clio, for analyzing how people use AI while maintaining their privacy

KNN + topic detection getting a big glow-up www.anthropic.com/research/clio

1 year ago 51 9 3 1
Advent of Papers (2024)

Rather than trying to do advent of code, I'm doing advent of papers!
jimmyhmiller.github.io/advent-of-pa...

Hopefully I can read and share some of the weirder computer-related papers.

First paper is Elephant 2000 by John McCarthy. Did you know he didn't just make lisp? Wonderful paper, worth a read.

1 year ago 95 25 5 2

trying to explain the OSI model to an american: imagine if a burger had 7 patties

1 year ago 3128 232 177 30
Preview
Discrepancy between what's in GitHub and what's been published to PyPI for v8.3.41 · Issue #18027 · ultralytics/ultralytics Bug Code in the published wheel 8.3.41 is not what's in GitHub and appears to invoke mining. Users of ultralytics who install 8.3.41 will unknowingly execute an xmrig miner. Examining the file util...

(someone used a carefully crafted branch name to inject a crypto miner into a popular Python package: github.com/ultralytics/...)

1 year ago 245 55 5 8
Preview
What To Use Instead of PGP - Dhole Moments It’s been more than five years since The PGP Problem was published, and I still hear from people who believe that using PGP (whether GnuPG or another OpenPGP implementation) is a thing they s…

Someone tried to reply to my blog post about avoiding PGP with anti-furry hate, so now I have to edit it to include more furry stickers.

soatok.blog/2024/11/15/w...

1 year ago 67 18 9 1
Preview
Women in AI: Heidy Khlaaf, safety engineering director at Trail of Bits To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces…

Women in AI: Heidy Khlaaf, safety engineering director at Trail of Bits

2 years ago 13 5 0 0

My team at Trail of Bits added modules for modular analysis, polyglots, and PyTorch to Fickling, a pickle security tool tailored for ML use cases.

Fun Fact: Fickling can now differentiate and identify the various PyTorch file formats out there.

blog.trailofbits.com/2024/03/04/r...

2 years ago 2 0 1 0
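For context on why a pickle security tool exists at all: unpickling can run arbitrary code via `__reduce__`. A minimal sketch of the class of payload such tools flag (the payload here just calls `print`; a real attack would call something like `os.system`):

```python
import pickle

# A class whose __reduce__ makes unpickling call an arbitrary function.
# Loading the resulting bytes executes the call as a side effect.
class Payload:
    def __reduce__(self):
        return (print, ("code ran during unpickling",))

data = pickle.dumps(Payload())
pickle.loads(data)  # executes print(...) during load
```

This is why ML model files built on pickle (including several PyTorch formats) deserve static inspection before loading.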
Preview
A Flaw in Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data Patching every device affected by the LeftoverLocals vulnerability—which includes some iPhones, iPads, and Macs—may prove difficult.

Thinking about Dan Kaminsky's quote this morning about the necessary lies we tell ourselves about computers. Specifically, the myth of boundaries between users. Great write-up by @lhn.bsky.social on the "LeftoverLocals" GPU vuln. Nice work by the Trail of Bits team.

2 years ago 27 9 0 0

Specifically, int.to_bytes and int.from_bytes default to big-endian, since py3.11. Previously, you had to explicitly specify which you wanted.

I wanted LE but forgot to specify, and my code failed in really non-obvious ways...

2 years ago 4 2 2 0
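The gotcha above can be reproduced in a few lines (a minimal sketch; the values are illustrative):

```python
# Since Python 3.11, int.to_bytes and int.from_bytes default to
# length=1 and byteorder="big"; before 3.11 both arguments were required,
# so code that relied on the new defaults never stated its byte order.
value = 0x0102

big = value.to_bytes(2, "big")        # b'\x01\x02'
little = value.to_bytes(2, "little")  # b'\x02\x01'

# Round-tripping with a mismatched byte order silently gives the wrong value:
wrong = int.from_bytes(big, "little")
assert wrong == 0x0201  # not 0x0102
```

Passing `byteorder` explicitly at every call site avoids the silent mismatch.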
Preview
Assessing the security posture of a widely used vision model: YOLOv7 By Alvin Crighton, Anusha Ghosh, Suha Hussain, Heidy Khlaaf, and Jim Miller TL;DR: We identified 11 security vulnerabilities in YOLOv7, a popular computer vision framework, that could enable attack…

I got to work on a security review of the YOLOv7 vision model. The blog post and report are out now!

Fun fact: There are TorchScript model differentials!

blog.trailofbits.com/2023/11/15/a...

2 years ago 3 0 0 0
Hack.lu 2023: Do's And Don'ts In File Formats - Ange Albertini

I presented at HackLu about oddities of existing file formats and lessons learned along the way.
Consider it a teaser, as I presented 1/3 of the slide deck (to be released soon).
www.youtube.com/watch?v=6OJ9...

2 years ago 8 4 0 0

Neopets taught so many kids how to code, but it taught me how to hack the system by creating multiple accounts and transferring items just up to the limit where you wouldn’t get caught. And anyway, today I’m a cyber lawyer.

2 years ago 19 4 2 0
Tweet from Mike Conover with a slide listing top information sources for AI Engineers, courtesy of @barrmanas & @AmplifyPartners:
NEWSLETTERS: 1. Import AI 2. arXiv roundup 3. The Batch
PODCASTS: 1. Latent Space 2. Gradient Descent 3. The Cognitive Revolution 4. The Gradient
COMMUNITIES: 1. Hacker News 2. OpenAI Discord 3. LangChain Discord 4. HuggingFace discussions

These lists may be useful for those of us trying to develop an alternative to ML Twitter, now that it's 40% influencer spam and 20% a war between sci-fi subcultures. I'm on some of these discords and reading some of these newsletters, but I think I'll add 2 or 3 more. #MLsky #cssky

2 years ago 14 2 2 0

Enormous thank you to PyData Amsterdam for inviting me to keynote at a beautiful venue! Slides and notes from my talk, "Build and keep your context window" are all here: vickiboykis.com/2023/09/13/b...

2 years ago 35 7 0 0
See https://www.explainxkcd.com/wiki/index.php/2044:_Sandboxing_Cycle#Transcript

I think about this a lot xkcd.com/2044/

2 years ago 140 22 5 0
Preview
Tools for Verifying Neural Models' Training Data It is important that consumers and regulators can verify the provenance of large neural models to evaluate their capabilities and risks. We introduce the concept of a "Proof-of-Training-Data": any...

ICYMI: This is **critical** work for AI ethics / safety / security / regulation right now: Verifying that a model is fitted on a given dataset.
https://arxiv.org/abs/2307.00682

2 years ago 10 2 0 0

I’ve conjectured this for years, but seeing Papernot and Shumailov on the paper makes me feel really confident in the findings: https://arxiv.org/abs/2305.17493

Existential risk 🙄🙄🙄🙄

2 years ago 4 1 0 0
Screenshot of a tweet from @huggingface on Twitter; text reads:
"We are looking into an incident where a malicious user took control over the Hub organizations of Meta/Facebook & Intel via reused employee passwords that were compromised in a data breach on another site. We will keep you updated 🤗"


So remember the "mango pudding" LLM backdooring attack? How safe do you feel using these models now?

2 years ago 2 1 1 0