#MLSecurity
Vulnerable MCP Servers Lab: 9 ways to boost ML security
The Vulnerable MCP Servers Lab delivers integration training, demos, and instruction on LLM attack methods.

ML models are only as strong as the servers behind them. Check out "Vulnerable MCP Servers Lab: 9 ways to boost ML security" and lock down your AI stack: jpmellojr.blogspot.com/2026/02/vuln... #MLSecurity #AppSec #CyberSecurity #AI #MCPlab


#ACSAC Test-of-Time Award

@acsacconf.bsky.social awarded a Test-of-Time Award to "CUJO: Efficient Detection and Prevention of Drive-by-Download Attacks" (2010) by K. Rieck, T. Krueger, and A. Dewald.

www.bifold.berlin/news-events/...

@rieck.mlsec.org @tuberlin.bsky.social #MLSky #MLSecurity #AI

Small language models step into the fight against phishing sites - Help Net Security
Small language models (SLMs) offer new ways to spot phishing on websites and give teams room to improve detection on their own systems.

☝️ New research shows SLMs can detect phishing websites with high accuracy — machine learning is becoming a sharper shield against social engineering. 🤖🛡️ #PhishingDetection #MLSecurity
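The article does not describe the SLMs' feature set. As a point of contrast, here is a minimal sketch of the classical lexical heuristics that rule-based phishing detectors often apply to URLs; the specific thresholds below are illustrative assumptions, not taken from the research:

```python
# Toy lexical red flags often used as phishing signals (illustrative only;
# the SLM approach in the article works on page content, not these rules).
from urllib.parse import urlparse

def phishing_score(url: str) -> int:
    """Count simple red flags in a URL; higher means more suspicious."""
    host = urlparse(url).netloc
    flags = 0
    flags += "@" in url                        # user-info trick: http://good.com@evil.com
    flags += host.count("-") > 2               # many hyphens in the host
    flags += host.count(".") > 3               # deep subdomain nesting
    flags += len(url) > 75                     # unusually long URL
    flags += any(ch.isdigit() for ch in host)  # digits in the hostname
    return flags

print(phishing_score("http://paypal.com.secure-login-44.example-verify.biz/update"))
print(phishing_score("https://example.com/about"))
```

Rules like these are exactly what attackers learn to sidestep, which is the gap learned models aim to close.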


Beyond data filtering, real-time fact-checking and architectural improvements in LLMs are vital. Building models inherently more resistant to adversarial inputs is a key challenge for future development. #MLSecurity 5/6

Sentry Enables Fast GPU Authentication for ML Artifacts

Sentry adds GPU-accelerated cryptographic signing, verifying ML datasets in seconds on a single GPU and achieving orders-of-magnitude speedup versus CPU-only baselines. getnews.me/sentry-enables-fast-gpu-... #sentry #gpu #mlsecurity
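The post gives no implementation details for Sentry's GPU pipeline, so here is a CPU-only baseline sketch of the same workflow it accelerates: stream-hash the artifact, then authenticate the digest. HMAC-SHA256 stands in for a real signature scheme (e.g. Ed25519 via a signing library):

```python
# CPU-only baseline for ML-artifact authentication (not Sentry's code).
import hashlib
import hmac
import os
import tempfile

def digest_artifact(path: str, chunk_size: int = 1 << 20) -> bytes:
    """Stream the file through SHA-256 so multi-GB datasets fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.digest()

def sign(digest: bytes, key: bytes) -> bytes:
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify(digest: bytes, tag: bytes, key: bytes) -> bool:
    return hmac.compare_digest(sign(digest, key), tag)

# Sign a toy artifact, then check that verification round-trips.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4096))
    path = f.name
key = os.urandom(32)
tag = sign(digest_artifact(path), key)
print(verify(digest_artifact(path), tag, key))   # True
```

The hashing step dominates for large datasets, which is presumably where the GPU speedup the post describes comes from.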

Li et al.'s "FedCAP: Robust Federated Learning via Customized Aggregation and Personalization"

Launching the session was Li et al.'s "FedCAP: Robust Federated Learning via Customized Aggregation and Personalization," showing a novel solution tackling data heterogeneity and Byzantine threats. (www.acsac.org/2024/p...) 2/6
#MLSecurity #CyberSecurity #AI
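FedCAP's customized aggregation is not reproduced here. As a minimal illustration of the Byzantine-robustness problem it addresses, the classic coordinate-wise-median baseline shows how robust aggregation blunts poisoned client updates:

```python
# Coordinate-wise median: a classic Byzantine-robust baseline for
# federated aggregation (FedCAP's actual scheme is more sophisticated).
from statistics import median

def robust_aggregate(updates: list[list[float]]) -> list[float]:
    """Aggregate client model updates by taking the median per coordinate."""
    return [median(coord) for coord in zip(*updates)]

honest = [[0.9, -0.1], [1.0, 0.0], [1.1, 0.1]]
byzantine = [[100.0, -100.0]]                  # one poisoned client
print(robust_aggregate(honest + byzantine))    # stays near [1.0, 0.0]
```

With plain averaging, the single poisoned update would drag the aggregate to roughly [25.8, -24.9]; the median keeps it close to the honest consensus.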

Ferens et al.'s "Securing PUFs via a Predictive Adversarial ML System by Modeling of Attackers"

Ending the session, we saw Ferens et al.'s "Securing PUFs via a Predictive Adversarial ML System by Modeling of Attackers" highlighting advances in defending #IoT devices against ML-based #PUF attacks. (www.acsac.org/2024/p...) 6/6
#Cybersecurity #MLSecurity


📢 Machine Learning Security in Practice

Thanks to Kathrin Grosse (IBM Research Zurich) for providing insight into ML vulnerabilities and the process of moving from theory to practice in security!

#RedeCIGUS #FondosEuropeos
#MLSecurity #AI #Cybersecurity #MachineLearning #CiTIUSTalks

NIST Adversarial ML Guidance: How RL Can Secure Your Organization
The new NIST guidance identifies the adversarial ML challenges. Here's why Spectra Assure should be an essential part of your solution.

🤖 New guidance from NIST identifies challenges with #MLsecurity, making it a solid resource. 🤔 However, it doesn't offer a total solution for #SecuringAI: www.reversinglabs.com/blog/nist-ad...

#ML #AI #Cybersecurity

https://secure.software/pypi/packages/aliyun-ai-labs-snippets-sdk

⚠️🧵 RL's automated detection system has detected 2 #PyPI packages containing malicious #AI models:

secure.software/pypi/package...

secure.software/pypi/package...

#AISecurity #MLSecurity #Dev

Practical Model Signing with Sigstore – Mytechnews
In partnership with NVIDIA and HiddenLayer, and as part of the Open Source Security Foundation, we are now launching the first stable version of our model s

🔏 Secure Model Signing Made Simple with Sigstore!

Ensuring ML model integrity just got easier.
#MLSecurity #Sigstore #MachineLearning #DevOps #AI #Cybersecurity #ArdaGuler #Strasbourg #IagoAspas #FCNSCO #Ancelotti #TheVoice #Courtois #RCSAPSG #MayThe4thBeWithYou

www.mytechnews.co/sensible-man...

AI Security in the Cloud: Strategies for Azure and AWS
As artificial intelligence (AI) becomes an operational cornerstone across industries, organizations increasingly deploy machine learning (ML) and AI workloads in the cloud. Azure and AWS, the two dominant cloud platforms, offer rich toolsets to support scalable, secure, and compliant AI operations. But with innovation comes risk. AI workloads process sensitive data, often operate autonomously, and can present novel attack vectors if not adequately protected.

Securing AI in the cloud is mission-critical. I break down how to lock down AI workloads in Azure and AWS—from encryption to threat detection. Read the latest on #CloudSecurity via #CloudDailyWire. #AI #AWS #Azure #DevSecOps #Cybersecurity #MLsecurity #CloudComputing


Join Behnaz Karimi and Yuvaraj Govindarajulu at OWASP Global AppSec EU 2025 in Barcelona on May 29!

🔗 Register: owasp.glueup.com/eve...

#OWASP #AppSecEU2025 #Ransomware #AIsecurity #MLsecurity #GenAI #CyberThreats #Barcelona #OWASPAI

Top Techniques for Scam Detection in ML
Scam risk is rising in the digital world, posing a significant threat to businesses and individuals alike. Machine learning can provide a powerful tool for dealing with th...

It's high time to stop relying on traditional fraud detection: you're playing defense with outdated tools.

Here is the tool you actually need: www.webbuddy.agency/blogs/top-te...

#ScamDetection #MachineLearning #FraudPrevention #CyberSecurity #AI #MLSecurity

Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection
Researchers discovered two malicious ML models on Hugging Face exploiting "broken" pickle files to evade detection, bypassing Picklescan safeguards.

🧵 1/7 Breaking: Researchers discovered malicious ML models on Hugging Face using a novel "broken pickle" technique to evade security scanning. Here's the fascinating technical breakdown of how attackers bypassed Picklescan protections... #MLSecurity #AI
thehackernews.com/2025/02/mali...
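Some context on why pickle-based model files need opcode-level scanning at all: pickle's GLOBAL/STACK_GLOBAL and REDUCE opcodes can import and call arbitrary callables at load time. A minimal Picklescan-style pass using the stdlib pickletools module (not Picklescan's actual rule set) might look like this; note that a truncated stream must be flagged as suspicious, since the "broken pickle" trick in the article abuses scanners that abort on malformed input while some loaders keep deserializing:

```python
# Inspect a pickle's opcodes without ever executing it (sketch, not Picklescan).
import io
import pickle
import pickletools

def risky_opcodes(data: bytes) -> list[str]:
    """List import/call opcodes found in a pickle, without unpickling it."""
    found = []
    try:
        for opcode, _arg, _pos in pickletools.genops(io.BytesIO(data)):
            if opcode.name in {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}:
                found.append(opcode.name)
    except Exception:
        # Malformed/truncated stream: must be treated as suspicious, not clean.
        found.append("TRUNCATED")
    return found

benign = pickle.dumps({"weights": [0.1, 0.2]})
print(risky_opcodes(benign))   # [] — plain data, no imports or calls

class Evil:
    def __reduce__(self):
        # On pickle.load() this would call print("pwned"); dumping it is safe.
        return (print, ("pwned",))

mal = pickle.dumps(Evil())
print(risky_opcodes(mal))      # flags STACK_GLOBAL and REDUCE
```

Safer serialization formats for weights (e.g. safetensors) avoid this class of problem entirely by carrying no executable opcodes.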


SLSA and Sigstore are a good first step toward protecting ML models from attack. But they're not a panacea. #AISecurity #MLSecurity #SupplyChainSecurity #Sigstore #SLSA
jpmellojr.blogspot.com/2023/11/how-...
