Don’t miss Brandon Epstein present the latest #AIUnpacked at #MVS2026 at 1:00PM ET! He’ll be discussing The State of AI at Magnet Forensics, exploring the approaches we take as a company when it comes to AI and how we’re bringing it into our products: ow.ly/iCNX50Yj8za
Join our resident #AI expert, Brandon Epstein, as he kicks off Season 2 of our popular #AIUnpacked series on Jan 21. He'll cut through the #Deepfake hype and share strategies to ensure your media evidence stands up in court.
Save your spot: ow.ly/Po5O50XY5Pr #DFIR
#AI is transforming #DFIR, but the real story starts with the humans who build the tools. In the #AIUnpacked S2 finale, Brandon Epstein talks with our AI Engineering & Product team about decisions, tradeoffs, and shipping trustworthy AI for investigators:
On Sept 17, join us for a special crossover episode of #AIUnpacked where Brandon Epstein will be joined by Heather and Alexis from the Digital Forensics Now podcast, discussing various viewpoints on the appropriate use of #AI in #DigitalForensics: ow.ly/7UWj50WWTmy #DFIR
On Aug 13, join us for our next episode of #AIUnpacked where Brandon Epstein will be joined by T3K to explore the future of AI in analyzing images and videos to identify pertinent investigative leads. Save your spot here: ow.ly/EF1E50WCPfw #DFIR
If AI can’t explain its decisions, how can we trust it?
Explainability isn’t a luxury — it’s a foundation for transparency, accountability, and human oversight.
XAI isn’t optional. It’s essential.
#XAI #AIUnpacked #ExplainableAI #ResponsibleAI
Join us on June 18 for another exciting installment of #AIUnpacked! In this episode, Brandon Epstein will examine how forensic and legal principles can be applied to the latest in AI technology when used in the pursuit of justice and public safety: ow.ly/RuEk50W9hVe #DFIR
If AI can’t explain itself, can we really trust it?
Explainability isn’t just a technical add-on — it’s essential for accountability, compliance, and human oversight.
XAI is not a luxury. It’s a necessity.
#XAI #ExplainableAI #AIUnpacked #ResponsibleAI
AI Agents: tools or decision-makers?
They act, adapt — even delegate.
This reshapes how we think about autonomy, responsibility, and control.
Who’s really in charge when agents act for us?
#AIUnpacked #AutonomousAgents #ResponsibleAI
AI, privacy, ethics.
What connects them?
✅ My new book is about to be published.
It explores how to balance innovation with the conscious, informed use of AI in the neural age.
📘 Stay tuned.
#AIUnpacked #EthicalAI #Privacy #NeuralNetworks #DigitalEthics #AI #artificialintelligence #books
We're excited to continue #AIUnpacked next week (May 14) with Brandon Epstein providing an overview of our approach to AI feature creation! Don't miss this chance to learn more about the considerations involved in deploying cutting-edge technologies: ow.ly/3JkW50VPxXp #DFIR
📘 Post 3/4 — Standardization & opt-out
Creators need legal certainty, not guesswork.
🇪🇺 Standard identifiers & opt-out tools (per DSM Directive) are key to balance innovation & rights.
Standards are not bureaucracy — they are fairness at scale.
#AIUnpacked #LawMeetsStandards #DigitalCulture
📘 Post 2/4 — Copyright & transparency
AI can generate — but at what cultural cost?
Europe urges protection of copyright & full disclosure of training data.
Transparency isn’t optional. It’s cultural justice.
#GPAI #Copyright #AIUnpacked #DSMDirective
📘 Post 1/4 — Introduction
🇪🇺 Culture & creativity matter — even in the age of AI.
Italy, Portugal & Spain call for action: GPAI must respect Europe’s cultural fabric.
Copyright, transparency, legal certainty — all on the table.
#AIUnpacked #DigitalCulture #AIAct
Not everything that can be predicted should be.
AI’s power to anticipate behavior risks normalizing surveillance and control.
Prediction isn’t always progress.
Where do you draw the ethical line?
#AIUnpacked #EthicsInAI
Privacy isn’t a bug in AI — it’s a boundary.
When models train on personal data, where does learning end and surveillance begin?
Technical power must meet legal and ethical restraints.
#AIUnpacked #PrivacyMatters #DataProtection
Did you miss the first episode of our new #AIUnpacked series? Catch up on the basics around #AI & #DFIR with our expert, Brandon Epstein, in this on-demand session: ow.ly/CVih50VJNUR
AI needs data. Privacy needs limits.
Do consent, minimisation, and purpose limitation hold up in the age of LLMs and foundation models?
When data is fuel, who controls the pipeline?
#AIUnpacked #PrivacyByDesign #DataProtection
AI systems don’t “think” — they optimise.
We must stop projecting human traits onto models.
Anthropomorphising AI confuses users, skews expectations, and distorts accountability.
Precision in language isn’t pedantic. It’s essential.
#AIUnpacked #ResponsibleAI #AI #artificialintelligence
Not all AI is intelligent.
Some models predict well but understand nothing.
Should we call it Artificial Pattern Recognition instead?
Words matter — especially in policy, ethics, and trust.
#AIUnpacked #LanguageMatters
Brandon Epstein delves into Magnet Forensics' new "AI Unpacked" webinar series, offering a deep dive into the realities of AI in digital forensics and how it's shaping the future of investigations. www.forensicfocus.com/podcast/ai-u... #MagnetForensics #AIUnpacked #DigitalForensics #AI
Don't forget to save your spot for our first episode of #AIUnpacked on Wednesday, April 16! Join us as noted #ArtificialIntelligence expert Brandon Epstein covers the core concepts around AI in #DigitalInvestigations: ow.ly/xFxQ50VA9A2 #DFIR
We're kicking off our brand new #AIUnpacked series in one week! Join us on April 16 as Brandon Epstein offers a primer on AI within #DigitalForensics and how you can be better equipped in your #DigitalInvestigations. Register here: ow.ly/VAjM50VxJrJ #DFIR
AI doesn’t just reflect the world — it can amplify its distortions.
Bias and hallucinations aren’t glitches; they’re mirrors.
Auditing isn’t enough if data or goals are flawed.
How do you address bias and hallucinations in AI?
#AI #AIUnpacked #FairnessByDesign
AI rules are emerging fast, but fragmented.
- EU: AI Act.
- OECD: Principles.
- ISO/IEC: Standards.
Governance is needed.
But so is interoperability.
#AIUnpacked #GoverningAI #AI
(3/3)
Why does ISO/IEC 42001 matter?
✅ Bridges legal & technical governance (e.g., EU AI Act readiness)
✅ Builds internal accountability & trust
✅ Supports interoperability across jurisdictions
Could this become the GDPR moment for AI governance?
Thoughts welcome.
#GoverningAI #AIUnpacked
(1/3)
ISO/IEC 42001:2023 has been officially published.
It’s the first international standard for AI Management Systems (AIMS).
But what does it bring to the table for organizations developing or deploying AI?
A short thread 🧵
#AIUnpacked #LawMeetsStandards
#AI is revolutionizing #DFIR, with exciting advancements happening at a breakneck pace. In our new #AIUnpacked webinar series, Brandon Epstein will break down the latest in AI within #DigitalForensics, helping you make informed decisions when deploying it: ow.ly/Z0sR50Vahlo
We're thrilled to announce a brand new webinar series tackling the latest in #AI within #DFIR: #AIUnpacked with Brandon Epstein! Hear from "Brandon" below about the series.
You can learn more and register for the first episode here: ow.ly/YUNE50V6ZbE