Join #UofT at #TorontoTechWeek on May 26 for the Desjardins Speaker Series with Reynold Xin, Co‑founder of Databricks. Reynold will share how his team built Databricks from an academic research project into a business with a global footprint. Register now: uoft.me/ttw-uoft-2026
Posts by Vector Institute
Thank you to our co-hosts University of Oxford, Laboratory for AI Security Research (LASR), and SRI, all of our speakers, presenters, and participants from the Vector community and the UK who contributed to these discussions.
The conversations won’t end here.
What stood out most: the strength of Canada–UK collaboration 🇨🇦🇬🇧
Researchers from Vector Institute, Oxford, SRI, and beyond came together not just to share work, but to define what comes next.
The morning also highlighted opportunities to support this work, with an overview of funding pathways from Canada’s National Research Council: there is growing support for advancing AI safety and security research – particularly through partnerships across academia, industry, and government.
+ AI is expanding – not replacing – existing risks. As systems become more interconnected, vulnerabilities scale
+ And while technical safeguards matter, they aren’t enough on their own. Education, regulation, and human awareness will play a critical role
This morning’s panel with Tom Lovett and Sam Cohen (Oxford), Sheila McIlraith (Vector Faculty), and David Lie (Vector Faculty Affiliate) explored how AI is reshaping the security landscape as it becomes embedded in critical infrastructure.
Two insights stood out:
The mathematical foundations of AI security are still being written – and this week, that work was deeply collaborative
A quick recap of Day 3 and the close of the Foundations of AI Security Symposium with University of Oxford and Schwartz Reisman Institute:
Today’s focus was on what comes next
🧵
“I predict the AI Scientist actually marks the dawn of a new era of rapid scientific advances,” UBC computer scientist Dr. Jeff Clune claims, imagining humans reduced to curators witnessing #AI achieve scientific wonders.
@cs.ubc.ca @vectorinstitute.ai
www.scientificamerican.com/article/ai-w...
Connall Garrod (Oxford): "Cross-entropy dynamics, the Ha(dama)rd way: Diagonalizing the softmax"
Exploring mathematical structures governing cross-entropy loss dynamics through Hadamard products
These are the theoretical tools for understanding why deep learning succeeds across applications
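For readers less familiar with the object under study: the cross-entropy loss over a softmax has a notably clean gradient, softmax(z) − one_hot(y), which is the usual starting point for analyses of its training dynamics. A minimal numpy illustration (editorial sketch, not from the talk):

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability before exponentiating
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy_grad(logits, target_index):
    """Gradient of cross-entropy loss w.r.t. the logits: softmax(z) - one_hot(y)."""
    grad = softmax(logits)
    grad[target_index] -= 1.0
    return grad

logits = np.array([2.0, 1.0, 0.1])
grad = cross_entropy_grad(logits, target_index=0)
# Components sum to zero, since both softmax(z) and one_hot(y) sum to 1
```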
Vardan Papyan (Vector Faculty): "Do tokens collapse in transformers and should they?"
Recent theoretical works predicted that transformers progressively collapse token representations within sequences as depth increases
Papyan’s empirical work challenges this in supervised learning settings
The mathematical foundations of AI security remain contested, but that’s not a failure – it’s the current state of the field.
Two final presentations closed out Day 2 of our Foundations of AI Security Symposium with University of Oxford and Schwartz Reisman Institute
🧵
Nisarg Shah (Vector Faculty Affiliate): "Modern foundations of fair artificial intelligence"
Traditional fairness approaches that subscribe to monolithic views struggle with non-binary outcomes and ignore user preferences
The alternative: fairness frameworks grounded in democratic principles
Rico Angell (NYU): "Jailbreak transferability emerges from shared representations"
Angell examined 20 open-weight models against 33 jailbreak attacks
The finding: jailbreak transferability emerges from shared representations, not incidental flaws
Nandita Vijaykumar (Vector Faculty): "The surprising interplay between performance optimization and privacy in AI systems"
How performance optimizations can inadvertently create privacy vulnerabilities, and how privacy mechanisms can unexpectedly affect performance
Day 2 of our Foundations of AI Security Symposium with University of Oxford and Schwartz Reisman Institute continues
The emerging theme: security challenges don’t exist in isolation, but come from interactions between system layers.
🧵
Hassan Ashtiani (Vector Faculty): “Black-box reductions from private to non-private learning”
Creating bridges between privacy-preserving and standard ML
Stable learners can be made private – this enables reuse of sophisticated non-private methods while maintaining differential privacy guarantees
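One classic primitive behind reductions of this flavour is adding calibrated Gaussian noise to a stable (low-sensitivity) output. A minimal sketch, assuming the standard Gaussian-mechanism calibration for ε ≤ 1; the function name and parameter values are illustrative, not from the talk:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-DP via calibrated Gaussian noise.

    Uses the classic calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon,
    which is valid for epsilon <= 1.
    """
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

rng = np.random.default_rng(0)
mean_estimate = 0.42  # e.g., a statistic produced by a stable non-private learner
private_estimate = gaussian_mechanism(mean_estimate, sensitivity=0.01,
                                      epsilon=0.5, delta=1e-5, rng=rng)
```

The key point of stability: if the learner's output changes little when one training example changes (low sensitivity), the noise needed for privacy is small.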
Sheila McIlraith (Vector Faculty): “How (formal) language can help AI agents learn, plan, and remember”
How formal languages can help AI agents generalize and transfer knowledge to new situations, and how reward machines with counterfactual experiences dramatically improve efficiency
Tom Lovett (Oxford): "(An attempt at) defining some mathematics of AI security"
Defining AI security across 3 threat models, loosely grouped around where the interesting mathematical questions lie:
+ Model security
+ Data security
+ Learning security
Tom Lovett's presentation title captured the challenge: "(An attempt at) defining some mathematics of AI security."
That framing set the tone for this morning's opening sessions on Day 2 of the Foundations of AI Security Symposium – rigorous inquiry into problems that don't yet have clean answers:
🧵
The breadth of approaches signals that AI security isn't a single problem, but an ecosystem of interconnected mathematical challenges.
Stay tuned for more insights from the Symposium. The mathematical foundations of AI security are still being written, and we’re writing them together.
The day wrapped with a series of lightning talks from emerging researchers working on everything from GPM backdoors in differential privacy to fair clustering algorithms and AI cyber offense evaluation.
Michael Menart (Vector Postdoc/University of Toronto) demonstrated why DP-SGD is necessarily slow, revealing the fundamental runtime cost of privacy-preserving mechanisms: a dimension-dependent penalty minimized only at specific batch-size thresholds.
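For context, the mechanism being analyzed, DP-SGD, clips each per-example gradient and adds Gaussian noise before the parameter update – and the per-example work is exactly the runtime cost in question. A minimal numpy sketch (parameter names are illustrative, not from the talk):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD step: clip each per-example gradient, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:  # the per-example pass is the runtime bottleneck
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(clipped)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)
```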
Ruth Urner (Vector Faculty/York University) dove into PAC learning for adversarial robustness. Her key insight: VC classes are adversarially robustly learnable, but only improperly.
The dimension-dependent runtime penalty is real, and it fundamentally changes how we think about private training.
Sam Cohen (Oxford) explored optimal control theory and how machine learning provides new techniques to address the curse of dimensionality in continuous-time problems.
The challenge? Naive ML applications can fail in unexpected ways when security is on the line.
Day 1 brought together doctoral researchers and faculty for intensive tutorials on the mathematical foundations underlying AI security – the theoretical bedrock that will shape the next decade of research.
3 tutorials set the stage:
Can we mathematically define AI security?
That’s the question we tackled today, Day 1 of our Foundations of AI Security Symposium – a Canada-UK research collaboration co-hosted with University of Oxford and the Schwartz Reisman Institute.
Event graphic for the C.C. “Kelly” Gottlieb Distinguished Lecture featuring Yejin Choi, titled “The Art of (Artificial) Reasoning,” at the University of Toronto.
MacArthur Fellow and leading AI researcher Yejin Choi delivers "The Art of (Artificial) Reasoning" on April 16 as part of our Distinguished Lecture Series with the @vectorinstitute.ai. Hear insights on AI reasoning, model limitations, and new approaches beyond scaling.
Register: bit.ly/dls-yejin-choi
Remarkable 2026 complete! 🌟
Thanks to our workshop leaders, 59 poster presenters & the incredible community who brought the collaborative energy that drives Canadian AI leadership
The conversations continue → innovations we can't imagine yet 🚀
Thank you all for making Remarkable 2026 truly remarkable
🔬 50+ research posters at #Remarkable2026, spanning:
+ Voice-controlled surgery for global health
+ Ethical AI mental health support
+ Language model bias detection
+ Clean energy catalyst design
+ & many more
The future of responsible AI innovation is unfolding here 🚀
#AICanBeRemarkable
🧠 LLM Frontiers – manipulation and stagnation risks, and how to bridge the enterprise-adoption "trust deficit" by pairing AI reasoning with human-validation workflows.
Canadian AI leadership = shared industry/research expertise in action. 🤝
#Remarkable2026 #AICanBeRemarkable