
Posts by Vector Institute


Join #UofT at #TorontoTechWeek on May 26 for the Desjardins Speaker Series with Reynold Xin, Co‑founder of Databricks. Reynold will share how his team built Databricks from an academic research project into a business with a global footprint. Register now: uoft.me/ttw-uoft-2026

1 week ago

Thank you to our co-hosts – University of Oxford, the Laboratory for AI Security Research (LASR), and SRI – and to all of the speakers, presenters, and participants from the Vector community and the UK who contributed to these discussions.

The conversations won’t end here.

2 weeks ago

What stood out most: the strength of Canada–UK collaboration 🇨🇦🇬🇧

Researchers from Vector Institute, Oxford, SRI, and beyond came together not just to share work, but to define what comes next.

2 weeks ago

The morning also highlighted opportunities to support this work, with an overview of funding pathways from Canada’s National Research Council: there is growing support for advancing AI safety and security research – particularly through partnerships across academia, industry, and government.

2 weeks ago

+ AI is expanding – not replacing – existing risks. As systems become more interconnected, vulnerabilities scale

+ And while technical safeguards matter, they aren’t enough on their own. Education, regulation, and human awareness will play a critical role

2 weeks ago

This morning’s panel with Tom Lovett and Sam Cohen (Oxford), Sheila McIlraith (Vector Faculty), and David Lie (Vector Faculty Affiliate) explored how AI is reshaping the security landscape as it becomes embedded in critical infrastructure.

Two insights stood out:

2 weeks ago

The mathematical foundations of AI security are still being written – and this week, that work was deeply collaborative

A quick recap of Day 3 and the close of the Foundations of AI Security Symposium with University of Oxford and Schwartz Reisman Institute:

Today’s focus was on what comes next

🧵

2 weeks ago
AI wrote a scientific paper that passed peer review

The arrival of AI-generated research papers marks a turning point that could radically accelerate discovery – or drown it in automated mediocrity.

“I predict the AI Scientist actually marks the dawn of a new era of rapid scientific advances,” UBC computer scientist Dr. Jeff Clune claims, imagining humans reduced to curators witnessing #AI achieve scientific wonders.
@cs.ubc.ca @vectorinstitute.ai

www.scientificamerican.com/article/ai-w...

2 weeks ago

Connall Garrod (Oxford): "Cross-entropy dynamics, the Ha(dama)rd way: Diagonalizing the softmax"

Exploring mathematical structures governing cross-entropy loss dynamics through Hadamard products

These are the theoretical tools for understanding why deep learning succeeds across applications

2 weeks ago

Vardan Papyan (Vector Faculty): "Do tokens collapse in transformers and should they?"

Recent theoretical works predicted that transformers progressively collapse token representations within sequences as depth increases

Papyan’s empirical work challenges this in supervised learning settings

2 weeks ago

The mathematical foundations of AI security remain contested, but that’s not a failure – it’s the current state of the field.

Two final presentations closed out Day 2 of our Foundations of AI Security Symposium with University of Oxford and Schwartz Reisman Institute

🧵

2 weeks ago

Nisarg Shah (Vector Faculty Affiliate): "Modern foundations of fair artificial intelligence"

Traditional fairness approaches that subscribe to monolithic views struggle with non-binary outcomes and ignore user preferences

The alternative: fairness frameworks grounded in democratic principles

2 weeks ago

Rico Angell (NYU): "Jailbreak transferability emerges from shared representations"

Angell examined 20 open-weight models against 33 jailbreak attacks

The finding: jailbreak transferability emerges from shared representations, not incidental flaws

2 weeks ago

Nandita Vijaykumar (Vector Faculty): "The surprising interplay between performance optimization and privacy in AI systems"

How performance optimizations can inadvertently create privacy vulnerabilities, and how privacy mechanisms can unexpectedly affect performance

2 weeks ago

Day 2 of our Foundations of AI Security Symposium with University of Oxford and Schwartz Reisman Institute continues

The emerging theme: security challenges don’t exist in isolation, but come from interactions between system layers.

🧵

2 weeks ago

Hassan Ashtiani (Vector Faculty): “Black-box reductions from private to non-private learning”

Creating bridges between privacy-preserving and standard ML

Stable learners can be made private – this enables reuse of sophisticated non-private methods while maintaining differential privacy guarantees

2 weeks ago

Sheila McIlraith (Vector Faculty): “How (formal) language can help AI agents learn, plan, and remember”

How formal languages can help AI agents generalize and transfer knowledge to new situations, and how reward machines with counterfactual experiences dramatically improve efficiency

2 weeks ago

Tom Lovett (Oxford): "(An attempt at) defining some mathematics of AI security"

Defining AI security across three threat models, loosely grouped by where the interesting mathematical questions lie:
+ Model security
+ Data security
+ Learning security

2 weeks ago

Tom Lovett's presentation title captured the challenge: "(An attempt at) defining some mathematics of AI security."

That framing set the tone for this morning's opening sessions on Day 2 of the Foundations of AI Security Symposium – rigorous inquiry into problems that don't yet have clean answers:

🧵

2 weeks ago

The breadth of approaches signals that AI security isn't a single problem, but an ecosystem of interconnected mathematical challenges.

Stay tuned for more insights from the Symposium. The mathematical foundations of AI security are still being written, and we’re writing them together.

2 weeks ago

The day wrapped with a series of lightning talks from emerging researchers working on everything from GPM backdoors in differential privacy to fair clustering algorithms and AI cyber offense evaluation.

2 weeks ago

Michael Menart (Vector Postdoc/University of Toronto) demonstrated why DP-SGD is necessarily slow, revealing the fundamental runtime cost of privacy-preserving mechanisms—a dimension-dependent penalty minimized only at specific batch-size thresholds.
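The runtime cost Menart describes comes from DP-SGD's need to clip each example's gradient individually before noise is added. A minimal sketch of one DP-SGD step (illustrative only – a plain linear model with squared loss and made-up parameter names, not the analysis from the talk):

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One DP-SGD step on a linear model with squared loss (toy sketch).

    Unlike plain SGD, each example's gradient is computed and clipped
    separately – this per-example work is the source of the runtime
    overhead relative to batched gradient computation.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for xi, yi in zip(X, y):                     # per-example loop: the costly part
        g = 2.0 * (xi @ w - yi) * xi             # gradient of (x·w - y)^2
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / max(norm, 1e-12)))  # L2-clip to `clip`
    noise = rng.normal(0.0, sigma * clip, size=w.shape)        # Gaussian noise for DP
    g_avg = (np.sum(clipped, axis=0) + noise) / len(X)
    return w - lr * g_avg
```

The clipping bound and noise scale here are placeholders; the batch-size thresholds from the talk concern how this per-example cost amortizes, which the sketch does not model.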

2 weeks ago

Ruth Urner (Vector Faculty/York University) dove into PAC learning for adversarial robustness. Her key insight: VC classes are adversarially robustly learnable, but only improperly.

The dimension-dependent runtime penalty is real, and it fundamentally changes how we think about private training.

2 weeks ago

Sam Cohen (Oxford) explored optimal control theory and how machine learning provides new techniques to address the curse of dimensionality in continuous-time problems.

The challenge? Naive ML applications can fail in unexpected ways when security is on the line.

2 weeks ago

Day 1 brought together doctoral researchers and faculty for intensive tutorials on the mathematical foundations underlying AI security, to understand the theoretical bedrock that will shape the next decade of research.

3 tutorials set the stage:

2 weeks ago

Can we mathematically define AI security?

That’s the question we tackled today, Day 1 of our Foundations of AI Security Symposium – a Canada-UK research collaboration co-hosted with University of Oxford and the Schwartz Reisman Institute.

2 weeks ago
Event graphic for the C.C. “Kelly” Gottlieb Distinguished Lecture featuring Yejin Choi, titled “The Art of (Artificial) Reasoning,” at the University of Toronto.

MacArthur Fellow and leading AI researcher Yejin Choi delivers The Art of (Artificial) Reasoning on April 16 as part of our Distinguished Lecture Series with the @vectorinstitute.ai. Hear insights on AI reasoning, model limitations and new approaches beyond scaling.

Register: bit.ly/dls-yejin-choi

3 weeks ago

Remarkable 2026 complete! 🌟

Thanks to our workshop leaders, 59 poster presenters & incredible community who brought the collaborative energy that drives Canadian AI leadership

The conversations continue → innovations we can't imagine yet 🚀

Thank you all for making Remarkable 2026 truly remarkable

1 month ago

🔬 50+ research posters at #Remarkable2026, spanning:

+ Voice-controlled surgery for global health
+ Ethical AI mental health support
+ Language model bias detection
+ Clean energy catalyst design
+ & many more

The future of responsible AI innovation is unfolding here 🚀

#AICanBeRemarkable

1 month ago

🧠 LLM Frontiers – manipulation and stagnation issues, and how to bridge the “trust deficit” in enterprise adoption through AI reasoning with human-validation workflows.

Canadian AI leadership = shared industry/research expertise in action. 🤝

#Remarkable2026 #AICanBeRemarkable

1 month ago