
Posts by Samuel Teuber

Heterogeneous Dynamic Logic: Provability Modulo Program Theories Formally specifying, let alone verifying, properties of systems involving multiple programming languages is inherently challenging. We introduce Heterogeneous Dynamic Logic (HDL), a framework for comb...

Our paper on Heterogeneous Dynamic Logic has been accepted at #PLDI 2026 🥳

HDL lets you reason about systems mixing different computational models (in Cyber-Physical Systems and beyond) while *reusing* existing proof infrastructure for homogeneous subsystems.

Preprint: arxiv.org/abs/2507.08581

3 days ago

Small spoiler on what I will be talking about:

4 weeks ago

I'm excited to present my work on Provably Safe Neural Network Control at @horizonomega.org's Guaranteed Safe AI online seminar on April 9th.
The talk will be based on my NeurIPS'24 paper with some updates on what I've been up to since :)
Feel free to join if you're interested:

4 weeks ago

Thank you @etapsconf.bsky.social for the great event — so many interesting talks and discussions at #etaps this year!
I also had the opportunity to present my #tacas paper on verifying behavioral equivalence of neural networks 😃

11 months ago
Revisiting Differential Verification: Equivalence Verification with Confidence | Samuel Teuber We introduce a new abstract domain for differential verification using Zonotopes and explore which equivalence properties are amenable to differential verification. Furthermore, we propose an improve...

Our paper on confidence-based equivalence verification of NNs has just been accepted to #TACAS25 🥳
I'm looking forward to presenting our new Zonotope-based abstract domain -- we also explored which equivalence properties are more/less amenable to differential verification...
teuber.dev/publication/...
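For a taste of the idea, here is a minimal sketch (hypothetical weights, not the paper's actual domain) of why zonotopes suit differential verification: affine maps are exact on zonotopes, and propagating the *difference* of two similar networks lets shared structure cancel instead of accumulating in the bounds:

```python
import numpy as np

class Zonotope:
    """Affine set {c + G @ e : e in [-1, 1]^k}."""
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)
        self.G = np.asarray(generators, dtype=float)

    def affine(self, W, b):
        # Exact image under x -> W @ x + b (affine maps preserve zonotopes).
        return Zonotope(W @ self.c + b, W @ self.G)

    def bounds(self):
        # Interval hull: center +/- sum of absolute generator contributions.
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

# Two slightly different linear layers (toy stand-ins for two networks):
W1, b1 = np.array([[1.0, 0.50]]), np.array([0.10])
W2, b2 = np.array([[1.0, 0.45]]), np.array([0.12])

x = Zonotope([0.0, 0.0], np.eye(2))   # inputs range over [-1, 1]^2
diff = x.affine(W1 - W2, b1 - b2)     # zonotope enclosing f1(x) - f2(x)
lo, hi = diff.bounds()                # tight bounds on the output difference
```

A real differential domain also needs a sound over-approximation for ReLU applied to both paired networks; this sketch covers only the linear part.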

1 year ago

Thank you for the pointer, I will definitely have a look at this :)

1 year ago

If you're at #NeurIPS this week: I'm organising a "Formal Methods and AI" dinner on Saturday evening.
You can find the details in the Whova App (-> Meet-ups -> Formal Methods and AI Dinner).

1 year ago

If you want to learn more about steering Cyber-Physical Systems with Neural Networks and *without* crashes, visit my #NeurIPS poster 4201 Thursday afternoon!

1 year ago

This is really cool!
I'm obviously nitpicking here, but I think it's very meta that quite a few papers on Neural Network Verification are right around the "decision boundary" of the Adversarial Robustness cluster...

1 year ago

Perfection

1 year ago
GitHub - samysweb/NCubeV: The Nonlinear Neural Network Verifier

Our verification tool for polynomial arithmetic specifications (possibly with propositionally complicated structure) for (ReLU) neural networks is also open source and can be found here:

1 year ago
Provably Safe Neural Network Controllers via Differential Dynamic Logic While neural networks (NNs) have potential as autonomous controllers for Cyber-Physical Systems, verifying the safety of NN based control systems (NNCSs) poses significant challenges for the practical...

This project was joint work with Stefan Mitsch and André Platzer. We started working on this project during my visit to CMU (SCS) and continued working on it at KIT. I'm looking forward to presenting it at #neurips2024 in Vancouver.
Preprint:

1 year ago

However, even this a priori work has an upside:
For many Cyber-Physical Systems, other people have already analyzed the abstract system, and we can reuse these control theory results for our analyses (we already did this for VCAS!)

1 year ago

Overall, our work proposes an alternative pathway to safe NNCS autonomy that does not rely on simulations for time-bounded guarantees.
Instead, we provide stronger, infinite-time horizon guarantees, but require more a priori reasoning via deductive verification in dL.

1 year ago

We also verified an adaptive cruise control system in which an NN is supposed to control the acceleration of a car following another car:
Here we showed that handing over control to the NN will *never* lead to a crash -- no matter how long the two cars follow each other!
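For intuition, the classic dL adaptive-cruise-control envelope from the literature (textbook form, illustrative only -- not our paper's exact arithmetic) reduces to a single braking-distance inequality that any controller output must respect:

```python
def may_accelerate(x_l, v_l, x_f, v_f, A, b, T):
    """Sufficient condition that the follower (position x_f, velocity v_f)
    may apply any acceleration up to A for one control cycle of length T:
    even if the leader (x_l, v_l) brakes fully, the current gap still covers
    the braking-distance difference plus the distance gained while still
    accelerating. Both cars brake with deceleration b > 0."""
    worst_case = (v_f**2 - v_l**2) / (2*b) + (A/b + 1) * (A*T**2/2 + T*v_f)
    return x_l - x_f > worst_case

# A verified NN controller only accelerates when this check passes
# (and brakes otherwise), so no reachable state ever leads to a crash.
```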

1 year ago

We also proved safety for two VCAS scenarios. In these cases, we were able to show that (under some assumptions) the NNs' actions will guarantee safety on an infinite time horizon. "Unfortunately", safe systems make for less pretty pictures... 🥲

1 year ago
Six examples of the following:
A schematic plot of two planes flying towards each other at equal vertical positions. An orange line indicates the original flight trajectory of the plane on the left, and a blue line indicates an updated trajectory that was proposed by a neural network. The latter would lead to a crash of the two planes or at the very least to a Near Mid Air Collision.


We applied the overall approach to multiple case studies including Vertical Airborne Collision Avoidance (VCAS):
Here, we analyzed NNs from prior work and found numerous concrete safety problems -- but see for yourself (any plane trajectory in the red region is BAD!):

1 year ago

We introduce a framework that:
- Supports polynomial arithmetic in a sound and complete manner
- Significantly simplifies verification for specifications with complicated propositional structure

Importantly, we can *lift* off-the-shelf NN verifiers to this setting!
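One standard way to make polynomial constraints digestible for linear-only verifiers (illustrative here; see the paper for our actual lifting) is a McCormick relaxation of each bilinear term over the current input box, tightened by branching:

```python
def bilinear_bounds(x, y, xL, xU, yL, yU):
    """McCormick envelope for the bilinear term x*y on [xL,xU] x [yL,yU]:
    sound linear lower/upper bounds that a linear-constraint NN verifier
    can consume; splitting the box makes the relaxation arbitrarily tight."""
    lower = max(xL*y + x*yL - xL*yL, xU*y + x*yU - xU*yU)
    upper = min(xU*y + x*yL - xU*yL, xL*y + x*yU - xL*yU)
    return lower, upper
```

On the unit box, e.g., the envelope brackets x*y = 0.25 at (0.5, 0.5) between 0 and 0.5; splitting the box shrinks that gap.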

1 year ago

Yes, but only because of our second contribution:
The derived specifications typically contain polynomial (nonlinear) arithmetic and may have a very complicated logical structure.
Meanwhile, SotA NN verifiers only support linear constraints and little more than conjunctions...

1 year ago

Our approach comes with many advantages:
- support for continuous-time control
- support for infinite-time safety
- no reliance on simulation which incurs approximation errors
- rigorous formal foundations in dL
- reuse of control theory results

But is it applicable in practice?

1 year ago
An overview of the safety guarantee provided by our approach:
We start off with a provably safe control envelope (upper right) and a neural network (lower left). Our objective is to prove the safety of a neural network control system. By encoding the neural network as a program in differential dynamic logic, we derive the program for which we would like to prove safety (lower right). However, reasoning about this program directly is infeasible due to the subsymbolic reasoning of NNs. Instead, we derive a specification for the NN in isolation based on the provably safe control envelope (upper left). By verifying this specification, we simultaneously perform a refinement proof which shows that the neural network control system is safe.


We start with a dL safety proof for an abstract control envelope. Our approach then derives an NN specification. We rigorously define the NN's semantics in dL via "nondeterministic mirrors".
Verification of the NN is then mirrored by a proof of infinite-time safety in dL.
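In miniature (toy stand-ins, not our actual tooling), the refinement obligation looks like this: the dL model may nondeterministically pick any action the safe envelope allows, and the NN refines it iff its concrete choice always lies in that set:

```python
def safe_actions(state):
    # Toy control envelope (assumed shape, for illustration):
    # accelerate only when the gap to the obstacle is large.
    gap = state[0]
    return {-1.0, 0.0, 1.0} if gap > 10 else {-1.0}

def nn_controller(state):
    # Stand-in for the trained network's decision.
    return 1.0 if state[0] > 12 else -1.0

def refines(controller, envelope, states):
    # Real verification discharges this obligation for *all* states
    # symbolically, via the NN specification derived from the dL proof;
    # sampling here only illustrates the shape of the obligation.
    return all(controller(s) in envelope(s) for s in states)
```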

1 year ago

I'm very excited this work is finally public as it brings together two of my favorite topics:
Dynamic Logic and NN Verification.

The first part of the paper explains how models of Cyber-Physical Systems in Differential Dynamic Logic (dL) can be used for NN Verification:

1 year ago
A schematic plot of two planes flying towards each other at equal vertical positions. An orange line indicates the original flight trajectory of the plane on the left, and a blue line indicates an updated trajectory that was proposed by a neural network. The latter would lead to a crash of the two planes.


You want to ensure that your neural network *never* crashes your control system?

Our (now accepted 🥳) #NeurIPS paper introduces:
- Reusing control theory for NN verification
- Verifying *nonlinear arithmetic* specs on NNs

This guarantees your NN won't behave like this (1/12):

1 year ago

Hello everyone! I'm giving this platform a shot now after what Twitter has become...

1 year ago