Invited Speakers

David Basin, ETH Zurich

Title: Getting Electronic Payments Right

Abstract:

EMV is the international protocol standard for smartcard payments and is used in billions of payment cards worldwide. Despite the standard’s advertised security, various issues have previously been uncovered, deriving from logical flaws that are hard to spot in EMV’s lengthy and complex specification, which runs to over 2,000 pages.

We have formalized various models of EMV in Tamarin, a symbolic model checker for cryptographic protocols. Tamarin was extremely effective in finding critical flaws, both known and new. For example, we discovered multiple ways that an attacker can use a victim’s EMV card (e.g., a Mastercard or Visa card) for high-value purchases without the victim’s supposedly required PIN. Said more simply, the PIN on your EMV card is useless! We report on this, as well as follow-up work with an EMV consortium member on verifying the latest, improved version of the protocol, the EMV Kernel C-8. Overall, our work provides evidence that security protocol model checkers like Tamarin have an essential role to play in developing real-world payment protocols and that they are up to this challenge.

Hadas Kress-Gazit, Cornell University

Title: Formal Methods for Robotics and Human-Robot Interaction

Abstract:

What impact have formal methods made on robotics? What unique challenges does robotics bring to formal methods? How do people fit in? In this talk I will give an overview of the use of formal methods for specifying, verifying, and synthesizing robot behavior, and will discuss opportunities and challenges moving forward.

Marta Kwiatkowska, University of Oxford

Title: Adversarial Robustness Certification for Neural Networks: Progress and Challenges

Abstract:

Neural networks have made tremendous advances in the past few years, with their performance justifying deployment in a multitude of application domains, ranging from autonomous systems and robotics to finance. However, neural networks lack robustness to adversarial perturbations, leading to concerns about safety. Using illustrative examples, this lecture will give an overview of certification methodologies currently under development, which aim to provide provable guarantees on the safety and robustness of neural network decisions. These draw on convex relaxation, uncertainty quantification, and Bayesian methods, and include guarantees against adversarial perturbations and patch attacks, as well as quantitative verification, which aims to quantify the proportion of inputs that satisfy an output specification.

Byron Cook, University College London and AWS

I-DAY/FMICS Keynote Speaker