Keynote Speakers
Speaker: Yamine Aït-Ameur
INPT-ENSEEIHT, France
Title: Empowering the Event-B method using external theories
Abstract: Recent extensions of the Event-B method enable the definition of external theories. These theories are particularly useful for defining generic data types and associated operators, as well as for proving theorems that formalise their properties. They can be used in Event-B developments to model complex system behaviours as contexts and machines. Axioms and theorems borrowed from these theories help discharge the proof obligations of the designed models and reduce the overall proof effort, since the theorems of a theory are proved once and for all. Event-B theories have been used extensively in our work to formalise several data types in various application domains.
In this talk, we introduce Event-B theories and report on the approach we have developed for the systematic use of theories, based on well-definedness conditions, to automatically generate proof obligations in Event-B formal developments. The approach is illustrated on the Event-B modelling of hybrid systems with continuous behaviours, on domain knowledge formalised for certification purposes, and on the meta-modelling of Event-B models.
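The flavour of such a theory, a generic data type with an operator and a property proved once and then reused wherever the theory is imported, can be illustrated outside Event-B. The sketch below is only an analogy in Lean, not the Rodin theory notation used in the talk; the names MyList, append and append_nil are invented for the example.

    -- Illustrative analogy only (Lean, not Event-B/Rodin theory syntax):
    -- a generic data type, an operator on it, and a theorem proved once
    -- and for all, reusable wherever the "theory" is imported.
    inductive MyList (α : Type) where
      | nil  : MyList α
      | cons : α → MyList α → MyList α

    def append {α : Type} : MyList α → MyList α → MyList α
      | MyList.nil,       ys => ys
      | MyList.cons x xs, ys => MyList.cons x (append xs ys)

    -- Proved once; available to discharge obligations in any importing model.
    theorem append_nil {α : Type} (xs : MyList α) :
        append xs MyList.nil = xs := by
      induction xs with
      | nil => rfl
      | cons x xs ih => simp [append, ih]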
Bio: Yamine Aït-Ameur has been a full professor since 2000. Since September 2011, he has been at INPT (National Polytechnic Institute) in Toulouse, France, and a member of the Toulouse research institute in computing (IRIT-CNRS). His research activities are characterised by two main aspects. On the one hand, fundamental aspects: the development and use of state-based formal modelling techniques relying on refinement and proof, in particular Event-B, and the explicit formalisation of the semantics of domain knowledge using ontology models. On the other hand, practical aspects: the development of operational applications that validate the proposed approaches. Embedded systems in avionics, engineering, interactive systems, and CO2 capture are some of the application domains targeted by this work.
Speaker: Roderick Bloem
Graz University of Technology, Austria
Title: Side Channel Secure Software
Abstract: We will present a method to analyze masked software for power side channels. Masking is a technique to hide secrets by duplication and addition of randomness. We will use the Fourier expansion of Boolean functions to find correlations between variables and secrets. If we want to be sure that software is secure, we have to look at the CPU on which it executes. We consider the IBEX RISC-V processor, find vulnerabilities, and fix the CPU.
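As a rough illustration of the kind of analysis described here (not the actual tool or the IBEX case study), one can compute Fourier coefficients of small Boolean functions to test whether a masked intermediate value correlates with the secret; the names below (fourier_coefficient, share_a, recombined) are invented for this sketch.

    # Minimal sketch: first-order leakage detection via the Fourier expansion
    # of Boolean functions.  Values are encoded {0,1} -> {+1,-1}; the
    # coefficient at the index set {secret} is the correlation of the
    # intermediate value with the secret, averaged over the fresh mask.
    from itertools import product

    def fourier_coefficient(f, n_vars, subset):
        """Fourier coefficient of f : {0,1}^n_vars -> {0,1} at the given
        index set, i.e. its correlation with the parity of those bits."""
        total = 0.0
        for bits in product((0, 1), repeat=n_vars):
            sign_f = (-1) ** f(bits)
            sign_chi = (-1) ** (sum(bits[i] for i in subset) % 2)
            total += sign_f * sign_chi
        return total / 2 ** n_vars

    # Variable 0 is the secret s, variable 1 is a uniform fresh mask r.
    share_a = lambda v: v[0] ^ v[1]              # masked share: s XOR r
    recombined = lambda v: (v[0] ^ v[1]) ^ v[1]  # shares recombined: equals s

    print(fourier_coefficient(share_a, 2, [0]))     # 0.0 -> no first-order leak
    print(fourier_coefficient(recombined, 2, [0]))  # 1.0 -> fully correlated

A masked share in isolation shows zero correlation with the secret, while an intermediate value that recombines the shares correlates perfectly; the latter is the kind of flaw such an analysis is meant to flag.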
Bio: Roderick Bloem received his M.Sc. degree in Computer Science from Leiden University, the Netherlands, in 1996, where he took a class on the semantics of programming languages with Maurice. He received his Ph.D. degree in Computer Science from the University of Colorado at Boulder in 2001. From 2002 until 2008, he was an assistant professor at Graz University of Technology, Graz, Austria; since 2008, he has been a full professor of Computer Science at the same university. He is a co-editor of the Handbook of Model Checking and has published over 100 peer-reviewed papers in formal verification, reactive synthesis, safe AI, and security. He led the Austrian National Research Network on Rigorous Systems Engineering and has served as PC chair for events including the Computer Aided Verification conference and Formal Methods in Computer Aided Design.
Speaker: Louise Dennis
University of Manchester, UK
Title: Verifying Autonomous Systems
Slides: http://www.cs.man.ac.uk/~dennisl/pubs/iFM_keynote.pdf
Abstract: Autonomous systems are increasingly being used for a range of tasks, from exploring dangerous environments to providing assistance in our homes. If autonomous systems are to be deployed in such situations, then their safety assurance (and certification) must be considered seriously.
Many people are seeking to leverage the power of machine learning to directly link inputs and outputs in a variety of autonomous systems via a statistical model. I will examine an alternative, more modular approach in which the decision-making component of the system is constructed in a way that makes it amenable to formal verification. This necessitates an integrated approach to the verification of the whole autonomous system, both in terms of validating assumptions about the way the environment external to the system behaves and in terms of the compositional verification of the various modules within the system.
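As a toy illustration of this modular view (not the MCAPL framework or any system from the talk; all names are invented), the decision component can be kept small and symbolic so that its guarantee is checkable exhaustively under an explicit assumption about what the rest of the system delivers:

    # Minimal sketch of the compositional idea: check the decision module
    # against its guarantee under an explicit environment assumption, rather
    # than treating the whole system as one opaque statistical model.

    # Environment assumption: perception only ever reports these percepts.
    PERCEPTS = ["clear", "obstacle", "human"]

    def decide(percept: str) -> str:
        """Rule-based decision module: the small, verifiable core."""
        if percept == "human":
            return "stop"
        if percept == "obstacle":
            return "replan"
        return "proceed"

    def guarantee(percept: str, action: str) -> bool:
        """Safety property: never proceed when a human is detected."""
        return not (percept == "human" and action == "proceed")

    # Compositional step: exhaustively check the decision module against its
    # guarantee for every percept allowed by the assumption.
    assert all(guarantee(p, decide(p)) for p in PERCEPTS)
    print("decision module satisfies its guarantee under the assumption")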
Bio: Louise Dennis is a senior lecturer at the University of Manchester, where she leads the Autonomy and Verification group. Her expertise is in the development and verification of autonomous systems, with interests in rational agent programming languages and architectures for autonomous systems, and a particular emphasis on ethical machine reasoning, explainability, and creating verifiable systems. Her work on verifiable autonomous systems has involved the development of the MCAPL Framework for developing verifiable programming languages.
Louise is currently a co-investigator on the Verifiability Node of the UK Trustworthy Autonomous Systems programme and on the Computational Agent Responsibility project. She is a member of the IEEE standards working group on Transparency of Autonomous Systems (P7001) and of the Technical Committee for Verification of Autonomous Systems roadmap working group.