Keynote Speakers
Sanjit A. Seshia (UC Berkeley)
Title: Towards a Design Flow for Verified AI-Based Autonomy
Verified artificial intelligence (AI) is the goal of designing AI systems that have strong, ideally provable, assurances of correctness with respect to formally specified requirements. This talk will review the main challenges to achieving Verified AI and the progress the research community has made towards this goal. A particular focus will be on AI-based autonomous and semi-autonomous cyber-physical systems (CPS), and on the role of environment/world modeling throughout the design cycle. I will argue for developing a new generation of design automation techniques, rooted in formal methods, to enable and support the routine development of high-assurance AI-based autonomy, and I will describe our work on formal methods for Verified AI-based autonomy, implemented in the open-source Scenic and VerifAI toolkits. The use of these tools will be demonstrated in industrial case studies involving deep learning-based autonomy in ground and air vehicles. I will conclude with an outlook on the future of the Verified AI agenda.
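As a concrete illustration of the scenario-based approach behind these toolkits, the following is a minimal sketch that uses Scenic's published Python API to sample concrete scenes from a scenario program. The file name badly_parked_car.scenic is hypothetical and assumed only for illustration; in practice such a file would declaratively describe a distribution over scenes for a simulator to execute.

```python
# Minimal sketch of sampling concrete scenes with Scenic's Python API.
# Assumption: a scenario file "badly_parked_car.scenic" exists; the name
# and its contents are hypothetical, for illustration only.
import scenic

# A Scenic program declaratively specifies a distribution over scenes:
# the objects present, their positions, and the constraints among them.
scenario = scenic.scenarioFromFile("badly_parked_car.scenic")

# Sample concrete scenes from that distribution; each scene fixes the
# attributes of every object and can seed one simulation run.
for i in range(5):
    scene, num_iterations = scenario.generate()
    print(f"scene {i}: sampled after {num_iterations} rejection iterations")
    for obj in scene.objects:
        print(f"  {type(obj).__name__} at {obj.position}")
```

In a falsification workflow such as VerifAI's, scenes sampled this way drive a simulator while a monitor checks the formal specification, steering the search toward violating scenarios.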
Taylor Johnson (Vanderbilt University)
Title: From Neural Network Verification to Formal Verification for Neuro-Symbolic Artificial Intelligence (AI)
The ongoing renaissance in artificial intelligence (AI) has led to the advent of data-driven machine learning (ML) methods deployed within components for sensing, perception, actuation, and control in safety-critical cyber-physical systems (CPS). While these learning-enabled components (LECs) are enabling autonomy in systems such as autonomous vehicles and robots, ensuring that such components operate reliably in all scenarios is extraordinarily challenging, as demonstrated in part by recent accidents involving semi-autonomous and autonomous CPS and by adversarial ML attacks. We will discuss formal methods for assuring specifications, primarily robustness and safety, in autonomous CPS and their subcomponents using our software tools NNV (https://github.com/verivital/nnv) and Veritex (https://github.com/verivital/veritex), developed in part through the DARPA Assured Autonomy and Assured Neuro Symbolic Learning and Reasoning (ANSR) programs. These methods have been evaluated in CPS development with multiple industry partners in the automotive, aerospace, and robotics domains, and they allow for formally analyzing neural networks and their usage in closed-loop systems. We will then discuss how these methods are enabling verification for the third wave of AI systems, namely neuro-symbolic systems. We will describe a class of neuro-symbolic systems we have been developing called neuro-symbolic behavior trees (NSBTs), which are behavior trees (a form of hierarchical state machine that is becoming widely used in robotics) that execute neural networks, and for which we have been developing verification methods implemented in a tool called BehaVerify (https://github.com/verivital/behaverify). We will also discuss relevant ongoing community activities that we help organize, such as the Verification of Neural Networks Competition (VNN-COMP), held alongside the International Conference on Computer-Aided Verification (CAV) for the past few years, and the Symposium on AI Verification (SAIV) this year, as well as the AI and Neural Network Control Systems (AINNCS) category of the hybrid systems verification competition (ARCH-COMP), also held for the past few years. We will conclude with a discussion of future directions in the broader safe and trustworthy AI domain, such as ongoing projects verifying neural networks used in medical imaging analysis and in malware classifiers.
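To give a flavor of the set-based analysis underlying such tools, the following is a minimal, self-contained sketch that proves a safety property of a tiny ReLU network via interval bound propagation. This is a deliberately simplified relative of the star-set reachability that NNV and Veritex actually implement, not those tools' algorithms; the network weights and the output bound are invented purely for illustration.

```python
# Sketch: set-based neural network verification via interval bound
# propagation (IBP). A box of inputs is pushed through the network so
# that the output bounds cover *every* input in the box. Weights and
# the safety property are hypothetical, for illustration only.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate the interval [lo, hi] through x -> W @ x + b exactly."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds componentwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A tiny 2-2-1 ReLU network with made-up weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

# Input set: the box [-0.1, 0.1] x [-0.1, 0.1] around a nominal input.
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# Hypothetical safety property: the output stays below 2.0 on the box.
# If the upper bound certifies it, the property is proved for all inputs
# in the box; otherwise the result is inconclusive, since IBP only
# over-approximates the true reachable set.
print(f"output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")
print("verified" if hi[0] < 2.0 else "inconclusive (bound too loose)")
```

The tightness of the set representation is what distinguishes production tools: star sets and related symbolic representations yield much tighter reachable-set estimates than plain intervals, at higher computational cost.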