Dagstuhl Seminar 17071 on “Computer-Assisted Engineering for Robotics and Autonomous Systems” brought together the research communities of formal methods, robotics, and autonomous systems, the latter spanning Unmanned Aerial Systems (UAS), automated control systems, and rovers. The seminar combined foundational talks, formal methods techniques, industrial needs for enabling development of autonomous systems, challenge problems, and many discussions between the research communities.
From the seminar description, “In the absence of external control, it is highly important to make sure that autonomous systems are functionally safe. The more critical system safety is, the more important is to introduce standardized certifications to assure their correct functioning. However, though autonomous systems are increasingly involved in our everyday life, both exact formalizations of safe functionality (standards, what we want to be confident in), and methods to achieve confidence (methodologies, how we get confident in the properties we want to assure) are still scarce and not well integrated.
To move towards the initiation of such standards and the development of such technologies, in this seminar we want to discuss at an abstract level (1) what are general safety requirements for autonomous systems, (2) what are potential key technologies that could be employed to assure that those safety requirements hold, and (3) what are the obstacles on the way to the application of those technologies.”
The seminar was organized by Erika Abraham (RWTH Aachen, DE),
Hadas Kress-Gazit (Cornell University – Ithaca, US),
Lorenzo Natale (Italian Institute of Technology – Genova, IT), and
Armando Tacchella (University of Genova, IT). Slides from the talks, a seminar report, and notes from the presentations are all available online.
Professor Kristin Yvonne Rozier contributed a foundational talk on “Specification: the Biggest Bottleneck in Formal Methods and Autonomy.” The content of the talk was largely based on the VSTTE paper of the same name.
Advancement of autonomous systems stands on the shoulders of formal methods, which make possible the rigorous safety analysis autonomous systems require. An aircraft cannot operate autonomously unless it has design-time reasoning to ensure correct operation of the autopilot and runtime reasoning to ensure system health management, or the ability to detect and respond to off-nominal situations. Formal methods are highly dependent on the specifications over which they reason; there is no escaping the “garbage in, garbage out” reality. Specification is difficult, unglamorous, and arguably the biggest bottleneck facing verification and validation of autonomous systems.
We examine the outlook for formal specification, and highlight the ongoing challenges of specification, from design-time to runtime. We exemplify these challenges for specifications in Linear Temporal Logic (LTL), though the focus is not limited to that specification language. We pose challenge questions for specification that will shape both the future of formal methods and our ability to more automatically verify and validate autonomous systems of greater variety and scale. We call for further research into LTL Genesis.
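To illustrate the kind of pitfall the talk warns about (this example is my own, not drawn from the talk itself), consider the English requirement “every request is eventually granted.” Plausible-looking LTL renderings of it differ in subtle but safety-relevant ways:

```latex
% Intended: every request is eventually followed by a grant
\square(\mathit{request} \rightarrow \lozenge\,\mathit{grant})

% Too strong: demands a grant in the very next step after each request
\square(\mathit{request} \rightarrow \mathop{\mathsf{X}}\mathit{grant})

% Too weak: satisfied by a single grant, even if later
% requests are never served
\lozenge\,\mathit{grant}
```

A verifier given the second or third formula will happily report “verified” for systems that violate the designer's intent, which is exactly the “garbage in, garbage out” risk specification research aims to address.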