Achieving Logical and Temporal Enforcement for Autonomous Systems
Thursday, October 08, 2020: 10:30 AM - 11:15 AM
The safety of autonomous systems depends on the correctness of their software, which, in turn, depends on whether the software computes the right result (logical correctness) and computes it at the right time (temporal correctness). For software in autonomous systems, achieving each of these two types of correctness is challenging, and achieving both is even more so. The reason is that software in autonomous systems tends to (i) have complex control flow, (ii) depend on the physical environment (Cyber-Physical Systems), (iii) have large state spaces, and (iv) execute on computer platforms whose documentation is not public, is incomplete, or is wrong. It would be desirable to prove, before an autonomous system is put into operation, that it will perform safely. But for the aforementioned reasons, this is very difficult (or perhaps impossible). An appealing alternative is to introduce enforcers. An enforcer is a small piece of code that is given a certain desired or required correctness property of the autonomous system (e.g., the altitude should be greater than a certain bound). Since the enforcer is small, its implementation (for example, in the C programming language) can be formally verified (using Frama-C, for example). Furthermore, at run time, if the normal application software attempts an unsafe operation, the enforcer overrides the action. This offers the potential to achieve the twin objectives for autonomous systems of allowing the use of complex third-party software and COTS hardware while simultaneously achieving safety. Previous work (by us and others) has focused on enforcers for logical correctness. In this paper and presentation, we present enforcers for both logical correctness and timing correctness, together with an architecture that combines the two types of correctness.
Engineering/Technical, Research & Development