Embodied
AI (eAI) uses artificial intelligence based on machine learning to
interact with the physical world. We are already seeing eAI
deployed in the real world in robotaxis, smart medical devices,
household robots, and other applications. However, the field is
still struggling with the safety of these devices: how to design
for safety, how to evaluate safety, and how to judge whether any
particular eAI system is acceptably safe.
This talk provides an overview of my new book on this topic, with
robotaxi safety as a concrete example. Anyone working in this area
needs a basic understanding of four core disciplines: safety
engineering, cybersecurity engineering, machine learning
technology, and human-computer interaction. The talk also
discusses eAI safety issues in the wild, the complexities of
establishing what risks might be acceptable, and open challenges
in eAI safety. A proposal to reimagine safety engineering
responds to the deep disruption that eAI technology creates for
traditional computer-based system safety approaches. The
talk finishes with a call to build justifiable trust in eAI
safety.