Workshop-01 • Design and Synthesis of Certifiably Safe AI-Enabled Cyber-Physical Systems, with a Focus on Human-in-the-Loop Human-in-the-Plant Systems

Organizers: Dr. Ayan Banerjee, Dr. Sandeep K.S. Gupta, Dr. Imane Lamrani

Summary: The advent of large language models (LLMs) and generative AI has introduced uncertainty into the operation of autonomous systems, with significant implications for safe and secure operation. This has led to the US government directive on assurance and testing of the trustworthiness of AI. This tutorial introduces the audience to the emerging safety issues of AI-enabled autonomous cyber-physical systems (CPS) and their impact on dependable, safe design for real-life deployments. With the adoption of LLMs and deep AI methods, CPS are becoming increasingly vulnerable to uncertainties. The tutorial presents a new human-in-the-loop human-in-the-plant design philosophy geared toward assured certifiability in the presence of human actions and AI uncertainties, while reducing data sharing between the CPS manufacturer and the certifier. We will survey the landscape of informal and formal approaches to ensuring AI-based CPS safety at every phase of the design lifecycle, identify the gaps, review current research aimed at filling them, and present tools for detecting commonly occurring software failures such as doping. The tutorial also emphasizes the need for operational safety of AI-based CPS and highlights the importance of explainability at every stage for enhancing trustworthiness.

There has been significant research in model-based engineering attempting to solve this design problem. Observations from the deployment of a CPS are used to: a) ascertain whether the CPS as used in practice matches the proposed safety-assured design; b) explain the reasons for a mismatch between CPS operation and the safety-assured design; c) generate evidence to establish the trustworthiness of a CPS; and d) generate novel practical scenarios in which a CPS is likely to fail.