Autonomous AI-Enabled Robots need Ethical Guardrails with Guarantees
CAIML Colloquium with Matthias Scheutz.

On March 11, 2025, a CAIML Colloquium with Matthias Scheutz from Tufts University (US) took place at TU Wien. The event was co-organized by CAIML and BILAI.
In the presentation, Matthias Scheutz argued that putting foundation models on robots to generate actions without any checks is dangerous and reckless. While foundation models can serve many useful purposes in robotic architectures, they should never be solely in charge of a robot’s behavior due to their risky core operational principle, that of sequence prediction and completion.
Scheutz argued that the probabilistic assurances obtained by empirically sampling model behavior are not enough; safe robot operation requires firm, provable guarantees derived from model checking and other formal verification frameworks.
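To illustrate the distinction Scheutz draws, here is a minimal sketch of what a formal check can offer that sampling cannot. The toy robot model, state names, and safety property below are entirely hypothetical (not from Scheutz's actual architecture): an exhaustive reachability analysis over a small state machine proves that an unsafe state can never be entered, covering all possible executions rather than a sampled subset.

```python
# Hypothetical toy robot model: states are locations, the safety property
# is "the robot never enters the 'hazard' state". All names here are
# illustrative, not taken from any real robotic architecture.

STATES = ["dock", "corridor", "lab", "hazard"]
TRANSITIONS = {
    "dock": ["corridor"],
    "corridor": ["dock", "lab"],
    "lab": ["corridor"],
    "hazard": [],  # should be unreachable if the guardrail works
}

def reachable(start: str) -> set[str]:
    """Exhaustively compute all reachable states (a minimal model check)."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in TRANSITIONS[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# A provable guarantee: 'hazard' is unreachable from 'dock' in this model,
# so the safety property holds for *every* possible execution -- unlike
# empirical sampling, which only ever covers a finite set of runs.
assert "hazard" not in reachable("dock")
```

Real model checkers (e.g., probabilistic ones) extend this idea to far richer models, but the principle is the same: the guarantee follows from exhaustive analysis of the model, not from how many test runs happened to go well.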
About the Speaker
Matthias Scheutz is the Karol Family Applied Technology Professor in the Department of Computer Science at Tufts University's School of Engineering, and Director of the Human-Robot Interaction (HRI) Laboratory and the HRI Master's and PhD programs. He has over 450 peer-reviewed publications in artificial intelligence, artificial life, agent-based computing, natural language understanding, cognitive modeling, robotics, human-robot interaction, and foundations of cognitive science. His current research focuses on complex ethical AI-enabled robots with natural language interaction, problem-solving, and instruction-based learning capabilities in open worlds.