A Framework for Testimony-Infused Automated Adjudicative Dynamic Multi-Agent Reasoning in Ethically Charged Scenarios
Authors: Brandon Rozek, Michael Giancola, Selmer Bringsjord, Naveen Sundar Govindarajulu
Conference: International Conference on Robot Ethics and Standards
Publication Date: 2022/07
Abstract: In “high stakes” multi-agent decision-making under uncertainty, testimonial evidence flows from “witness” agents to “adjudicator” agents, where the latter must rationally fix belief and knowledge, and act accordingly. The testimonies provided may be incomplete or even deceptive, and in many domains are offered in a context that includes other kinds of evidence, some of which may be incompatible with these testimonies. Therefore, before believing a testimony and on that basis moving forward, the adjudicator must systematically reason to a suitable strength of belief, in a manner that takes account of said context, and globally judge the core issue at hand. To further complicate matters, since the relevant information perceived by the adjudicator changes over time, adjudication is a nonmonotonic/defeasible affair: adjudicators must dynamically strengthen, weaken, defeat, and reinstate belief and knowledge. Toward the engineering of artificial agents capable of handling these representation-and-reasoning demands arising from testimonial evidence in multi-agent decision-making, we explore herein extensions to one of our prior cognitive calculi: the Inductive Cognitive Event Calculus (IDCEC). We ground these extensions in a recent, tragic drone-strike scenario that unfolded in Kabul, Afghanistan, in the hope that use by humans of our brand of logic-based AI in future such scenarios will save human lives.
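The abstract's picture of an adjudicator that strengthens, weakens, defeats, and reinstates belief as testimony arrives can be illustrated with a minimal toy sketch. The Evidence and Adjudicator names, the signed integer strength scale, and the "pro and con evidence offset each other" rule below are illustrative assumptions only; they are not the IDCEC extensions the paper develops.

```python
# Toy sketch (not the authors' IDCEC): an adjudicator fixes a strength of
# belief for a proposition from testimonial evidence, and revises it
# defeasibly as evidence is received or withdrawn.
from dataclasses import dataclass

# Assumed, purely illustrative strength scale; negative values count
# against the proposition.
STRENGTHS = {-3: "certainly not", -2: "evidently not", -1: "probably not",
             0: "counterbalanced", 1: "probable", 2: "evident", 3: "certain"}

@dataclass(frozen=True)
class Evidence:
    source: str    # e.g. a witness agent or a sensor feed
    claim: str     # the proposition the evidence bears on
    strength: int  # signed strength on the toy scale above

class Adjudicator:
    def __init__(self):
        self.evidence: list[Evidence] = []

    def receive(self, ev: Evidence) -> None:
        self.evidence.append(ev)

    def withdraw(self, source: str, claim: str) -> None:
        # Retracting a defeater can reinstate a previously defeated belief.
        self.evidence = [e for e in self.evidence
                         if not (e.source == source and e.claim == claim)]

    def adjudicate(self, claim: str) -> int:
        # The strongest supporting and strongest opposing pieces of evidence
        # offset each other; the residual signed value is the belief the
        # adjudicator currently fixes (0 = counterbalanced).
        pro = max((e.strength for e in self.evidence
                   if e.claim == claim and e.strength > 0), default=0)
        con = min((e.strength for e in self.evidence
                   if e.claim == claim and e.strength < 0), default=0)
        return pro + con

adj = Adjudicator()
adj.receive(Evidence("witness_1", "target_is_hostile", 2))
adj.receive(Evidence("sensor_feed", "target_is_hostile", -2))
print(STRENGTHS[adj.adjudicate("target_is_hostile")])  # counterbalanced
adj.withdraw("witness_1", "target_is_hostile")
print(STRENGTHS[adj.adjudicate("target_is_hostile")])  # evidently not
```

The sketch only shows the nonmonotonic flavor of the problem: a testimony raises belief, incompatible evidence defeats it, and withdrawing the testimony reinstates the opposing conclusion. The paper's actual treatment is logic-based and proof-theoretic rather than the simple numeric aggregation assumed here.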