Explanations for Human-on-the-loop: A Probabilistic Model Checking Approach
Nianyu Li,
Sridhar Adepu,
Eunsuk Kang and
David Garlan.
In Proceedings of the 15th International Symposium on Software Engineering for Adaptive and Self-managing Systems (SEAMS), 29 June - 3 July 2020. Talk.
Abstract
Many self-adaptive systems benefit from human involvement and
oversight, where a human operator can provide expertise not available
to the system and can detect problems that the system is
unaware of. One way of achieving this is by placing the human
operator on the loop – i.e., providing supervisory oversight and
intervening in the case of questionable adaptation decisions. To
make such interaction effective, explanation is sometimes helpful to
allow the human to understand why the system is making certain
decisions and to calibrate their confidence in those decisions.
However, explanations come with costs in terms of delayed actions
and the possibility that a human may make a bad judgement. Hence,
it is not always obvious whether explanations will improve overall
utility and, if so, what kinds of explanation to provide to the operator.
In this work, we define a formal framework for reasoning
about explanations of adaptive system behaviors and the conditions
under which they are warranted. Specifically, we characterize
explanations in terms of explanation content, effect, and cost. We
then present a dynamic adaptation approach that leverages a probabilistic
reasoning technique to determine when the explanation
should be used in order to improve overall system utility.
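As a rough, illustrative sketch (the notation below is assumed for exposition and is not the paper's formalism), the decision of whether to explain can be framed as an expected-utility comparison, where p_e and p_0 denote the probabilities of a correct operator judgement with and without an explanation, and c_d denotes the cost of the delay incurred by explaining:

% Illustrative (assumed) expected-utility comparison; not taken from the paper.
\[
  \mathbb{E}[U_{\text{explain}}] = p_e\, U_{\text{good}} + (1 - p_e)\, U_{\text{bad}} - c_d,
  \qquad
  \mathbb{E}[U_{\text{none}}] = p_0\, U_{\text{good}} + (1 - p_0)\, U_{\text{bad}}.
\]
Under these assumptions, providing the explanation is worthwhile only when
\( \mathbb{E}[U_{\text{explain}}] > \mathbb{E}[U_{\text{none}}] \),
i.e., when \( (p_e - p_0)(U_{\text{good}} - U_{\text{bad}}) > c_d \).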
Keywords: Explainable Software, Model Checking, Self-adaptation.