Explanations for Human-on-the-loop: A Probabilistic Model Checking Approach

Nianyu Li, Sridhar Adepu, Eunsuk Kang and David Garlan.


In Proceedings of the 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 29 June – 3 July 2020. Talk.

Online links: PDF (http://acme.able.cs.cmu.edu/pubs/uploads/pdf/SEAMS_CameraReady-8.pdf)

Abstract
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and can detect problems that the system is unaware of. One way of achieving this is by placing the human operator on the loop – i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, explanation is sometimes helpful to allow the human to understand why the system is making certain decisions and calibrate confidence from the human perspective. However, explanations come with costs in terms of delayed actions and the possibility that a human may make a bad judgement. Hence, it is not always obvious whether explanations will improve overall utility and, if so, what kinds of explanation to provide to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic adaptation approach that leverages a probabilistic reasoning technique to determine when the explanation should be used in order to improve overall system utility.
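
As a rough illustration of the trade-off the abstract describes, consider the sketch below. It is not the paper's model: the authors encode this reasoning in a probabilistic model checker, whereas the sketch reduces it to a plain expected-utility comparison. All names, probabilities, and utilities here are illustrative assumptions.

# A minimal sketch (assumed, not from the paper): explaining raises the
# probability that the human operator makes the correct intervention, but
# incurs a cost (e.g., delayed action). Explain only when doing so improves
# expected utility overall.

from dataclasses import dataclass

@dataclass
class Explanation:
    content: str    # what is shown to the operator
    effect: float   # assumed boost to P(operator decides correctly)
    cost: float     # assumed utility penalty (e.g., delay)

def expected_utility(p_correct: float, u_correct: float,
                     u_wrong: float, cost: float = 0.0) -> float:
    """Expected utility of one operator decision at a given accuracy."""
    return p_correct * u_correct + (1 - p_correct) * u_wrong - cost

def should_explain(p_base: float, expl: Explanation,
                   u_correct: float, u_wrong: float) -> bool:
    """Provide the explanation only if it raises expected utility."""
    eu_without = expected_utility(p_base, u_correct, u_wrong)
    p_with = min(1.0, p_base + expl.effect)
    eu_with = expected_utility(p_with, u_correct, u_wrong, expl.cost)
    return eu_with > eu_without

# Example: a cheap, effective explanation is worth showing.
expl = Explanation("why adaptation X was chosen", effect=0.3, cost=2.0)
print(should_explain(p_base=0.5, expl=expl,
                     u_correct=100.0, u_wrong=0.0))  # True (78 > 50)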

Keywords: Explainable Software, Model Checking, Self-adaptation.  
@InProceedings{Li:2020:SEAMS-Expl,
      AUTHOR = {Li, Nianyu and Adepu, Sridhar and Kang, Eunsuk and Garlan, David},
      TITLE = {Explanations for Human-on-the-loop: A Probabilistic Model Checking Approach},
      YEAR = {2020},
      MONTH = {29 June - 3 July},
      BOOKTITLE = {Proceedings of the 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS)},
      PDF = {http://acme.able.cs.cmu.edu/pubs/uploads/pdf/SEAMS_CameraReady-8.pdf},
      ABSTRACT = {Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and can detect problems that the system is unaware of. One way of achieving this is by placing the human operator on the loop – i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, explanation is sometimes helpful to allow the human to understand why the system is making certain decisions and calibrate confidence from the human perspective. However, explanations come with costs in terms of delayed actions and the possibility that a human may make a bad judgement. Hence, it is not always obvious whether explanations will improve overall utility and, if so, what kinds of explanation to provide to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic adaptation approach that leverages a probabilistic reasoning technique to determine when the explanation should be used in order to improve overall system utility.},
      NOTE = {Talk},
      KEYWORDS = {Explainable Software, Model Checking, Self-adaptation}
}