A Probabilistic Model for Personality Trait Focused Explainability
Mohammed Alharbi, Shihong Huang and David Garlan.
In Proceedings of the 4th International Workshop on Context-aware, Autonomous and Smart Architecture (CASA 2021), co-located with the 15th European Conference on Software Architecture, Virtual (originally Växjö, Sweden), 13-17 September 2021.
Abstract
Explainability refers to the degree to which a software system's actions or solutions can be understood by humans. Giving humans the right amount of explanation at the right time is an important factor in maximizing effective collaboration between an adaptive system and humans during interaction. However, explanations come with costs, such as the time needed to give an explanation and the human's response time. Hence, it is not always clear whether explanations will improve overall system utility and, if so, how the system should provide them effectively, particularly given that different humans may benefit from different amounts and frequencies of explanation. To provide a partial basis for making such decisions, this paper defines a formal framework that incorporates human personality traits as one of the important elements guiding automated decision-making about the proper amount of explanation to give a human in order to improve overall system utility. Specifically, we use probabilistic model analysis to determine how to use explanations effectively. To illustrate our approach, we developed Grid, a virtual human-system interaction game, to represent scenarios for human-system collaboration and to demonstrate how a human's personality traits can be used as a factor that a system considers when providing appropriate explanations.
Keywords: Explainable Software.
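As a rough illustration of the trade-off the abstract describes (explanations carry a time cost, and their benefit depends on the particular human), the following Python sketch compares the expected utility of explaining versus not explaining at a single decision point. It is a minimal sketch, not the paper's model: the function expected_utility, its parameters, and the numbers in the example are hypothetical, with the influence of personality traits reduced to a single probability p_benefit that an explanation actually helps the human.

# Illustrative sketch only -- not the authors' model. It assumes a hypothetical
# single-step decision: explain or not, where the benefit of an explanation
# depends on a trait-dependent probability that the human makes use of it, and
# explaining incurs a cost (explanation time plus human response time).

def expected_utility(explain: bool,
                     p_benefit: float,          # chance the human benefits from the explanation
                     task_utility: float,       # utility gained if the task succeeds
                     p_success_plain: float,    # success probability without explanation
                     p_success_helped: float,   # success probability when the explanation helps
                     explanation_cost: float) -> float:
    """Expected utility of one human-system decision point."""
    if not explain:
        return p_success_plain * task_utility
    p_success = p_benefit * p_success_helped + (1 - p_benefit) * p_success_plain
    return p_success * task_utility - explanation_cost

# Example: a person whose (hypothetical) trait profile makes an explanation
# likely to help (p_benefit = 0.8) versus one where it rarely helps (0.2).
for p_benefit in (0.8, 0.2):
    with_exp = expected_utility(True, p_benefit, 10.0, 0.5, 0.9, 1.5)
    without = expected_utility(False, p_benefit, 10.0, 0.5, 0.9, 1.5)
    print(f"p_benefit={p_benefit}: explain={with_exp:.2f}, skip={without:.2f}")

Under these made-up numbers, explaining pays off for the first profile but not the second, which is the kind of per-person decision the paper's probabilistic analysis is meant to automate.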
@InProceedings{CASA21,
AUTHOR = {Alharbi, Mohammed and Huang, Shihong and Garlan, David},
TITLE = {A Probabilistic Model for Personality Trait Focused Explainability},
YEAR = {2021},
MONTH = {13-17 September},
BOOKTITLE = {Proceedings of the 4th International Workshop on Context-aware, Autonomous and Smart Architecture (CASA 2021), co-located with the 15th European Conference on Software Architecture},
ADDRESS = {Virtual (originally V\"axj\"o, Sweden)},
PDF = {http://acme.able.cs.cmu.edu/pubs/uploads/pdf/CASA-paper5.pdf},
ABSTRACT = {Explainability refers to the degree to which a software system's actions or solutions can be understood by humans. Giving humans the right amount of explanation at the right time is an important factor in maximizing effective collaboration between an adaptive system and humans during interaction. However, explanations come with costs, such as the time needed to give an explanation and the human's response time. Hence, it is not always clear whether explanations will improve overall system utility and, if so, how the system should provide them effectively, particularly given that different humans may benefit from different amounts and frequencies of explanation. To provide a partial basis for making such decisions, this paper defines a formal framework that incorporates human personality traits as one of the important elements guiding automated decision-making about the proper amount of explanation to give a human in order to improve overall system utility. Specifically, we use probabilistic model analysis to determine how to use explanations effectively. To illustrate our approach, we developed Grid, a virtual human-system interaction game, to represent scenarios for human-system collaboration and to demonstrate how a human's personality traits can be used as a factor that a system considers when providing appropriate explanations.},
KEYWORDS = {Explainable Software} }