Towards Explainable Multi-Objective Probabilistic Planning
Roykrong Sukkerd, Reid Simmons and David Garlan.
In Proceedings of the 4th International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS'18), Gothenburg, Sweden, 27 May 2018.
Abstract
Use of multi-objective probabilistic planning to synthesize
behavior of CPSs can play an important role in engineering
systems that must self-optimize for multiple quality objectives
and operate under uncertainty. However, the reasoning
behind automated planning is opaque to end-users. They
may not understand why a particular behavior is generated,
and therefore not be able to calibrate their confidence in the
systems working properly. To address this problem, we propose
a method to automatically generate a verbal explanation
of multi-objective probabilistic planning that explains why a
particular behavior is generated on the basis of the optimization
objectives. Our explanation method involves describing
objective values of a generated behavior and explaining
any tradeoff made to reconcile competing objectives. We contribute:
(i) an explainable planning representation that facilitates
explanation generation, and (ii) an algorithm for generating
contrastive justification as explanation for why a generated
behavior is best with respect to the planning objectives.
We demonstrate our approach on a mobile robot case study.
Keywords: Explainable Software, Self-adaptation.
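To give a rough flavor of the kind of contrastive, tradeoff-based justification described in the abstract, the sketch below compares a chosen behavior's expected per-objective costs against an alternative and renders the tradeoff as text. This is a generic illustration only, not the paper's representation or algorithm; the function name, objective names, and cost values are all hypothetical.

# Hypothetical sketch (not from the paper): render a contrastive justification
# from expected per-objective costs of a chosen behavior vs. an alternative.

def contrastive_justification(chosen, alternative, objectives):
    """Explain why `chosen` is preferred over `alternative` by naming the
    objectives it improves and the ones it concedes (lower cost is better)."""
    better = [o for o in objectives if chosen[o] < alternative[o]]
    worse = [o for o in objectives if chosen[o] > alternative[o]]
    lines = []
    for o in better:
        lines.append(f"The selected behavior has lower expected {o} "
                     f"({chosen[o]:.2f} vs. {alternative[o]:.2f}).")
    for o in worse:
        lines.append(f"It concedes some {o} ({chosen[o]:.2f} vs. "
                     f"{alternative[o]:.2f}) to achieve this.")
    return " ".join(lines) if lines else "Both behaviors have the same expected costs."

if __name__ == "__main__":
    # Made-up example objectives and values for a mobile-robot-style scenario.
    objectives = ["travel time", "collision risk", "intrusiveness"]
    chosen = {"travel time": 4.2, "collision risk": 0.05, "intrusiveness": 1.0}
    alternative = {"travel time": 3.1, "collision risk": 0.20, "intrusiveness": 1.0}
    print(contrastive_justification(chosen, alternative, objectives))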
@InProceedings{SEsCPS:Explanation:2018,
AUTHOR = {Sukkerd, Roykrong and Simmons, Reid and Garlan, David},
TITLE = {Towards Explainable Multi-Objective Probabilistic Planning},
YEAR = {2018},
MONTH = {27 May},
BOOKTITLE = {Proceedings of the 4th International Workshop on Software Engineering for Smart Cyber-Physical Systems (SEsCPS'18)},
ADDRESS = {Gothenburg, Sweden},
PDF = {http://acme.able.cs.cmu.edu/pubs/uploads/pdf/ICSE-WS-SEsCPS-13.pdf},
ABSTRACT = {Use of multi-objective probabilistic planning to synthesize
behavior of CPSs can play an important role in engineering
systems that must self-optimize for multiple quality objectives
and operate under uncertainty. However, the reasoning
behind automated planning is opaque to end-users. They
may not understand why a particular behavior is generated,
and therefore not be able to calibrate their confidence in the
systems working properly. To address this problem, we propose
a method to automatically generate a verbal explanation
of multi-objective probabilistic planning that explains why a
particular behavior is generated on the basis of the optimization
objectives. Our explanation method involves describing
objective values of a generated behavior and explaining
any tradeoff made to reconcile competing objectives. We contribute:
(i) an explainable planning representation that facilitates
explanation generation, and (ii) an algorithm for generating
contrastive justification as explanation for why a generated
behavior is best with respect to the planning objectives.
We demonstrate our approach on a mobile robot case study.},
KEYWORDS = {Explainable Software, Self-adaptation} }