Explaining Architectural Design Tradeoff Spaces: a Machine Learning Approach
Javier Cámara, Mariana Silva, David Garlan, and Bradley Schmerl.
In Proceedings of the 15th European Conference on Software Architecture, Virtual (originally Växjö, Sweden), 13-17 September 2021.
Abstract
In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem.
Moreover, providing guarantees about the satisfaction of those requirements when systems are subject to uncertain environments is even more challenging.
While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches are still limited because they do not explicitly link design decisions to satisfaction of quality requirements.
Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees.
In this paper, we present an approach to analyzing architectural design spaces that addresses these limitations and provides a basis to enable the explainability of design tradeoffs.
Our approach combines dimensionality reduction techniques employed in machine learning pipelines with quantitative verification to enable architects to understand how design decisions contribute to the satisfaction of strict quantitative guarantees under uncertainty across the design space.
Our results show the feasibility of the approach in two case studies and provide evidence that dimensionality reduction is a viable technique for facilitating comprehension of tradeoffs in poorly understood design spaces.
Keywords: Explainable Software, Machine Learning, Self-adaptation.
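The core idea of combining quantitative verification results with dimensionality reduction can be loosely illustrated as follows. This is a minimal sketch, not the authors' implementation: it assumes synthetic data in which each row of a matrix is one architectural configuration (a vector of design decisions) paired with a verified quality metric, and it uses plain PCA via SVD to expose which decisions co-vary with the quality guarantee.

```python
import numpy as np

# Hypothetical design space (illustration only, not the paper's data):
# each row is one architectural configuration; the first 5 columns are
# binary design decisions, the last column is a quality metric as might
# be produced by quantitative verification (e.g., a model checker).
rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, size=(200, 5)).astype(float)
# Assume, for illustration, decision 0 strongly drives the guarantee.
quality = 0.8 * decisions[:, 0] + 0.1 * rng.random(200)

# Standardize columns, then apply PCA via SVD on the centered matrix.
X = np.column_stack([decisions, quality])
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Loadings of the first principal component: a design decision whose
# loading matches the quality column's in sign and magnitude co-varies
# with satisfaction of the guarantee across the design space.
pc1 = Vt[0]
print("PC1 loadings (decisions 0-4, quality):", np.round(pc1, 2))
```

In this toy setup, the first component's loadings single out decision 0 as the dominant driver of the quality metric, which is the kind of decision-to-requirement link the abstract describes recovering from verified design spaces.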