A Probabilistic Model Checking Approach to Self-Adapting Machine Learning Systems
Maria Casimiro, David Garlan, Javier Cámara, Luis Rodrigues and Paolo Romano.
In Proceedings of the Third International Workshop on Automated and verifiable Software sYstem DEvelopment (ASYDE), 6 December 2021.
Online links: PDF: http://acme.able.cs.cmu.edu/pubs/uploads/pdf/ASYDE-CR.pdf
Abstract
Machine Learning (ML) is increasingly used in domains such as cyber-physical systems and enterprise systems. These systems typically operate in non-static environments, prone to unpredictable changes that can adversely impact the accuracy of the ML models, which are usually in the critical path of the systems. Mispredictions of ML components can thus affect other components in the system, and ultimately impact overall system utility in non-trivial ways. From this perspective, self-adaptation techniques appear as a natural solution to reason about how to react to environment changes via adaptation tactics that can potentially improve the quality of ML models (e.g., model retrain), and ultimately maximize system utility. However, adapting ML components is non-trivial, since adaptation tactics have costs and it may not be clear in a given context whether the benefits of ML adaptation outweigh its costs. In this paper, we present a formal probabilistic framework, based on model checking, that incorporates the essential governing factors for reasoning at an architectural level about adapting ML classifiers in a system context. The proposed framework can be used in a self-adaptive system to create adaptation strategies that maximize rewards of a multidimensional utility space. Resorting to a running example from the enterprise systems domain, we show how the proposed framework can be employed to determine the gains achievable via ML adaptation and to find the boundary that renders adaptation worthwhile.
Keywords: Machine Learning, Model Checking, Self-adaptation.
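As a rough illustration of the cost/benefit question the abstract raises (whether the benefits of adapting an ML component outweigh its costs), the sketch below compares the expected utility of retraining a degraded classifier against doing nothing. It is not taken from the paper: all function names, reward values, and costs are hypothetical assumptions, and the framework itself encodes this reasoning as a formal model for a probabilistic model checker rather than as ad-hoc code.

    # Hypothetical sketch of the retrain-or-not trade-off; all numbers are assumptions.

    def expected_utility(accuracy, reward_correct, penalty_wrong, horizon):
        """Expected reward accumulated over `horizon` predictions."""
        per_step = accuracy * reward_correct + (1 - accuracy) * penalty_wrong
        return horizon * per_step

    def retrain_worthwhile(current_acc, post_retrain_acc, retrain_cost,
                           reward_correct=1.0, penalty_wrong=-2.0, horizon=1000):
        """True if the expected gain from retraining outweighs its cost."""
        u_nop = expected_utility(current_acc, reward_correct, penalty_wrong, horizon)
        u_retrain = expected_utility(post_retrain_acc, reward_correct,
                                     penalty_wrong, horizon) - retrain_cost
        return u_retrain > u_nop

    # Example: accuracy drifted from 0.9 to 0.8; retraining pays off only while
    # its cost stays below the expected utility gain over the horizon.
    print(retrain_worthwhile(current_acc=0.8, post_retrain_acc=0.9, retrain_cost=50))

Varying the retrain cost (or the expected post-retrain accuracy) in this toy calculation locates the boundary at which adaptation stops being worthwhile, which is the kind of boundary the paper derives formally.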
@InProceedings{ASYDE2021,
AUTHOR = {Casimiro, Maria and Garlan, David and C\'{a}mara, Javier and Rodrigues, Luis and Romano, Paolo},
TITLE = {A Probabilistic Model Checking Approach to Self-Adapting Machine Learning Systems},
YEAR = {2021},
MONTH = {6 December},
BOOKTITLE = {Proceedings of the Third International Workshop on Automated and verifiable Software sYstem DEvelopment (ASYDE)},
PDF = {http://acme.able.cs.cmu.edu/pubs/uploads/pdf/ASYDE-CR.pdf},
ABSTRACT = {Machine Learning (ML) is increasingly used in domains such
as cyber-physical systems and enterprise systems. These systems typically operate in non-static environments, prone to unpredictable changes that can adversely impact the accuracy of the ML models, which are usually in the critical path of the systems. Mispredictions of ML components
can thus affect other components in the system, and ultimately impact overall system utility in non-trivial ways. From this perspective, self-adaptation techniques appear as a natural solution to reason about how to react to environment changes via adaptation tactics that can
potentially improve the quality of ML models (e.g., model retrain), and ultimately maximize system utility. However, adapting ML components is non-trivial, since adaptation tactics have costs and it may not be clear in a given context whether the benefits of ML adaptation outweigh its
costs. In this paper, we present a formal probabilistic framework, based on model checking, that incorporates the essential governing factors for reasoning at an architectural level about adapting ML classifiers in a
system context. The proposed framework can be used in a self-adaptive system to create adaptation strategies that maximize rewards of a multidimensional utility space. Resorting to a running example from the enterprise
systems domain, we show how the proposed framework can be
employed to determine the gains achievable via ML adaptation and to find the boundary that renders adaptation worthwhile.},
KEYWORDS = {Machine Learning, Model Checking, Self-adaptation} }