Self-Adapting Machine Learning-based Systems via a Probabilistic Model Checking Framework

Maria Casimiro, Diogo Soares, David Garlan, Luis Rodrigues and Paolo Romano.


In ACM Transactions on Autonomous and Adaptive Systems, March 2024.

Online links: PDF (http://acme.able.cs.cmu.edu/pubs/uploads/pdf/ACSOS_TAAS_journal_extension_CR.pdf)

Abstract
This paper focuses on the problem of optimizing the system utility of Machine Learning (ML)-based systems in the presence of ML mispredictions. This is achieved via the use of self-adaptive systems and through the execution of adaptation tactics, such as model retraining, which operate at the level of individual ML components. To address this problem, we propose a probabilistic modeling framework that reasons about the cost/benefit trade-offs associated with adapting ML components. The key idea of the proposed approach is to decouple the problems of estimating (i) the expected performance improvement after adaptation and (ii) the impact of ML adaptation on overall system utility. We apply the proposed framework to engineer a self-adaptive ML-based fraud-detection system, which we evaluate using a publicly available, real fraud-detection dataset. We initially consider a scenario in which information on the model's quality is immediately available. Next, we relax this assumption by integrating (and extending) state-of-the-art techniques for estimating the model's quality into the proposed framework. We show that by predicting the system utility stemming from retraining an ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to the optimal ones than baselines such as periodic or reactive retraining.

Keywords: Machine Learning, Model Checking, Self-adaptation.  
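To make the cost/benefit reasoning described in the abstract concrete, the sketch below shows one way the two decoupled estimates could feed a retrain/no-retrain decision. This is a minimal illustration only, not the authors' implementation (the paper encodes this reasoning in a probabilistic model checking framework); the names (RetrainEstimate, should_retrain) and the utility numbers are assumptions made up for the example.

# Illustrative sketch only -- NOT the paper's framework, which uses a
# probabilistic model checker over formal adaptation models.
# All names and numbers below are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class RetrainEstimate:
    """Estimated effect of the 'retrain' tactic on one ML component."""
    expected_accuracy_gain: float   # step (i): predicted component improvement
    retrain_cost: float             # cost of the tactic, in utility units


def expected_utility_gain(estimate: RetrainEstimate,
                          utility_per_accuracy_point: float) -> float:
    """Step (ii): map the component-level improvement to system utility."""
    return estimate.expected_accuracy_gain * utility_per_accuracy_point


def should_retrain(estimate: RetrainEstimate,
                   utility_per_accuracy_point: float) -> bool:
    """Adapt only when the predicted utility gain outweighs the tactic's cost."""
    return expected_utility_gain(estimate, utility_per_accuracy_point) > estimate.retrain_cost


if __name__ == "__main__":
    # Hypothetical fraud-detection scenario: two extra accuracy points are
    # assumed to be worth 50 utility units each; retraining costs 60 units.
    est = RetrainEstimate(expected_accuracy_gain=2.0, retrain_cost=60.0)
    print(should_retrain(est, utility_per_accuracy_point=50.0))  # True

Decoupling the two estimates mirrors the paper's key idea: the component-level improvement estimator can be replaced or extended (for example, with the model-quality estimation techniques mentioned in the abstract) without changing how the resulting system utility is evaluated.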
@Article{Casimiro:TAAS:2024,
      AUTHOR = {Casimiro, Maria and Soares, Diogo and Garlan, David and Rodrigues, Luis and Romano, Paolo},
      TITLE = {Self-Adapting Machine Learning-based Systems via a Probabilistic Model Checking Framework},
      YEAR = {2024},
      MONTH = {March},
      JOURNAL = {ACM Transactions on Autonomous and Adaptive Systems},
      PDF = {http://acme.able.cs.cmu.edu/pubs/uploads/pdf/ACSOS_TAAS_journal_extension_CR.pdf},
      ABSTRACT = {This paper focuses on the problem of optimizing the system utility of Machine Learning (ML)-based systems in the presence of ML mispredictions. This is achieved via the use of self-adaptive systems and through the execution of adaptation tactics, such as model retraining, which operate at the level of individual ML components. To address this problem, we propose a probabilistic modeling framework that reasons about the cost/benefit trade-offs associated with adapting ML components. The key idea of the proposed approach is to decouple the problems of estimating (i) the expected performance improvement after adaptation and (ii) the impact of ML adaptation on overall system utility. We apply the proposed framework to engineer a self-adaptive ML-based fraud-detection system, which we evaluate using a publicly available, real fraud-detection dataset. We initially consider a scenario in which information on the model's quality is immediately available. Next, we relax this assumption by integrating (and extending) state-of-the-art techniques for estimating the model's quality into the proposed framework. We show that by predicting the system utility stemming from retraining an ML component, the probabilistic model checker can generate adaptation strategies that are significantly closer to the optimal ones than baselines such as periodic or reactive retraining.},
      KEYWORDS = {Machine Learning, Model Checking, Self-adaptation}
}