|Evaluating Trade-Offs of Human Involvement in Self-Adaptive Systems|
Gabriel A. Moreno and
In Ivan Mistrik, Nour Ali, John Grundy, Rick Kazman, and Bradley Schmerl, editors, Managing Trade-Offs in Self-Adaptive Systems, Elsevier, September 2016.
|Software systems are increasingly called upon to autonomously manage their goals in changing contexts and environments, and under evolving requirements. In some circumstances, such systems cannot be fully automated and must instead cooperate with human operators to maintain and adapt themselves. Furthermore, there are times when a choice must be made between performing a manual or an automated repair. Involving operators in self-adaptation should itself be adaptive, considering aspects such as the training, attention, and ability of operators. Not only do these aspects vary from person to person, but they may also change for the same person over time. Such variability makes the choice of whether to involve humans non-obvious. A self-adaptive system should therefore trade off whether to involve operators, weighing these aspects against the other business qualities it is attempting to achieve. In this chapter, we identify
the various roles that operators can perform in cooperating with self-adaptive systems. We focus on humans as effectors, performing tasks that are difficult or infeasible to automate. We describe how we modified our self-adaptive framework, Rainbow, to involve operators in this way, which required choosing suitable human models and integrating them into Rainbow's existing utility-based trade-off decision models. We use probabilistic modeling and quantitative verification to analyze the trade-offs of involving humans in adaptation, and we complement this analysis with experiments showing how different business preferences and modalities of human involvement can lead to different outcomes.|
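The core decision the abstract describes, whether to involve an operator in a repair, can be viewed as comparing expected utilities of the two options. The following is a minimal illustrative sketch of that idea, not the chapter's actual model: all function names, probabilities, and utility values below are hypothetical, and the real approach uses probabilistic model checking of stochastic games rather than a closed-form comparison.

```python
# Hypothetical sketch of a utility-based choice between a human-performed
# and an automated repair. Numbers are illustrative only.

def expected_utility(success_prob, utility_success, utility_failure, cost):
    """Expected utility of a repair option, net of its cost."""
    return (success_prob * utility_success
            + (1 - success_prob) * utility_failure
            - cost)

# Hypothetical operator model: a trained, attentive operator succeeds more
# often but is slower/costlier to involve than the automated adaptation.
human = expected_utility(success_prob=0.9, utility_success=100,
                         utility_failure=-50, cost=20)
auto = expected_utility(success_prob=0.7, utility_success=100,
                        utility_failure=-50, cost=5)

best = "human" if human > auto else "automated"
print(best)  # → human
```

Changing the operator's success probability (e.g., to model fatigue or inattention) flips the decision, which is why the abstract argues the involvement choice should itself be adaptive.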
Keywords: Human-in-the-loop, Self-adaptation, Stochastic games.