
Building autonomic systems using collaborative reinforcement learning

Published online by Cambridge University Press:  19 October 2006

JIM DOWLING, RAYMOND CUNNINGHAM, EOIN CURRAN and VINNY CAHILL
Affiliation:
Distributed Systems Group, Department of Computer Science, Trinity College, Dublin

Abstract

This paper presents Collaborative Reinforcement Learning (CRL), a coordination model for online system optimization in decentralized multi-agent systems. In CRL, a system optimization problem is represented as a set of discrete optimization problems, each of whose solution costs is minimized by model-based reinforcement-learning agents collaborating on its solution. CRL systems can be built to provide autonomic behaviours such as optimizing system performance in an unpredictable environment and adapting to partial failures. We evaluate CRL using an ad hoc routing protocol that optimizes system routing performance in an unpredictable network environment.
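To make the coordination idea concrete, the following is a minimal illustrative sketch, not the paper's actual CRL algorithm: agents (here, network nodes) each maintain local cost estimates for solving a discrete optimization problem (delivering a packet), and collaborate by advertising their estimated minimum cost to neighbours, who fold it into their own estimates. All names (`Node`, `estimated_cost`, `update`) and the toy topology are assumptions made for this example.

```python
class Node:
    """One agent in a decentralized routing system (illustrative only)."""

    def __init__(self, name, neighbours):
        self.name = name
        self.neighbours = neighbours            # {neighbour_name: link_cost}
        # q[n] estimates the total cost of delivering via neighbour n
        self.q = {n: 0.0 for n in neighbours}

    def estimated_cost(self):
        """Advertised minimum delivery-cost estimate (0 at the destination)."""
        return min(self.q.values()) if self.q else 0.0

    def update(self, neighbour, link_cost, advertised, alpha=0.5):
        """Collaborative update: fold the neighbour's advertised estimate
        into our own cost estimate for routing via that neighbour."""
        target = link_cost + advertised
        self.q[neighbour] += alpha * (target - self.q[neighbour])


# A tiny topology: a - b - dest (cost 1 per hop), plus a direct a - dest
# link of cost 5. Collaboration should teach a to prefer the b route.
nodes = {
    "dest": Node("dest", {}),
    "b": Node("b", {"dest": 1.0}),
    "a": Node("a", {"b": 1.0, "dest": 5.0}),
}

# Repeated local exchanges: each node refreshes its estimate for every
# neighbour using that neighbour's currently advertised minimum cost.
for _ in range(20):
    for node in nodes.values():
        for nb, link_cost in node.neighbours.items():
            node.update(nb, link_cost, nodes[nb].estimated_cost())

best = min(nodes["a"].q, key=nodes["a"].q.get)
print(best, round(nodes["a"].q[best], 2))   # a converges to routing via b
```

The key property the sketch shares with the model described in the abstract is that optimization is driven entirely by local state and neighbour advertisements, with no global view of the network.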

Type
Research Article
Copyright
© 2006 Cambridge University Press
