
Dynamic analysis of multiagent Q-learning with ε-greedy exploration

Conference contribution
Posted on 2024-07-09, 20:30; authored by Eduardo Gomes and Ryszard Kowalczyk
The development of mechanisms to understand and model the expected behaviour of multiagent learners is becoming increasingly important as the area rapidly finds application in a variety of domains. In this paper we present a framework to model the behaviour of Q-learning agents using the ε-greedy exploration mechanism. For this, we analyse a continuous-time version of the Q-learning update rule and study how the presence of other agents and the ε-greedy mechanism affect it. We then model the problem as a system of difference equations, which is used to theoretically analyse the expected behaviour of the agents. The applicability of the framework is tested through experiments in typical games selected from the literature.
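
For a concrete picture of the learner being analysed, the sketch below runs two independent Q-learning agents with ε-greedy exploration in a repeated 2x2 matrix game. This is an illustrative sketch only, not the paper's continuous-time or difference-equation model; the payoff matrix, learning rate alpha and exploration rate epsilon are assumptions chosen for the example.

import numpy as np

# Minimal sketch: two independent Q-learners with epsilon-greedy exploration
# in a repeated 2x2 matrix game. Payoffs and parameters are illustrative
# assumptions, not values taken from the paper.

rng = np.random.default_rng(0)

# Row player's payoffs; the game is assumed symmetric (a simple coordination game).
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

alpha = 0.1      # learning rate
epsilon = 0.1    # exploration probability
T = 10_000       # number of repeated plays

Q = [np.zeros(2), np.zeros(2)]   # one Q-vector per agent (stateless game)

def eps_greedy(q):
    """Pick a random action with probability epsilon, the greedy one otherwise."""
    if rng.random() < epsilon:
        return int(rng.integers(2))
    return int(np.argmax(q))

for _ in range(T):
    a0 = eps_greedy(Q[0])
    a1 = eps_greedy(Q[1])
    # Symmetric payoffs: row player indexes (a0, a1), column player (a1, a0).
    r0 = PAYOFF[a0, a1]
    r1 = PAYOFF[a1, a0]
    # Stateless Q-learning update (no next-state bootstrap in a repeated game).
    Q[0][a0] += alpha * (r0 - Q[0][a0])
    Q[1][a1] += alpha * (r1 - Q[1][a1])

print("Agent 0 Q-values:", Q[0])
print("Agent 1 Q-values:", Q[1])

In this toy coordination game the two greedy actions typically lock onto one of the pure equilibria, while the ε-greedy noise keeps both Q-values being updated; the paper's framework is concerned with predicting exactly this kind of joint dynamics analytically.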

Available versions

PDF (Accepted manuscript)

ISBN

9781605585161

Journal title

Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), Montreal, Canada, 14-18 June 2009

Conference name

The 26th Annual International Conference on Machine Learning ICML 2009, Montreal, Canada, 14-18 June 2009

Volume

382

Pagination

7 pp

Publisher

ACM

Copyright statement

Copyright © 2009 ACM. This is the accepted manuscript of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in the Proceedings of the 26th Annual International Conference on Machine Learning (ICML 2009), Montreal, Canada, 14-18 June 2009 (Vol. 382, pp. 369-376). http://doi.acm.org/10.1145/1553374.1553422.

Language

eng
