Institutional Repository
Technical University of Crete

Efficient reinforcement learning in adversarial games

Lagoudakis Michael, Skoulakis Ioannis


URI: http://purl.tuc.gr/dl/dias/3FB923DA-4C00-4B8D-B671-06DDC7E38ACF
Year: 2012
Type of Item: Conference Full Paper
Bibliographic Citation: I. Skoulakis and M. G. Lagoudakis, "Efficient Reinforcement Learning in Adversarial Games," in 2012 IEEE International Conference on Tools with Artificial Intelligence (ICTAI), pp. 704-711. doi:10.1109/ICTAI.2012.100
Summary

The ability to learn is critical for agents designed to compete in a variety of two-player, turn-taking, tactical adversarial games, such as Backgammon, Othello/Reversi, Chess, and Hex. The mainstream approach to learning in such games consists of updating a state evaluation function, usually in a Temporal Difference (TD) sense, either under the MiniMax optimality criterion or through optimization against a specific opponent. However, this approach is limited by several factors: (a) updates to the evaluation function are incremental, (b) stored samples from past games cannot be utilized, and (c) the quality of each update depends on the current evaluation function due to bootstrapping. In this paper, we present a learning approach based on the Least-Squares Policy Iteration (LSPI) algorithm that overcomes these limitations by focusing on learning a state-action evaluation function. The key advantages of the proposed approach are that the agent can make batch updates to the evaluation function with any collection of samples, can utilize samples from past games, and can make updates that do not depend on the current evaluation function, since there is no bootstrapping. We demonstrate the efficiency of the LSPI agent over the TD agent in the classical board game of Othello/Reversi.
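The batch update at the heart of LSPI (the LSTDQ solve) can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the names (`lstdq`, `phi`, `policy`) and the regularization term are hypothetical, and it assumes a linear state-action value function Q(s, a) ≈ wᵀφ(s, a) fitted from a stored batch of transitions in a single least-squares solve, with no dependence on any previous weight estimate.

```python
import numpy as np

def lstdq(samples, phi, policy, k, gamma=0.99):
    """One LSTDQ solve: fit weights w of a linear Q-function from a batch.

    samples: iterable of (s, a, r, s_next, done) transitions (e.g. stored games)
    phi:     feature map, phi(s, a) -> length-k numpy vector
    policy:  policy being evaluated, policy(s) -> action
    k:       number of features
    """
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, a, r, s_next, done in samples:
        f = phi(s, a)
        # Next-state features come from the fixed policy under evaluation,
        # not from a current value estimate -- no bootstrapping on learned values.
        f_next = np.zeros(k) if done else phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    # Small ridge term (hypothetical) to keep the system well-conditioned.
    return np.linalg.solve(A + 1e-6 * np.eye(k), b)
```

Because the solve consumes an arbitrary batch of transitions, samples stored from past games can be reused at every policy-iteration step, in contrast to an incremental TD update whose quality depends on the current evaluation function.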
