Institutional Repository [SANDBOX]
Technical University of Crete

Binary action search for learning continuous-action control policies

Pazis, J.; Lagoudakis, M. G.

Full Record


URI: http://purl.tuc.gr/dl/dias/92CF2868-606E-4D95-9777-A01D01FC78A0
Year: 2009
Type: Conference Full Paper
Bibliographic Citation: J. Pazis and M. G. Lagoudakis, "Binary action search for learning continuous-action control policies," in Proceedings of the 26th International Conference on Machine Learning (ICML), 2009, pp. 793–800, doi: 10.1145/1553374.1553476.

Appears in Collections

Abstract

Reinforcement Learning methods for controlling stochastic processes typically assume a small and discrete action space. While continuous action spaces are quite common in real-world problems, the most common approach still employed in practice is coarse discretization of the action space. This paper presents a novel method, called Binary Action Search, for realizing continuous-action policies by efficiently searching the entire action range through increment and decrement modifications to the values of the action variables, according to an internal binary policy defined over an augmented state space. The proposed approach essentially approximates any continuous action space to arbitrary resolution and can be combined with any discrete-action reinforcement learning algorithm for learning continuous-action policies. Binary Action Search eliminates the restrictive modification steps of Adaptive Action Modification and requires no temporal action locality in the domain. Our approach is coupled with two well-known reinforcement learning algorithms (Least-Squares Policy Iteration and Fitted Q-Iteration), and its use and properties are thoroughly investigated and demonstrated on the continuous state-action Inverted Pendulum, Double Integrator, and Car on the Hill domains.
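The search procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `binary_policy` stands in for any learned discrete-action policy (e.g. from LSPI or Fitted Q-Iteration) that, given the augmented state (state plus current action value), decides whether to increase or decrease the action; the toy policy used below is purely hypothetical.

```python
def binary_action_search(state, binary_policy, a_min, a_max, depth):
    """Refine a continuous action in [a_min, a_max] via `depth` binary
    increase/decrease decisions, reaching resolution (a_max - a_min) / 2**depth."""
    action = (a_min + a_max) / 2.0   # start at the middle of the action range
    step = (a_max - a_min) / 4.0     # first modification spans a quarter of the range
    for _ in range(depth):
        # The binary policy sees the augmented state: (state, current action value).
        if binary_policy(state, action) == +1:
            action += step           # "increment" decision
        else:
            action -= step           # "decrement" decision
        step /= 2.0                  # halve the modification: a binary search
    return action

# Toy binary policy (for demonstration only): steer the action
# toward a fixed target value inside the range [-1, 1].
target = 0.7
policy = lambda s, a: +1 if a < target else -1

a = binary_action_search(state=None, binary_policy=policy,
                         a_min=-1.0, a_max=1.0, depth=10)
```

With `depth = 10`, the returned action lies within about `(a_max - a_min) / 2**10` of the value the binary policy is steering toward, which is the sense in which the method approximates a continuous action space to arbitrary resolution.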
