Institutional Repository [SANDBOX]
Technical University of Crete


Directed exploration of policy space using support vector classifiers

Ioannis Rexakis, Michael Lagoudakis

Full Record


URI: http://purl.tuc.gr/dl/dias/7D85C6DD-512E-4EBF-8560-2C809AE30E19
Year: 2011
Type: Full Conference Publication
Bibliographic Citation: I. Rexakis and M. G. Lagoudakis, "Directed exploration of policy space using support vector classifiers," in 2011 IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp. 112-119. doi: 10.1109/ADPRL.2011.5967389

Abstract

Good policies in reinforcement learning problems typically exhibit significant structure. Several recent learning approaches based on the approximate policy iteration scheme suggest the use of classifiers for capturing this structure and representing policies compactly. Nevertheless, the space of possible policies, even under such structured representations, is huge and needs to be explored carefully to avoid computationally expensive simulations (rollouts) needed to probe the improved policy and obtain training samples at various points over the state space. Regarding rollouts as a scarce resource, we propose a method for directed exploration of policy space using support vector classifiers. We use a collection of binary support vector classifiers to represent policies, whereby each of these classifiers corresponds to a single action and captures the parts of the state space where this action dominates over the other actions. After an initial training phase with rollouts uniformly distributed over the entire state space, we use the support vectors of the classifiers to identify the critical parts of the state space with boundaries between different action choices in the represented policy. The policy is subsequently improved by probing the state space only at points around the support vectors that are distributed perpendicularly to the separating border. This directed focus on critical parts of the state space iteratively leads to the gradual refinement and improvement of the underlying policy and delivers excellent control policies in only a few iterations with a conservative use of rollouts. We demonstrate the proposed approach on three standard reinforcement learning domains: inverted pendulum, mountain car, and acrobot.
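The sampling idea described above (probing the state space around support vectors, perpendicularly to the separating border) can be sketched roughly as follows. This is not the authors' implementation: it is a minimal illustration assuming a linear SVC (so the boundary normal is just the weight vector), a synthetic 2-D state space in place of rollout-labeled states, and hypothetical names such as `probe_points_near_boundary`.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy "state space": 2-D states labeled by the dominant action.
# A synthetic boundary x0 + x1 > 0 stands in for rollout-derived labels.
states = rng.uniform(-1.0, 1.0, size=(200, 2))
labels = (states[:, 0] + states[:, 1] > 0).astype(int)

# One binary classifier per action; with two actions a single SVC suffices.
clf = SVC(kernel="linear").fit(states, labels)

def probe_points_near_boundary(clf, n_per_sv=5, step=0.1, rng=rng):
    """Generate new probe states around each support vector, displaced
    along the boundary normal (the weight vector of a linear SVC)."""
    w = clf.coef_[0]
    normal = w / np.linalg.norm(w)
    probes = []
    for sv in clf.support_vectors_:
        # Offsets sampled perpendicularly to the separating hyperplane.
        offsets = rng.normal(scale=step, size=n_per_sv)
        probes.extend(sv + o * normal for o in offsets)
    return np.asarray(probes)

probes = probe_points_near_boundary(clf)

# In the full method, each probe state would be evaluated with rollouts
# to retrain the classifiers; here we only check that the probes cluster
# near the decision boundary (small |decision_function| values).
probe_margins = np.abs(clf.decision_function(probes))
random_margins = np.abs(clf.decision_function(states))
```

With an RBF or other nonlinear kernel, the perpendicular direction would instead have to be approximated locally, e.g. from the gradient of the decision function at each support vector.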
