The work titled "Deep reinforcement learning with implicit imitation for lane-free autonomous driving" by Chrysomallis Iason, Troullinos Dimitrios, Chalkiadakis Georgios, Papamichail Ioannis, Papageorgiou Markos is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.
Bibliographic Citation
I. Chrysomallis, D. Troullinos, G. Chalkiadakis, I. Papamichail and M. Papageorgiou, “Deep reinforcement learning with implicit imitation for lane-free autonomous driving,” in ECAI 2023 - Proc. of the 26th European Conference on Artificial Intelligence, vol. 372, Frontiers in Artificial Intelligence and Applications, K. Gal, A. Nowé, G. J. Nalepa, R. Fairstein, R. Rădulescu, Eds., Amsterdam, The Netherlands: IOS Press, 2023, pp. 461-468, doi: 10.3233/FAIA230304.
https://doi.org/10.3233/FAIA230304
Implicit imitation assumes that learning agents observe only the state transitions of an agent they use as a mentor, and attempt to recreate them based on their own abilities and knowledge of their environment. In this paper, we put forward a deep implicit imitation Q-network (DIIQN) model, which incorporates ideas from three well-known Deep Q-Network (DQN) variants. As such, we enable a novel implicit imitation method for online, model-free deep reinforcement learning. Our thorough experimentation in the complex environment of the emerging lane-free traffic paradigm verifies the benefits of our approach. Specifically, we show that deep implicit imitation RL dramatically accelerates the learning process when compared to a “vanilla” DQN method; and, unlike explicit imitation reinforcement learning, it is able to outperform mentor performance without resorting to additional information, such as the mentor’s actions.
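To make the implicit-imitation idea sketched in the abstract concrete, the snippet below shows one possible way a DQN-style loss could be augmented with a term built only from a mentor's observed state transitions, without access to the mentor's actions. This is a minimal illustrative sketch, not the paper's DIIQN model (the abstract does not specify its architecture or loss); the network sizes, the `mentor_weight` coefficient, and the exact form of the mentor term are all assumptions introduced here for illustration.

```python
"""Illustrative sketch: a DQN TD loss plus an implicit-imitation term that
uses only the mentor's observed (state, reward, next_state) transitions.
NOT the paper's DIIQN; all names and hyperparameters are assumptions."""
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)


def loss_with_implicit_imitation(q_net, target_net, own_batch, mentor_batch,
                                 gamma: float = 0.99, mentor_weight: float = 0.5):
    # Standard one-step TD loss on the learner's own transitions.
    s, a, r, s_next, done = own_batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        td_target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    own_loss = F.smooth_l1_loss(q_sa, td_target)

    # Implicit-imitation term: the mentor's actions are unobserved, so we only
    # nudge the learner's state value V(s_m) = max_a Q(s_m, a) toward the value
    # implied by the mentor's observed state transition.
    s_m, r_m, s_m_next = mentor_batch
    v_m = q_net(s_m).max(dim=1).values
    with torch.no_grad():
        mentor_target = r_m + gamma * target_net(s_m_next).max(dim=1).values
        # Only pull the value upward where the mentor's transition promises
        # more than the learner's current estimate.
        mentor_target = torch.maximum(mentor_target, v_m)
    mentor_loss = F.smooth_l1_loss(v_m, mentor_target)

    return own_loss + mentor_weight * mentor_loss
```

In this sketch the mentor term only ever raises the learner's value estimates for states the mentor visits, so the learner is guided toward mentor-visited regions while still being free to surpass the mentor through its own TD updates, which is consistent with the abstract's claim that the approach does not rely on the mentor's actions.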