URI | http://purl.tuc.gr/dl/dias/65B6F1D7-C3C5-4D8C-9AAE-863D5E1AC00D | - |
Identifier | https://doi.org/10.3390/systems11030134 | - |
Identifier | https://www.mdpi.com/2079-8954/11/3/134 | - |
Language | en | - |
Extent | 28 pages | en |
Title | Deep reinforcement learning reward function design for autonomous driving in lane-free traffic | en |
Creator | Karalakou Athanasia | en |
Creator | Καραλακου Αθανασια | el |
Creator | Troullinos Dimitrios | en |
Creator | Τρουλλινος Δημητριος | el |
Creator | Chalkiadakis Georgios | en |
Creator | Χαλκιαδακης Γεωργιος | el |
Creator | Papageorgiou Markos | en |
Creator | Παπαγεωργιου Μαρκος | el |
Publisher | MDPI | en |
Description | The research leading to these results has received funding from the European Research Council under the European Union’s Horizon 2020 Research and Innovation programme / ERC Grant Agreement no. 833915, project TrafficFluid. | en |
Content Summary | Lane-free traffic is a novel research domain, in which vehicles no longer adhere to the notion of lanes and instead consider the whole lateral space within the road boundaries. This constitutes an entirely different problem domain for autonomous driving compared to lane-based traffic, as there is no leader vehicle or lane-changing operation. Therefore, the observations of the vehicles need to properly accommodate the lane-free environment without carrying over bias from lane-based approaches. The recent successes of deep reinforcement learning (DRL) for lane-based approaches, along with emerging work for lane-free traffic environments, render DRL for lane-free traffic an interesting endeavor to investigate. In this paper, we provide an extensive look at the DRL formulation, focusing on the reward function of a lane-free autonomous driving agent. Our main interest is designing an effective reward function, as the reward model is crucial in determining the overall efficiency of the resulting policy. Specifically, we construct different components of reward functions tied to the environment at various levels of information. Then, we combine and collate the aforementioned components, and focus on attaining a reward function that results in a policy that both reduces collisions among vehicles and addresses their requirement of maintaining a desired speed. Additionally, we employ two popular DRL algorithms, namely deep Q-networks (enhanced with some commonly used extensions) and deep deterministic policy gradient (DDPG), the latter of which results in better policies. Our experiments provide a thorough investigative study of the effectiveness of different combinations of the various reward components we propose, and confirm that our DRL-employing autonomous vehicle is able to gradually learn effective policies in environments with varying levels of difficulty, especially when all of the proposed reward components are properly combined. | en |
Type of Item | Peer-Reviewed Journal Publication | en |
Type of Item | Δημοσίευση σε Περιοδικό με Κριτές | el |
License | http://creativecommons.org/licenses/by/4.0/ | en |
Date of Item | 2024-06-28 | - |
Date of Publication | 2023 | - |
Subject | Deep reinforcement learning | en |
Subject | Lane-free traffic | en |
Subject | Autonomous driving | en |
Bibliographic Citation | A. Karalakou, D. Troullinos, G. Chalkiadakis and M. Papageorgiou, “Deep reinforcement learning reward function design for autonomous driving in lane-free traffic,” Systems, vol. 11, no. 3, Mar. 2023, doi: 10.3390/systems11030134. | en |