The work titled "Sequentially optimal repeated coalition formation under uncertainty" by the author(s) Craig Boutilier is licensed under a Creative Commons Attribution 4.0 International license.
Bibliographic Reference
G. Chalkiadakis, C. Boutilier, "Sequentially optimal repeated coalition formation under uncertainty", Autonomous Agents and Multi-Agent Systems, vol. 24, no. 3, pp. 441-484, 2012. doi: 10.1007/s10458-010-9157-y
https://doi.org/10.1007/s10458-010-9157-y
Coalition formation is a central problem in multiagent systems research, but most models assume common knowledge of agent types. In practice, however, agents are often unsure of the types or capabilities of their potential partners, but gain information about these capabilities through repeated interaction. In this paper, we propose a novel Bayesian, model-based reinforcement learning framework for this problem, assuming that coalitions are formed (and tasks undertaken) repeatedly. Our model allows agents to refine their beliefs about the types of others as they interact within a coalition. The model also allows agents to make explicit tradeoffs between exploration (forming "new" coalitions to learn more about the types of new potential partners) and exploitation (relying on partners about which more is known), using value of information to define optimal exploration policies. Our framework effectively integrates decision making during repeated coalition formation under type uncertainty with Bayesian reinforcement learning techniques. Specifically, we present several learning algorithms to approximate the optimal Bayesian solution to the repeated coalition formation and type-learning problem, providing tractable means to ensure good sequential performance. We evaluate our algorithms in a variety of settings, showing that one method in particular exhibits consistently good performance in practice. We also demonstrate the ability of our model to facilitate knowledge transfer across different dynamic tasks.
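To make the belief-refinement idea in the abstract concrete, the following is a minimal illustrative sketch (not the authors' algorithms from the paper): an agent maintains a Bayesian posterior over a partner's unknown type and updates it after each coalition outcome, with the expected coalition value used to weigh exploitation against exploration. The type space, success probabilities, and reward value are hypothetical and chosen purely for illustration.

```python
# Illustrative sketch of Bayesian belief maintenance over a partner's type,
# in the spirit of the repeated coalition formation model described above.
# All names, type spaces, and probabilities are hypothetical assumptions.

import numpy as np

# Hypothetical discrete type space for a potential partner, with a uniform prior.
TYPES = ["weak", "average", "strong"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])

# Hypothetical probability that a coalition task succeeds given the partner's type.
P_SUCCESS_GIVEN_TYPE = np.array([0.2, 0.5, 0.9])


def update_belief(belief: np.ndarray, task_succeeded: bool) -> np.ndarray:
    """Bayesian update of the belief over the partner's type after one interaction."""
    likelihood = P_SUCCESS_GIVEN_TYPE if task_succeeded else 1.0 - P_SUCCESS_GIVEN_TYPE
    posterior = belief * likelihood
    return posterior / posterior.sum()


def expected_value(belief: np.ndarray, reward_on_success: float = 10.0) -> float:
    """Expected one-shot value of forming a coalition with this partner."""
    return reward_on_success * float(belief @ P_SUCCESS_GIVEN_TYPE)


# Example: beliefs sharpen as the agent repeatedly observes outcomes with the
# same partner; comparing expected values across partners is the basis for
# trading off exploitation (a well-known partner) against exploration (a new one).
belief = prior
for outcome in [True, True, False, True]:
    belief = update_belief(belief, outcome)
    print(belief.round(3), "expected value:", round(expected_value(belief), 2))
```

The full framework in the paper goes well beyond this sketch, using value of information over sequences of coalition formation decisions rather than a single one-step expected value.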