Abstract
Computer generated forces (CGFs) inhabiting air combat training simulations must show realistic and adaptive behavior to effectively perform their roles as allies and adversaries. In earlier work, behavior for these CGFs was successfully generated using reinforcement learning. However, because missile hits are subject to chance (known as the probability-of-kill), the CGFs have in certain cases been improperly rewarded and punished. We surmise that taking this probability-of-kill into account in the reward function will improve performance. To remedy the false rewards and punishments, a new reward function is proposed that rewards agents based on the expected outcome of their actions rather than on the realized outcome. Tests show that the use of this function significantly improves the performance of the CGFs in various scenarios, compared to the previous reward function and a naïve baseline. The results indicate that the new reward function allows the CGFs to generate more intelligent behavior, which enables better training simulations.
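The central idea, rewarding the agent for the expected outcome of a shot rather than for its chance-dependent realized outcome, can be sketched in a few lines. The Python below is a minimal illustrative sketch, not the paper's actual implementation: the reward values, function names, and the linear form of the expectation are all assumptions made for the example.

```python
import random

# Assumed reward constants (illustrative only, not taken from the paper).
KILL_REWARD = 1.0    # assumed reward for destroying the opponent
MISS_PENALTY = -0.1  # assumed cost of expending a missile that misses

def outcome_reward(hit: bool) -> float:
    """Outcome-based reward: depends on the stochastic hit result, so a
    well-chosen shot that happens to miss is punished anyway."""
    return KILL_REWARD if hit else MISS_PENALTY

def expected_reward(p_kill: float) -> float:
    """Expectation-based reward: weights both outcomes by the
    probability-of-kill, so the signal reflects the quality of the shot
    decision rather than the luck of the draw."""
    return p_kill * KILL_REWARD + (1.0 - p_kill) * MISS_PENALTY

# Example: a shot taken with a 0.7 probability-of-kill.
p_kill = 0.7
hit = random.random() < p_kill           # chance decides the actual outcome
print(outcome_reward(hit))               # noisy: 1.0 or -0.1
print(expected_reward(p_kill))           # stable: 0.67 regardless of outcome
```

Because the expected reward is deterministic given the probability-of-kill, the agent is no longer punished for a good shot that happens to miss (or rewarded for a poor shot that happens to hit), which is precisely the false-reward problem the abstract describes.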
Original language | English |
---|---|
Title of host publication | Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics |
Publisher | IEEE Press |
Publication status | Published - 2015 |
Event | IEEE International Conference on Systems, Man and Cybernetics 2015, Hong Kong, China, 9 Oct 2015 → 12 Oct 2015 |
Conference
Conference | IEEE International Conference on Systems, Man and Cybernetics 2015 |
---|---|
Country/Territory | China |
City | Hong Kong |
Period | 9 Oct 2015 → 12 Oct 2015 |