Abstract

Many computer games feature non-player character (NPC) teammates and companions; however, playing with or against NPCs can be frustrating when they perform unexpectedly. These frustrations can be avoided if the NPC has the ability to explain its actions and motivations. When NPC behavior is controlled by a black-box AI system, it can be hard to generate the necessary explanations. In this paper, we present a system that generates human-like, natural language explanations, called rationales, of an agent's actions in a game environment regardless of how the decisions are made by a black-box AI. We outline a robust data collection and neural network training pipeline that can be used to gather think-aloud data and train a rationale generation model for any similar sequential, turn-based decision-making task. A human-subject study shows that our technique produces believable rationales for an agent playing the game Frogger. We conclude with insights about how people perceive automatically generated rationales.

Document Type

Article

Publication Date

2018

Notes/Citation Information

Published in Joint Proceedings of the AIIDE 2018 Workshops, v. 2282, The 5th Experimental AI in Games Workshop (EXAG), paper 122.

Copyright © 2018 for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.

The copyright holders have granted the permission for posting the article here.
