Welcome to the webpage for the ICLR 2020 SARFA saliency paper.

Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution

arXiv PDF · OpenReview · Code + Data

As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant to the agent's chosen action. Existing perturbation-based approaches to computing saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our proposed approach, SARFA (Specific and Relevant Feature Attribution), generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of a perturbation on the relative expected reward of the action to be explained. The second downweights irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare SARFA with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than those of existing approaches.
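
To give a concrete picture of how the two desiderata combine, the sketch below scores a single perturbed feature from an agent's Q-values: specificity as the drop in the chosen action's probability, relevance as a measure of how little the distribution over the remaining actions changes, combined via a harmonic mean. This is a minimal NumPy illustration of the idea, not the released implementation (see the Code + Data link for that); the function name sarfa_saliency and numerical details such as the softmax normalization, KL direction, and epsilon smoothing are simplifications of the definitions in the paper.

import numpy as np

def softmax(q):
    # Turn Q-values into a probability distribution over actions (numerically stable).
    z = np.exp(q - q.max())
    return z / z.sum()

def sarfa_saliency(q_original, q_perturbed, action):
    # q_original:  Q-values for the unperturbed state (1-D array)
    # q_perturbed: Q-values after perturbing one input feature
    # action:      index of the action being explained
    p = softmax(q_original)
    p_pert = softmax(q_perturbed)

    # Specificity: how much the perturbation lowers the probability of the explained action.
    delta_p = p[action] - p_pert[action]

    # Relevance: renormalize the probabilities of the *other* actions and measure how
    # little that distribution shifts (small KL divergence -> high relevance).
    rem = np.delete(p, action)
    rem_pert = np.delete(p_pert, action)
    rem /= rem.sum()
    rem_pert /= rem_pert.sum()
    kl = np.sum(rem_pert * np.log((rem_pert + 1e-12) / (rem + 1e-12)))
    k = 1.0 / (1.0 + kl)

    # Harmonic mean of specificity and relevance; only positive drops count as salient.
    if delta_p <= 0.0 or k <= 0.0:
        return 0.0
    return 2.0 * delta_p * k / (delta_p + k)

Running this for every candidate perturbation (e.g., removing one chess piece or blurring one image patch at a time) and assembling the per-feature scores yields the saliency map for the agent's move.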

Citation

@inproceedings{salrl:iclr20,
  author = {Piyush Gupta and Nikaash Puri and Sukriti Verma and Dhruv Kayastha and Shripad Deshmukh and Balaji Krishnamurthy and Sameer Singh},
  title = {Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year = {2020}
}

More Details
