A Distributional Analogue to the Successor Representation

Harley Wiltzer 2 3
Jesse Farebrother 1 2 3
Arthur Gretton 1 4
Yunhao Tang 1
André Barreto 1
Will Dabney 1
Marc G. Bellemare 2 3
Mark Rowland 1

Abstract

This paper contributes a new approach for distributional reinforcement learning which elucidates a clean separation of transition structure and reward in the learning process. Analogous to how the successor representation (SR) describes the expected consequences of behaving according to a given policy, our distributional successor measure (SM) describes the distributional consequences of this behaviour. We formulate the distributional SM as a distribution over distributions and provide theory connecting it with distributional and model-based reinforcement learning. Moreover, we propose an algorithm that learns the distributional SM from data by minimizing a two-level maximum mean discrepancy. Key to our method are a number of algorithmic techniques that are independently valuable for learning generative models of state. As an illustration of the usefulness of the distributional SM, we show that it enables zero-shot risk-sensitive policy evaluation in a way that was not previously possible.
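As a rough illustration of the "two-level maximum mean discrepancy" mentioned above, the sketch below estimates a nested MMD between two collections of sampled state distributions, each represented by a finite set of state samples. The Gaussian base kernel on states and the exponentiated-MMD kernel on distributions are illustrative assumptions, and the function names (gaussian_kernel, mmd_sq, two_level_mmd_sq) are hypothetical rather than taken from the paper's codebase.

# A minimal sketch of a two-level (nested) MMD estimate, assuming a Gaussian
# base kernel on states and an exponentiated-MMD kernel on distributions.
# These kernel choices are illustrative assumptions, not the paper's exact setup.
import numpy as np


def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel matrix between two sets of state vectors."""
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))


def mmd_sq(x, y, sigma=1.0):
    """Biased squared MMD between two empirical state distributions."""
    return (
        gaussian_kernel(x, x, sigma).mean()
        - 2.0 * gaussian_kernel(x, y, sigma).mean()
        + gaussian_kernel(y, y, sigma).mean()
    )


def distribution_kernel(x, y, sigma=1.0, bandwidth=1.0):
    """Kernel on distributions: exponentiated negative inner MMD^2 (assumed form)."""
    return np.exp(-mmd_sq(x, y, sigma) / (2.0 * bandwidth ** 2))


def two_level_mmd_sq(models_p, models_q, sigma=1.0, bandwidth=1.0):
    """Squared MMD between two collections of sampled distributions.

    Each element of models_p / models_q is an (n, d) array of states drawn
    from one sampled distribution ("atom") over states.
    """
    def mean_kernel(a, b):
        return np.mean(
            [distribution_kernel(x, y, sigma, bandwidth) for x in a for y in b]
        )

    return (
        mean_kernel(models_p, models_p)
        - 2.0 * mean_kernel(models_p, models_q)
        + mean_kernel(models_q, models_q)
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy collections of sampled "distributions over states".
    p = [rng.normal(0.0, 1.0, size=(32, 2)) for _ in range(8)]
    q = [rng.normal(0.5, 1.0, size=(32, 2)) for _ in range(8)]
    print(two_level_mmd_sq(p, q))

In the setting the abstract describes, the inner sample sets would come from a generative model of the distributional successor measure; here they are simply toy Gaussian samples used to exercise the nested estimator.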



Citing

To cite this paper, please use the following reference:

@inproceedings{wiltzer24dsm,
	title        = {A Distributional Analogue to the Successor Representation},
	author       = {
		Wiltzer, Harley and Farebrother, Jesse and Gretton, Arthur and Tang, Yunhao
		and Barreto, André and Dabney, Will and Bellemare, Marc G. and Rowland, Mark
	},
	year         = 2024,
	booktitle    = {International Conference on Machine Learning (ICML)},
}