REVD
REVD(
observation_space: gym.Space, action_space: gym.Space, device: str = 'cpu',
beta: float = 0.05, kappa: float = 2.5e-05, latent_dim: int = 128, alpha: float = 0.5,
k: int = 5, average_divergence: bool = False
)
Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning (REVD). See paper: https://openreview.net/pdf?id=V2pw1VYMrDo
Args
- observation_space (Space) : The observation space of the environment.
- action_space (Space) : The action space of the environment.
- device (str) : Device (cpu, cuda, ...) on which the code should be run.
- beta (float) : The initial weighting coefficient of the intrinsic rewards.
- kappa (float) : The decay rate of the weighting coefficient.
- latent_dim (int) : The dimension of encoding vectors.
- alpha (float) : The order of the Rényi divergence.
- k (int) : Use the k-th nearest neighbors for divergence estimation.
- average_divergence (bool) : Whether to use the average of the divergence estimates.
Returns
Instance of REVD.
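A minimal instantiation sketch. The import path rllte.xplore.reward, the use of Gymnasium vectorized environments, and the choice of Pendulum-v1 are assumptions for illustration, not stated on this page:

# Sketch only: import path and environment are assumptions.
import gymnasium as gym
from rllte.xplore.reward import REVD  # assumed import path

# Vectorized environment with n_envs = 4 parallel copies.
envs = gym.vector.SyncVectorEnv([lambda: gym.make("Pendulum-v1") for _ in range(4)])

# Build the module with the defaults shown in the signature above.
revd = REVD(
    observation_space=envs.single_observation_space,
    action_space=envs.single_action_space,
    device="cpu",
    beta=0.05,
    kappa=2.5e-05,
    latent_dim=128,
    alpha=0.5,
    k=5,
    average_divergence=False,
)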
Methods:
.compute_irs
Compute the intrinsic rewards for the current samples.
Args
- samples (Dict) : The collected samples. A python dict like
  {obs (n_steps, n_envs, *obs_shape), actions (n_steps, n_envs, *action_shape),
  rewards (n_steps, n_envs), next_obs (n_steps, n_envs, *obs_shape)}.
- step (int) : The global training step.
Returns
The intrinsic rewards.
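A hedged sketch of assembling the samples dict and calling .compute_irs, continuing the instantiation above; the use of torch tensors and random placeholder data is an assumption:

import torch

n_steps, n_envs = 128, 4
obs_shape = envs.single_observation_space.shape    # e.g. (3,) for Pendulum-v1
action_shape = envs.single_action_space.shape      # e.g. (1,) for Pendulum-v1

# Placeholder rollout data following the shapes documented above.
samples = {
    "obs": torch.rand(n_steps, n_envs, *obs_shape),
    "actions": torch.rand(n_steps, n_envs, *action_shape),
    "rewards": torch.rand(n_steps, n_envs),
    "next_obs": torch.rand(n_steps, n_envs, *obs_shape),
}

# Expected to return one intrinsic reward per transition,
# i.e. shape (n_steps, n_envs) (the exact shape is an assumption).
intrinsic_rewards = revd.compute_irs(samples, step=0)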
.add
Add new samples to the intrinsic reward module.
.update
Update the intrinsic reward module if necessary.
Args
- samples (Dict) : The collected samples. A python dict like
  {obs (n_steps, n_envs, *obs_shape), actions (n_steps, n_envs, *action_shape),
  rewards (n_steps, n_envs), next_obs (n_steps, n_envs, *obs_shape)}.
Returns
None
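Putting the three methods together, a possible per-rollout call order; this ordering, and passing samples to .add, are assumptions, since the page only documents the individual methods:

# Hypothetical per-rollout usage, continuing the sketches above.
revd.add(samples)                                      # add the new samples to the module
intrinsic_rewards = revd.compute_irs(samples, step=0)  # compute intrinsic rewards
revd.update(samples)                                   # update the module if necessary; returns None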