Atari
Scores and learning curves of various RL algorithms on the full Atari benchmark.

- Environment link: https://github.com/Farama-Foundation/Arcade-Learning-Environment
- Number of environments: 57
- Number of training steps: 10,000,000
- Number of seeds: 10
- Added algorithms: [PPO]
Methods:
.load_scores
Returns the final test scores.
Args
- env_id (str) : Environment ID.
- agent_id (str) : Agent name.
Returns
Test scores data array with shape (N_SEEDS, N_POINTS).
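A minimal usage sketch; the `Atari` class name, the import path, and the `Pong-v5` environment ID are illustrative assumptions, so adapt them to your install:

```python
import numpy as np
# Assumption: the benchmark class is importable as below.
from rllte.hub import Atari

atari = Atari()
# Final test scores for PPO on Pong; env/agent IDs are placeholders.
scores = atari.load_scores(env_id="Pong-v5", agent_id="PPO")
print(scores.shape)  # (N_SEEDS, N_POINTS)
print("mean:", np.mean(scores), "std:", np.std(scores))
```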
.load_curves
Returns learning curves as a dict of NumPy arrays.
Args
- env_id (str) : Environment ID.
- agent_id (str) : Agent name.
Returns
Learning curves data with structure:
- train : np.ndarray(shape=(N_SEEDS, N_POINTS))
- eval : np.ndarray(shape=(N_SEEDS, N_POINTS))
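A sketch of plotting the mean training curve across seeds; as above, the class name, import path, and IDs are assumptions:

```python
import matplotlib.pyplot as plt
import numpy as np
from rllte.hub import Atari  # assumed import path

curves = Atari().load_curves(env_id="Pong-v5", agent_id="PPO")
train = curves["train"]  # shape (N_SEEDS, N_POINTS)
mean, std = train.mean(axis=0), train.std(axis=0)
# Placeholder x-axis: evaluation-point indices, not raw environment steps.
points = np.arange(mean.size)

plt.plot(points, mean, label="PPO (train)")
plt.fill_between(points, mean - std, mean + std, alpha=0.3)
plt.xlabel("evaluation point")
plt.ylabel("episode return")
plt.legend()
plt.show()
```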
.load_models
Load the model from the hub.
Args
- env_id (str) : Environment ID.
- agent (str) : Agent name.
- seed (int) : The seed to load.
- device (str) : The device to load the model on.
Returns
The loaded model.
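A sketch of loading a trained model and querying it, assuming the returned object is a torch module that maps a batched observation to an action (this interface is an assumption, not confirmed by these docs):

```python
import torch
from rllte.hub import Atari  # assumed import path

# Load the seed-0 PPO model for Pong onto the CPU.
model = Atari().load_models(env_id="Pong-v5", agent="PPO", seed=0, device="cpu")

# Assumption: the model accepts a batched stacked-frame observation tensor.
obs = torch.zeros(1, 4, 84, 84)  # placeholder observation
with torch.no_grad():
    action = model(obs)
print(action)
```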
.load_apis
Load the training API from the hub.
Args
- env_id (str) : Environment ID.
- agent (str) : Agent name.
- seed (int) : The seed to load.
- device (str) : The device to load the model on.
Returns
The loaded API.
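A sketch of resuming training through the loaded API, assuming it exposes a `train()` entry point with a `num_train_steps` parameter (both are assumptions for illustration; consult the library docs for the real interface):

```python
from rllte.hub import Atari  # assumed import path

# Load the training API for PPO on Pong, seed 0, on CPU.
api = Atari().load_apis(env_id="Pong-v5", agent="PPO", seed=0, device="cpu")

# Assumption: the API object exposes a train() method.
api.train(num_train_steps=1_000_000)
```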