
Commit b212793

megan-klaiber authored and araffin committed
Implement HER (DLR-RM#120)
* Added working HER version; online sampling is missing.
* Updated test_her.
* Added first version of online HER sampling. Still problems with tensor dimensions.
* Reformat
* Fixed tests
* Added some comments.
* Updated changelog.
* Add missing init file
* Fixed some small bugs.
* Reduced arguments for HER, small changes.
* Added getattr. Fixed bug for online sampling.
* Updated save/load functions. Small changes.
* Added HER to init.
* Updated save method.
* Updated HER ratio.
* Move obs_wrapper
* Added DQN test.
* Fix potential bug
* Offline and online HER share the same sample_goal function.
* Changed lists into arrays.
* Updated HER test.
* Fix online sampling
* Fixed action bug. Updated time limit for episodes.
* Updated convert_dict method to take keys as arguments.
* Renamed obs dict wrapper.
* Seed bit flipping env
* Remove get_episode_dict
* Add fast online sampling version
* Added documentation.
* Vectorized reward computation
* Vectorized goal sampling
* Update time limit for episodes in online HER sampling.
* Fix max episode length inference
* Bug fix for Fetch envs
* Fix for HER + gSDE
* Reformat (new black version)
* Added info dict to compute new reward. Check her_replay_buffer again.
* Fix info buffer
* Updated done flag.
* Fixes for gSDE
* Offline HER version now uses HerReplayBuffer as episode storage.
* Fix num_timesteps computation
* Fix get torch params
* Vectorized version for offline sampling.
* Modified offline HER sampling to use the sample method of her_replay_buffer
* Updated HER tests.
* Updated documentation
* Cleanup docstrings
* Updated to review comments
* Fix pytype
* Update according to review comments.
* Removed random goal strategy. Updated sample transitions.
* Updated migration. Removed time signal removal.
* Update doc
* Fix potential load issue
* Add VecNormalize support for dict obs
* Updated saving/loading replay buffer for HER.
* Fix test memory usage
* Fixed save/load replay buffer.
* Fixed save/load replay buffer
* Fixed transition index after loading replay buffer in online sampling
* Better error handling
* Add tests for get_time_limit
* More tests for VecNormalize with dict obs
* Update doc
* Improve HER description
* Add test for sde support
* Add comments
* Add comments
* Remove check that was always valid
* Fix for terminal observation
* Updated buffer size in offline version and reset of HER buffer
* Reformat
* Update doc
* Remove np.empty + add doc
* Fix loading
* Updated loading replay buffer
* Separate online and offline sampling + bug fixes
* Update tensorboard log name
* Version bump
* Bug fix for special case

Co-authored-by: Antonin Raffin <[email protected]>
Co-authored-by: Antonin RAFFIN <[email protected]>
1 parent bfdfce5 commit b212793

File tree: 2 files changed, +635 -0 lines changed

Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
from typing import Dict

import numpy as np
from gym import spaces

from stable_baselines3.common.vec_env import VecEnv, VecEnvWrapper


class ObsDictWrapper(VecEnvWrapper):
    """
    Wrapper for a VecEnv which overrides the observation space for Hindsight Experience Replay to support dict observations.

    :param env: The vectorized environment to wrap.
    """

    def __init__(self, venv: VecEnv):
        super(ObsDictWrapper, self).__init__(venv, venv.observation_space, venv.action_space)

        self.venv = venv

        self.spaces = list(venv.observation_space.spaces.values())

        # get dimensions of observation and goal
        if isinstance(self.spaces[0], spaces.Discrete):
            self.obs_dim = 1
            self.goal_dim = 1
        else:
            self.obs_dim = venv.observation_space.spaces["observation"].shape[0]
            self.goal_dim = venv.observation_space.spaces["achieved_goal"].shape[0]

        # new observation space with concatenated observation and (desired) goal
        # for the different types of spaces
        if isinstance(self.spaces[0], spaces.Box):
            low_values = np.concatenate(
                [venv.observation_space.spaces["observation"].low, venv.observation_space.spaces["desired_goal"].low]
            )
            high_values = np.concatenate(
                [venv.observation_space.spaces["observation"].high, venv.observation_space.spaces["desired_goal"].high]
            )
            self.observation_space = spaces.Box(low_values, high_values, dtype=np.float32)
        elif isinstance(self.spaces[0], spaces.MultiBinary):
            total_dim = self.obs_dim + self.goal_dim
            self.observation_space = spaces.MultiBinary(total_dim)
        elif isinstance(self.spaces[0], spaces.Discrete):
            dimensions = [venv.observation_space.spaces["observation"].n, venv.observation_space.spaces["desired_goal"].n]
            self.observation_space = spaces.MultiDiscrete(dimensions)
        else:
            raise NotImplementedError(f"{type(self.spaces[0])} space is not supported")

    def reset(self):
        return self.venv.reset()

    def step_wait(self):
        return self.venv.step_wait()

    @staticmethod
    def convert_dict(
        observation_dict: Dict[str, np.ndarray], observation_key: str = "observation", goal_key: str = "desired_goal"
    ) -> np.ndarray:
        """
        Concatenate observation and (desired) goal of observation dict.

        :param observation_dict: Dictionary with observation.
        :param observation_key: Key of observation in dictionary.
        :param goal_key: Key of (desired) goal in dictionary.
        :return: Concatenated observation.
        """
        return np.concatenate([observation_dict[observation_key], observation_dict[goal_key]], axis=-1)
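To illustrate what `convert_dict` does with a batched dict observation, here is a minimal sketch using only NumPy. The observation values are made up for the example; it performs the same `np.concatenate(..., axis=-1)` over the `observation` and `desired_goal` entries that the static method applies:

```python
import numpy as np

# Hypothetical batch of 2 dict observations from a vectorized goal env
obs_dict = {
    "observation": np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]),
    "achieved_goal": np.array([[0.0], [1.0]]),
    "desired_goal": np.array([[1.0], [1.0]]),
}

# Same operation as ObsDictWrapper.convert_dict with the default keys:
# concatenate observation and desired goal along the last axis
flat_obs = np.concatenate([obs_dict["observation"], obs_dict["desired_goal"]], axis=-1)

print(flat_obs.shape)  # (2, 4): obs_dim 3 + goal_dim 1, batch of 2
```

The `achieved_goal` entry is intentionally left out of the concatenation; it is only needed when the replay buffer relabels goals and recomputes rewards.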
