Closed
Labels: enhancement (New feature or request), question (Further information is requested)
Description
Hello,
I’m trying to train a DDPG+HER agent that interacts with a custom environment and takes as input an RGB image of the environment.
From what I understand, the previous version of stable baselines only supported 1D observation spaces in HER (as also indicated in HERGoalEnvWrapper), thus excluding image observations:
```python
if len(goal_space_shape) == 2:
    assert goal_space_shape[1] == 1, "Only 1D observation spaces are supported yet"
else:
    assert len(goal_space_shape) == 1, "Only 1D observation spaces are supported yet"
```
In this new version of stable baselines, I see no explicit assertion against 2D spaces, but in ObsDictWrapper the observation and goal dimensions are taken from the first entry of the shape only:
```python
if isinstance(self.spaces[0], spaces.Discrete):
    self.obs_dim = 1
    self.goal_dim = 1
else:
    self.obs_dim = venv.observation_space.spaces["observation"].shape[0]
    self.goal_dim = venv.observation_space.spaces["achieved_goal"].shape[0]
```
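To illustrate why taking `shape[0]` is problematic for images, here is a minimal sketch (the 64x64x3 shape is a hypothetical example, not from the library): for an image observation, `shape[0]` only captures the height, while the number of values HER would actually need to store per observation is the full flattened size.

```python
import numpy as np

# Hypothetical image observation shape: a 64x64 RGB frame.
obs_shape = (64, 64, 3)

# What ObsDictWrapper computes: only the first dimension (the image height).
obs_dim_as_computed = obs_shape[0]

# What a flat 1D representation of the same observation would require.
flat_dim = int(np.prod(obs_shape))

print(obs_dim_as_computed)  # 64
print(flat_dim)             # 12288
```

So with an image observation space the wrapper would silently under-report the observation and goal dimensions rather than raise an assertion error.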
Question
Is it possible to train a DDPG+HER agent from images using the implementation of stable baselines 3?