
[Question] Using images to train DDPG+HER agent #287

@SilviaZirino

Description

Hello,
I’m trying to train a DDPG+HER agent that interacts with a custom environment and takes an RGB image of the environment as input.
From what I understand, in the previous version of Stable Baselines only 1D observation spaces were supported in HER (as also indicated in HERGoalEnvWrapper), which excludes image observations:

    if len(goal_space_shape) == 2:
        assert goal_space_shape[1] == 1, "Only 1D observation spaces are supported yet"
    else:
        assert len(goal_space_shape) == 1, "Only 1D observation spaces are supported yet"
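
For example (a minimal sketch, assuming gym’s spaces module; the shapes are illustrative), a flat goal space passes this check while an RGB image goal space does not:

    import numpy as np
    from gym import spaces

    # A flat, 1D goal space satisfies the assertion:
    vector_goal = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
    print(len(vector_goal.shape))  # 1 -> accepted

    # An RGB image goal space has a 3D shape (H, W, C), so the assertion fails:
    image_goal = spaces.Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)
    print(len(image_goal.shape))  # 3 -> would raise AssertionError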

In the new version of Stable Baselines3 I see no explicit assertion against 2D spaces, but in ObsDictWrapper the observation and goal dimensions are taken from the first entry of the shape only:

    if isinstance(self.spaces[0], spaces.Discrete):
        self.obs_dim = 1
        self.goal_dim = 1
    else:
        self.obs_dim = venv.observation_space.spaces["observation"].shape[0]
        self.goal_dim = venv.observation_space.spaces["achieved_goal"].shape[0]
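
If I read this correctly, for an image observation shape[0] would be just the height of the image rather than the flattened size. A minimal sketch (assuming gym’s spaces.Dict and an illustrative 84×84×3 image shape):

    import numpy as np
    from gym import spaces

    # Dict observation space where both observations and goals are RGB images
    image_space = spaces.Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)
    obs_space = spaces.Dict({
        "observation": image_space,
        "achieved_goal": image_space,
        "desired_goal": image_space,
    })

    # shape[0] gives only the first dimension (the image height), not the flattened size:
    print(obs_space.spaces["observation"].shape[0])             # 84
    print(int(np.prod(obs_space.spaces["observation"].shape)))  # 21168

So concatenating observations and goals into a flat vector based on these dimensions would not work for images.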

Question

Is it possible to train a DDPG+HER agent from images using the Stable Baselines3 implementation?
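
For concreteness, here is a rough sketch of the kind of environment I have in mind (purely illustrative: the gym.GoalEnv interface with placeholder dynamics and an assumed 84×84×3 image shape):

    import numpy as np
    import gym
    from gym import spaces

    class ImageGoalEnv(gym.GoalEnv):
        """Illustrative goal-conditioned env whose observations and goals are RGB images."""

        def __init__(self, img_shape=(84, 84, 3)):
            image_space = spaces.Box(low=0, high=255, shape=img_shape, dtype=np.uint8)
            self.observation_space = spaces.Dict({
                "observation": image_space,
                "achieved_goal": image_space,
                "desired_goal": image_space,
            })
            self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

        def reset(self):
            # Placeholder: a real environment would render the scene here
            return {key: np.zeros(space.shape, dtype=space.dtype)
                    for key, space in self.observation_space.spaces.items()}

        def step(self, action):
            obs = self.reset()  # placeholder dynamics
            reward = self.compute_reward(obs["achieved_goal"], obs["desired_goal"], {})
            return obs, float(reward), False, {}

        def compute_reward(self, achieved_goal, desired_goal, info):
            # Sparse reward: 0 when the achieved image matches the goal image, -1 otherwise
            return 0.0 if np.array_equal(achieved_goal, desired_goal) else -1.0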
