
Commit da99706

Author: John Schulman
Commit message: ppo and trpo
1 parent 80f94f8 · commit da99706

31 files changed: +2191 −74 lines

.gitignore

Lines changed: 6 additions & 3 deletions
@@ -7,12 +7,11 @@
 # Setuptools distribution and build folders.
 /dist/
 /build
+keys/
 
 # Virtualenv
 /env
 
-# Python egg metadata, regenerated from source files by setuptools.
-/*.egg-info
 
 *.sublime-project
 *.sublime-workspace
@@ -26,4 +25,8 @@ ghostdriver.log
 
 htmlcov
 
-junk
+junk
+src
+
+*.egg-info
+.cache

README.md

Lines changed: 5 additions & 55 deletions
@@ -1,8 +1,8 @@
 <img src="data/logo.jpg" width=25% align="right" />
 
-# Baselines
+# BASELINES
 
-We're releasing OpenAI Baselines, a set of high-quality implementations of reinforcement learning algorithms. To start, we're making available an open source version of Deep Q-Learning and three of its variants.
+We're releasing OpenAI Baselines, a set of high-quality implementations of reinforcement learning algorithms.
 
 These algorithms will make it easier for the research community to replicate, refine, and identify new ideas, and will create good baselines to build research on top of. Our DQN implementation and its variants are roughly on par with the scores in published papers. We expect they will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones.
 
@@ -12,56 +12,6 @@ You can install it by typing:
 pip install baselines
 ```
 
-
-## If you are curious.
-
-##### Train a Cartpole agent and watch it play once it converges!
-
-Here's a list of commands to run to quickly get a working example:
-
-<img src="data/cartpole.gif" width="25%" />
-
-
-```bash
-# Train model and save the results to cartpole_model.pkl
-python -m baselines.deepq.experiments.train_cartpole
-# Load the model saved in cartpole_model.pkl and visualize the learned policy
-python -m baselines.deepq.experiments.enjoy_cartpole
-```
-
-
-Be sure to check out the source code of [both](baselines/deepq/experiments/train_cartpole.py) [files](baselines/deepq/experiments/enjoy_cartpole.py)!
-
-## If you wish to apply DQN to solve a problem.
-
-Check out our simple agent trained with one stop shop `deepq.learn` function.
-
-- `baselines/deepq/experiments/train_cartpole.py` - train a Cartpole agent.
-- `baselines/deepq/experiments/train_pong.py` - train a Pong agent using convolutional neural networks.
-
-In particular notice that once `deepq.learn` finishes training it returns `act` function which can be used to select actions in the environment. Once trained you can easily save it and load at later time. For both of the files listed above there are complimentary files `enjoy_cartpole.py` and `enjoy_pong.py` respectively, that load and visualize the learned policy.
-
-## If you wish to experiment with the algorithm
-
-##### Check out the examples
-
-
-- `baselines/deepq/experiments/custom_cartpole.py` - Cartpole training with more fine grained control over the internals of DQN algorithm.
-- `baselines/deepq/experiments/atari/train.py` - more robust setup for training at scale.
-
-
-##### Download a pretrained Atari agent
-
-For some research projects it is sometimes useful to have an already trained agent handy. There's a variety of models to choose from. You can list them all by running:
-
-```bash
-python -m baselines.deepq.experiments.atari.download_model
-```
-
-Once you pick a model, you can download it and visualize the learned policy. Be sure to pass `--dueling` flag to visualization script when using dueling models.
-
-```bash
-python -m baselines.deepq.experiments.atari.download_model --blob model-atari-duel-pong-1 --model-dir /tmp/models
-python -m baselines.deepq.experiments.atari.enjoy --model-dir /tmp/models/model-atari-duel-pong-1 --env Pong --dueling
-
-```
+- [DQN](baselines/deepq)
+- [PPO](baselines/pposgd)
+- [TRPO](baselines/trpo_mpi)

baselines/bench/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+from baselines.bench.benchmarks import *
+from baselines.bench.monitor import *
+
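These wildcard imports make `baselines.bench` a single entry point for the benchmark registry and the episode monitor added below. A minimal sketch of the resulting namespace (illustrative only; it simply re-exports the public names defined in the two modules that follow):

```python
# Sketch: names re-exported by baselines.bench via the wildcard imports above.
from baselines import bench

bench.Monitor             # episode-logging env wrapper (monitor.py, via its __all__)
bench.load_results        # reads *.monitor.json files back into a dict (monitor.py)
bench.get_benchmark       # registry lookup helpers (benchmarks.py; underscore-prefixed
bench.register_benchmark  # names such as _BENCHMARKS stay private)
```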

baselines/bench/benchmarks.py

Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
+_atari7 = ['BeamRider', 'Breakout', 'Enduro', 'Pong', 'Qbert', 'Seaquest', 'SpaceInvaders']
+_atariexpl7 = ['Freeway', 'Gravitar', 'MontezumaRevenge', 'Pitfall', 'PrivateEye', 'Solaris', 'Venture']
+
+_BENCHMARKS = []
+
+def register_benchmark(benchmark):
+    for b in _BENCHMARKS:
+        if b['name'] == benchmark['name']:
+            raise ValueError('Benchmark with name %s already registered!'%b['name'])
+    _BENCHMARKS.append(benchmark)
+
+def list_benchmarks():
+    return [b['name'] for b in _BENCHMARKS]
+
+def get_benchmark(benchmark_name):
+    for b in _BENCHMARKS:
+        if b['name'] == benchmark_name:
+            return b
+    raise ValueError('%s not found! Known benchmarks: %s' % (benchmark_name, list_benchmarks()))
+
+def get_task(benchmark, env_id):
+    """Get a task by env_id. Return None if the benchmark doesn't have the env"""
+    return next(filter(lambda task: task['env_id'] == env_id, benchmark['tasks']), None)
+
+_ATARI_SUFFIX = 'NoFrameskip-v4'
+
+register_benchmark({
+    'name' : 'Atari200M',
+    'description' :'7 Atari games from Mnih et al. (2013), with pixel observations, 200M frames',
+    'tasks' : [{'env_id' : _game + _ATARI_SUFFIX, 'trials' : 2, 'num_timesteps' : int(200e6)} for _game in _atari7]
+})
+
+register_benchmark({
+    'name' : 'Atari40M',
+    'description' :'7 Atari games from Mnih et al. (2013), with pixel observations, 40M frames',
+    'tasks' : [{'env_id' : _game + _ATARI_SUFFIX, 'trials' : 2, 'num_timesteps' : int(40e6)} for _game in _atari7]
+})
+
+register_benchmark({
+    'name' : 'Atari1Hr',
+    'description' :'7 Atari games from Mnih et al. (2013), with pixel observations, 1 hour of walltime',
+    'tasks' : [{'env_id' : _game + _ATARI_SUFFIX, 'trials' : 2, 'num_seconds' : 60*60} for _game in _atari7]
+})
+
+register_benchmark({
+    'name' : 'AtariExploration40M',
+    'description' :'7 Atari games emphasizing exploration, with pixel observations, 40M frames',
+    'tasks' : [{'env_id' : _game + _ATARI_SUFFIX, 'trials' : 2, 'num_timesteps' : int(40e6)} for _game in _atariexpl7]
+})
+
+
+_mujocosmall = [
+    'InvertedDoublePendulum-v1', 'InvertedPendulum-v1',
+    'HalfCheetah-v1', 'Hopper-v1', 'Walker2d-v1',
+    'Reacher-v1', 'Swimmer-v1']
+
+register_benchmark({
+    'name' : 'Mujoco1M',
+    'description' : 'Some small 2D MuJoCo tasks, run for 1M timesteps',
+    'tasks' : [{'env_id' : _envid, 'trials' : 3, 'num_timesteps' : int(1e6)} for _envid in _mujocosmall]
+})
+
+_roboschool_mujoco = [
+    'RoboschoolInvertedDoublePendulum-v0', 'RoboschoolInvertedPendulum-v0', # cartpole
+    'RoboschoolHalfCheetah-v0', 'RoboschoolHopper-v0', 'RoboschoolWalker2d-v0', # forward walkers
+    'RoboschoolReacher-v0'
+]
+
+register_benchmark({
+    'name' : 'RoboschoolMujoco2M',
+    'description' : 'Same small 2D tasks, still improving up to 2M',
+    'tasks' : [{'env_id' : _envid, 'trials' : 3, 'num_timesteps' : int(2e6)} for _envid in _roboschool_mujoco]
+})
+
+
+_atari50 = [ # actually 49
+    'Alien', 'Amidar', 'Assault', 'Asterix', 'Asteroids',
+    'Atlantis', 'BankHeist', 'BattleZone', 'BeamRider', 'Bowling',
+    'Boxing', 'Breakout', 'Centipede', 'ChopperCommand', 'CrazyClimber',
+    'DemonAttack', 'DoubleDunk', 'Enduro', 'FishingDerby', 'Freeway',
+    'Frostbite', 'Gopher', 'Gravitar', 'IceHockey', 'Jamesbond',
+    'Kangaroo', 'Krull', 'KungFuMaster', 'MontezumaRevenge', 'MsPacman',
+    'NameThisGame', 'Pitfall', 'Pong', 'PrivateEye', 'Qbert',
+    'Riverraid', 'RoadRunner', 'Robotank', 'Seaquest', 'SpaceInvaders',
+    'StarGunner', 'Tennis', 'TimePilot', 'Tutankham', 'UpNDown',
+    'Venture', 'VideoPinball', 'WizardOfWor', 'Zaxxon',
+]
+
+register_benchmark({
+    'name' : 'Atari50_40M',
+    'description' :'7 Atari games from Mnih et al. (2013), with pixel observations, 40M frames',
+    'tasks' : [{'env_id' : _game + _ATARI_SUFFIX, 'trials' : 3, 'num_timesteps' : int(40e6)} for _game in _atari50]
+})
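Since the registry is just a module-level list of dicts, the helpers above are all that is needed to enumerate benchmarks and their tasks. A minimal usage sketch (benchmark and env names taken from the registrations above; the printed values are illustrative):

```python
# Sketch: querying the benchmark registry defined in baselines/bench/benchmarks.py.
from baselines.bench import benchmarks

print(benchmarks.list_benchmarks())
# e.g. ['Atari200M', 'Atari40M', 'Atari1Hr', 'AtariExploration40M', 'Mujoco1M', ...]

mujoco = benchmarks.get_benchmark('Mujoco1M')   # raises ValueError for unknown names
for task in mujoco['tasks']:
    # each task is a dict, e.g. {'env_id': 'Hopper-v1', 'trials': 3, 'num_timesteps': 1000000}
    print(task['env_id'], task['trials'], task['num_timesteps'])

# get_task returns the matching task dict, or None if the env is not in the benchmark
hopper_task = benchmarks.get_task(mujoco, 'Hopper-v1')
```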

baselines/bench/monitor.py

Lines changed: 146 additions & 0 deletions
@@ -0,0 +1,146 @@
+__all__ = ['Monitor', 'get_monitor_files', 'load_results']
+
+import gym
+from gym.core import Wrapper
+from os import path
+import time
+from glob import glob
+
+try:
+    import ujson as json # Not necessary for monitor writing, but very useful for monitor loading
+except ImportError:
+    import json
+
+class Monitor(Wrapper):
+    EXT = "monitor.json"
+    f = None
+
+    def __init__(self, env, filename, allow_early_resets=False):
+        Wrapper.__init__(self, env=env)
+        self.tstart = time.time()
+        if filename is None:
+            self.f = None
+            self.logger = None
+        else:
+            if not filename.endswith(Monitor.EXT):
+                filename = filename + "." + Monitor.EXT
+            self.f = open(filename, "wt")
+            self.logger = JSONLogger(self.f)
+            self.logger.writekvs({"t_start": self.tstart, "gym_version": gym.__version__,
+                "env_id": env.spec.id if env.spec else 'Unknown'})
+        self.allow_early_resets = allow_early_resets
+        self.rewards = None
+        self.needs_reset = True
+        self.episode_rewards = []
+        self.episode_lengths = []
+        self.total_steps = 0
+        self.current_metadata = {} # extra info that gets injected into each log entry
+        # Useful for metalearning where we're modifying the environment externally
+        # But want our logs to know about these modifications
+
+    def __getstate__(self): # XXX
+        d = self.__dict__.copy()
+        if self.f:
+            del d['f'], d['logger']
+            d['_filename'] = self.f.name
+            d['_num_episodes'] = len(self.episode_rewards)
+        else:
+            d['_filename'] = None
+        return d
+    def __setstate__(self, d):
+        filename = d.pop('_filename')
+        self.__dict__ = d
+        if filename is not None:
+            nlines = d.pop('_num_episodes') + 1
+            self.f = open(filename, "r+t")
+            for _ in range(nlines):
+                self.f.readline()
+            self.f.truncate()
+            self.logger = JSONLogger(self.f)
+
+
+    def reset(self):
+        if not self.allow_early_resets and not self.needs_reset:
+            raise RuntimeError("Tried to reset an environment before done. If you want to allow early resets, wrap your env with Monitor(env, path, allow_early_resets=True)")
+        self.rewards = []
+        self.needs_reset = False
+        return self.env.reset()
+
+    def step(self, action):
+        if self.needs_reset:
+            raise RuntimeError("Tried to step environment that needs reset")
+        ob, rew, done, info = self.env.step(action)
+        self.rewards.append(rew)
+        if done:
+            self.needs_reset = True
+            eprew = sum(self.rewards)
+            eplen = len(self.rewards)
+            epinfo = {"r": eprew, "l": eplen, "t": round(time.time() - self.tstart, 6)}
+            epinfo.update(self.current_metadata)
+            if self.logger:
+                self.logger.writekvs(epinfo)
+            self.episode_rewards.append(eprew)
+            self.episode_lengths.append(eplen)
+            info['episode'] = epinfo
+        self.total_steps += 1
+        return (ob, rew, done, info)
+
+    def close(self):
+        if self.f is not None:
+            self.f.close()
+
+    def get_total_steps(self):
+        return self.total_steps
+
+    def get_episode_rewards(self):
+        return self.episode_rewards
+
+    def get_episode_lengths(self):
+        return self.episode_lengths
+
+class JSONLogger(object):
+    def __init__(self, file):
+        self.file = file
+
+    def writekvs(self, kvs):
+        for k,v in kvs.items():
+            if hasattr(v, 'dtype'):
+                v = v.tolist()
+                kvs[k] = float(v)
+        self.file.write(json.dumps(kvs) + '\n')
+        self.file.flush()
+
+
+class LoadMonitorResultsError(Exception):
+    pass
+
+def get_monitor_files(dir):
+    return glob(path.join(dir, "*" + Monitor.EXT))
+
+def load_results(dir):
+    fnames = get_monitor_files(dir)
+    if not fnames:
+        raise LoadMonitorResultsError("no monitor files of the form *%s found in %s" % (Monitor.EXT, dir))
+    episodes = []
+    headers = []
+    for fname in fnames:
+        with open(fname, 'rt') as fh:
+            lines = fh.readlines()
+        header = json.loads(lines[0])
+        headers.append(header)
+        for line in lines[1:]:
+            episode = json.loads(line)
+            episode['abstime'] = header['t_start'] + episode['t']
+            del episode['t']
+            episodes.append(episode)
+    header0 = headers[0]
+    for header in headers[1:]:
+        assert header['env_id'] == header0['env_id'], "mixing data from two envs"
+    episodes = sorted(episodes, key=lambda e: e['abstime'])
+    return {
+        'env_info': {'env_id': header0['env_id'], 'gym_version': header0['gym_version']},
+        'episode_end_times': [e['abstime'] for e in episodes],
+        'episode_lengths': [e['l'] for e in episodes],
+        'episode_rewards': [e['r'] for e in episodes],
+        'initial_reset_time': min([min(header['t_start'] for header in headers)])
+    }
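To show how the wrapper and `load_results` fit together, here is a minimal sketch of the intended round trip. The env id, output directory, and episode count are illustrative assumptions (and the output directory is assumed to already exist):

```python
# Sketch: log episodes with Monitor, then read the *.monitor.json file back.
import gym
from baselines.bench import Monitor, load_results

# ".monitor.json" is appended automatically -> /tmp/experiment/run0.monitor.json
env = Monitor(gym.make('CartPole-v0'), '/tmp/experiment/run0')

for _ in range(5):
    ob = env.reset()
    done = False
    while not done:
        ob, rew, done, info = env.step(env.action_space.sample())
    print(info['episode'])   # {'r': episode return, 'l': episode length, 't': seconds since start}
env.close()

results = load_results('/tmp/experiment')
print(results['episode_rewards'], results['episode_lengths'])
```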

baselines/common/__init__.py

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
-
-
-
+from baselines.common.console_util import *
+from baselines.common.dataset import Dataset
+from baselines.common.math_util import *
 from baselines.common.misc_util import *
