Wrapper for OpenAI Retro envs for parallel execution
OpenAI's Gym Retro exposes an OpenAI Gym interface for deep reinforcement learning, but unfortunately its back end allows only one emulator instance per process. To get around this limitation, I wrote this class.

To use it, instantiate it like you would a normal Retro environment and then treat it exactly the same, except that now you can have multiple environments in a single Python process. Magic!
```python
import retrowrapper

if __name__ == "__main__":
    game = "SonicTheHedgehog-Genesis"
    state = "GreenHillZone.Act1"

    env1 = retrowrapper.RetroWrapper(game, state=state)
    env2 = retrowrapper.RetroWrapper(game, state=state)

    _obs = env1.reset()
    _obs = env2.reset()
    done = False
    while not done:
        action = env1.action_space.sample()
        _obs, _rew, done, _info = env1.step(action)
        env1.render()
        action = env2.action_space.sample()
        _obs, _rew, done, _info = env2.step(action)
        env2.render()
```
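One common way to sidestep a one-instance-per-process limit is to host each emulator in its own child process and proxy method calls to it over a pipe. The sketch below illustrates that pattern with a hypothetical `FakeEmulator` stand-in; it is an illustration of the general technique, not `retrowrapper`'s actual source.

```python
# Sketch: work around a "one instance per process" limit by running each
# instance in a child process and forwarding calls over a Pipe.
# FakeEmulator, _worker, and ProcessWrapper are illustrative names only.
import multiprocessing as mp


class FakeEmulator:
    """Stand-in for a library that allows only one instance per process."""
    _instantiated = False

    def __init__(self, game):
        if FakeEmulator._instantiated:
            raise RuntimeError("Only one emulator per process!")
        FakeEmulator._instantiated = True
        self.game = game

    def step(self, action):
        # Return a Gym-style (obs, reward, done, info) tuple.
        return ("obs", 0.0, False, {"action": action})


def _worker(conn, game):
    # Runs in the child process, so this is that process's ONE emulator.
    emu = FakeEmulator(game)
    while True:
        name, args = conn.recv()
        if name == "close":
            conn.close()
            break
        conn.send(getattr(emu, name)(*args))


class ProcessWrapper:
    """Proxy object: each instance owns a child process and a pipe to it."""

    def __init__(self, game):
        self._conn, child_conn = mp.Pipe()
        self._proc = mp.Process(target=_worker, args=(child_conn, game))
        self._proc.start()

    def step(self, action):
        self._conn.send(("step", (action,)))
        return self._conn.recv()

    def close(self):
        self._conn.send(("close", ()))
        self._proc.join()


if __name__ == "__main__":
    # Two "emulators" coexist because each lives in its own process.
    env1 = ProcessWrapper("Game-A")
    env2 = ProcessWrapper("Game-B")
    print(env1.step(1))
    print(env2.step(2))
    env1.close()
    env2.close()
```

The same idea extends to `reset`, `render`, and the rest of the Gym API by forwarding those method names through the pipe as well.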
Using a custom make function
Sometimes you will need a custom make function. For example, the OpenAI Retro Contest repository requires you to use its own make function rather than `retro.make`. In these cases, you can call `retrowrapper.set_retro_make()` to set the make function that the wrapper will use:
```python
import retrowrapper
from retro_contest.local import make

retrowrapper.set_retro_make(make)

env1 = retrowrapper.RetroWrapper(
    game='SonicTheHedgehog2-Genesis',
    state='MetropolisZone.Act1'
)
env2 = retrowrapper.RetroWrapper(
    game='SonicTheHedgehog2-Genesis',
    state='MetropolisZone.Act2'
)
```