neroRL

A library for Deep Reinforcement Learning (PPO) in PyTorch


Keywords
Deep, Reinforcement, Learning, PyTorch, Proximal, Policy, Optimization, PPO, Recurrent, Recurrence, LSTM, GRU
License
MIT
Install
pip install neroRL==0.0.1

Documentation

neroRL

neroRL is a PyTorch-based framework for Deep Reinforcement Learning, which I'm currently developing while pursuing my PhD in this field. It focuses on procedurally generated environments, while providing useful tools for experimenting with and analyzing trained behaviors. One core feature is support for recurrent policies.
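To illustrate what a recurrent policy looks like in PyTorch, here is a minimal sketch of a GRU-based actor-critic. This is not neroRL's actual API; the class, layer sizes, and shapes are assumptions chosen for illustration:

```python
import torch
import torch.nn as nn

# Illustrative sketch of a recurrent actor-critic policy (GRU-based).
# Names, dimensions, and structure are assumptions, not neroRL's API.
class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, action_dim)
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, obs, hidden):
        # obs: (batch, seq_len, obs_dim); hidden: (1, batch, hidden_dim)
        x = torch.relu(self.encoder(obs))
        x, hidden = self.gru(x, hidden)
        logits = self.policy_head(x)   # action logits per timestep
        value = self.value_head(x)     # state-value estimate per timestep
        return logits, value, hidden

policy = RecurrentPolicy(obs_dim=8, action_dim=4)
obs = torch.zeros(2, 5, 8)      # batch of 2 sequences, length 5
h0 = torch.zeros(1, 2, 128)     # initial recurrent state
logits, value, h1 = policy(obs, h0)
print(logits.shape, value.shape, h1.shape)
```

The recurrent state `hidden` is carried between timesteps, which is what lets the policy handle partial observability, e.g. in procedurally generated levels where a single frame is not enough to act on.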

Features

Obstacle Tower Challenge

Originally, this work started out by achieving 7th place in the Obstacle Tower Challenge using a relatively simple FFCNN. This video presents some footage of the approach and the trained behavior:

Rising to the Obstacle Tower Challenge

Recently, we published a paper at CoG 2020 (best paper candidate) that analyzes this approach. Additionally, the model was trained on 3 level designs and evaluated on the two held-out ones.

Getting Started

To get started check out the docs!