Build Unity environment binaries for SLM-Lab and release on npm for easy distribution.
To use a prebuilt environment, just add its npm package, e.g. `yarn add slm-lab-unitywalker-v0`.
Building a binary requires 3 things:
- the Unity editor, installed via Unity Hub. Go to `Unity Hub > Installs > Editor > Add Modules > Linux Build Support` to enable Linux builds.
- the ml-agents repo with the environment's Unity assets: `git clone https://github.com/Unity-Technologies/ml-agents.git`
- this repo: `git clone https://github.com/kengz/SLM-Env.git`
Build a Unity Environment binary
The goal is to build macOS and Ubuntu binaries that can be used with ml-agents's gym API. Currently this also restricts us to non-vector environments. In this example we will use the Walker environment. We also recommend first going through the Unity Hub tutorial to gain basic familiarity with the editor.
Open the `ml-agents/UnitySDK` folder in the Unity editor.
In the Assets tab, find Walker under `ML-Agents > Examples > Walker > Scenes > Walker`. Hit the play button to preview it.
Make any necessary asset changes:
- to enable programmatic control, check `Control` in the Inspector tab.
- open the asset `Walker > Brains > WalkerLearning` and in the Inspector tab change `Vector Observation > Stacked Vectors` to 1. Also, click on `Model` and delete it so we don't include the pretrained TF weights.
- go to `Window > Rendering > Lighting Settings` and uncheck `Realtime Global Illumination` and `Baked Global Illumination`. This is to prevent Enlighten from being used and spawning too many threads on Linux.
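The `Stacked Vectors` setting concatenates the last k vector observations into a single agent input, so setting it to 1 feeds only the latest observation. A minimal sketch of the stacking behavior (observation sizes are illustrative, not Walker's actual dimensions):

```python
from collections import deque

def stack_observations(history, stacked_vectors):
    # Concatenate the most recent `stacked_vectors` observations,
    # mimicking Unity's Vector Observation > Stacked Vectors setting.
    recent = list(history)[-stacked_vectors:]
    return [x for obs in recent for x in obs]

history = deque(maxlen=3)
for obs in ([0.1, 0.2], [0.3, 0.4], [0.5, 0.6]):
    history.append(obs)

stacked = stack_observations(history, 3)  # 6-dim input (3 frames stacked)
latest = stack_observations(history, 1)   # 2-dim input, as set above
```

With `Stacked Vectors` at 1 the gym observation keeps its raw per-step dimension.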
Now we're ready to build the binaries. Go to `File > Build Settings`:
- hit `Add Open Scenes` and add your scene
- click `Player Settings` to show the Inspector tab. Check `Run in Background` and set `Display Resolution Dialog` to 'Disabled'. Optionally, set `Fullscreen Mode` to 'Windowed'.
- build one for Mac OS X: hit `Build and Run` to render immediately after building. Choose the directory `SLM-Env/build/` and use the name `UnityWalker-v0`.
- build one for Linux: hit `Build` and use the same directory and name.
Test the binary. First ensure you have the `gym_unity` pip package from ml-agents installed. Use the following script to run an example control loop:

```python
from gym_unity.envs import UnityEnv

env = UnityEnv('/Users/YOURNAME/SLM-Env/build/UnityWalker-v0', 0, multiagent=True)
state = env.reset()
for i in range(500):
    action = env.action_space.sample()
    state, reward, done, info = env.step(action)
env.close()
```
- git commit the binaries in `build/`, then push them.
- clone this repo under SLM Lab: `git clone https://github.com/kengz/SLM-Env.git ./slm_lab/env/SLM-Env`
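Since the Mac and Linux builds live side by side in `build/`, a small helper can pick the right artifact per platform. This is only a sketch; it assumes the builds were saved with Unity's default extensions (`.app` for Mac, `.x86_64` for Linux), and note that the test script above passes the bare name and lets gym_unity resolve the extension itself:

```python
import platform

def binary_path(build_dir, env_name, system=None):
    # Map OS to the Unity build artifact extension.
    # Assumption: builds use Unity's default extensions for each target.
    system = system or platform.system()
    suffix = {'Darwin': '.app', 'Linux': '.x86_64'}
    if system not in suffix:
        raise ValueError(f'no Unity build available for {system}')
    return f'{build_dir}/{env_name}{suffix[system]}'

# e.g. binary_path('./slm_lab/env/SLM-Env/build', 'UnityWalker-v0')
```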