In 2021, the non-profit Farama Foundation took over Gym. They introduced new features and renamed the library to Gymnasium, where all future maintenance of OpenAI Gym now takes place.
Let's check it out!
Gym
OpenAI Gym is an open source Python library for developing reinforcement learning algorithms, providing a standard API for communication between algorithms and environments. Gym was soon widely adopted by the community for creating and training reinforcement learning agents in a wide variety of environments.
Gym provides a wide range of environments, including classic control problems, Atari games, and robotics. Gym also includes a set of tools for monitoring and evaluating the performance of reinforcement learning agents.
Hello Gym
Create a Gym environment, e.g. Cart Pole. Launch PyCharm | New Project | Name: HelloCartPoleGym [~/]

source .venv/bin/activate
pip install --upgrade pip
pip install gym
pip install gym[classic_control]
CODE

import gym

env = gym.make("CartPole-v1", render_mode="human")
observation, info = env.reset(seed=42)

for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()
Gymnasium
The main problem with Gym was the lack of maintenance: OpenAI did not allocate substantial resources for the development of Gym since its inception seven years earlier and, by 2020, it simply was not maintained.
Gymnasium is a maintained fork of the OpenAI Gym library. The Gymnasium interface is simple, pythonic, and capable of representing general RL problems with a compatibility wrapper for old Gym environments:
Hello Gymnasium
Create a Gymnasium environment, e.g. Cart Pole. Launch PyCharm | New Project | HelloCartPoleGymnasium

source .venv/bin/activate
pip install --upgrade pip
pip install gymnasium
pip install gymnasium[classic-control]
CODE

import gymnasium as gym

env = gym.make("CartPole-v1", render_mode="human")
observation, info = env.reset(seed=42)

for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()
Differences
While both Gym and Gymnasium are powerful tools for creating reinforcement learning environments, it makes more sense to prefer Gymnasium going forward, as Gym will receive no future support. Note: the old import gym convention in documentation is replaced by import gymnasium as gym going forward.
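In practice the modern API shape is what matters when porting code: reset returns (observation, info) and step returns five values, splitting the old done flag into terminated (task-defined ending) and truncated (time-limit cutoff). As a minimal sketch, a hypothetical stub environment (no Gym or Gymnasium install required) shows the episode loop against that signature:

```python
import random

class StubEnv:
    """Hypothetical stand-in for a Gymnasium environment: same API shape."""

    def __init__(self, max_steps=5):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self, seed=None):
        # Gymnasium-style reset: returns (observation, info)
        if seed is not None:
            random.seed(seed)
        self.steps = 0
        return 0.0, {}

    def step(self, action):
        # Gymnasium-style step: (obs, reward, terminated, truncated, info)
        self.steps += 1
        observation = random.random()
        reward = 1.0
        terminated = False                        # task-defined end condition
        truncated = self.steps >= self.max_steps  # time-limit cutoff
        return observation, reward, terminated, truncated, {}

env = StubEnv()
observation, info = env.reset(seed=42)
total_reward = 0.0
for _ in range(20):
    observation, reward, terminated, truncated, info = env.step(None)
    total_reward += reward
    if terminated or truncated:
        observation, info = env.reset()
```

The loop body is identical to the real CartPole examples above; only the environment is swapped for a stub, which is why the same driver code runs against any Gymnasium environment.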
Farama Examples
Complete all Farama examples! Create an OpenAI test repo. Launch PyCharm | New Project | FarmaCheatSheet
IMPORTANT
If the Python Interpreter is not set and/or a Virtual Environment is not available, then choose File | Settings... | Project: Python Interpreter | Add Interpreter | Add Local Interpreter. Launch terminal: source .venv/bin/activate.
Copy the official Gymnasium requirements.txt from the Farama Foundation GitHub repository docs directory:
pip install --upgrade pip
pip install -r requirements.txt
Classic Control
There are five Classic Control environments, which are stochastic in terms of initial state within a given range.
pip install gymnasium

Acrobot-v1 | CartPole-v1 | MountainCar-v0 | Pendulum-v1
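That stochastic initial state simply means each reset samples the starting state uniformly from a small range. As a sketch of the idea (assuming CartPole's documented reset range of roughly ±0.05 for each of its four state variables; pure stdlib, no install needed):

```python
import random

# CartPole-style reset: each of the four state variables (cart position,
# cart velocity, pole angle, pole angular velocity) is drawn uniformly
# from a small range around zero -- here the documented [-0.05, 0.05].
rng = random.Random(42)
state = [rng.uniform(-0.05, 0.05) for _ in range(4)]
```

Resetting with the same seed reproduces the same initial state, which is why the examples above pass seed=42 to env.reset for repeatable runs.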
Toy Text
Toy Text environments are designed to be extremely simple, with small discrete state and action spaces, and hence easy to learn. They are suitable for debugging implementations of reinforcement learning algorithms.
pip install gymnasium

Blackjack-v1 | CliffWalking-v0 | FrozenLake-v1 | Taxi-v3
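Because the state and action spaces are small and discrete, a complete tabular Q-learning agent fits in a few lines, which is exactly what makes these environments good for debugging. A minimal sketch on a hypothetical 1-D corridor (states 0..4, actions left/right, reward on reaching the goal state) using the standard update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)):

```python
import random

N_STATES, N_ACTIONS = 5, 2          # tiny corridor: action 0=left, 1=right
GOAL = N_STATES - 1
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
rng = random.Random(0)

def step(state, action):
    """Hypothetical corridor dynamics: deterministic moves, reward at goal."""
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    done = next_state == GOAL
    return next_state, reward, done

for _ in range(200):                 # episodes
    state, done = 0, False
    while not done:
        if rng.random() < epsilon:   # epsilon-greedy exploration
            action = rng.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Standard tabular Q-learning update
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

greedy_action_at_start = max(range(N_ACTIONS), key=lambda a: Q[0][a])
```

After training, the greedy policy at every state is "move right", and the same loop structure transfers directly to FrozenLake-v1 or Taxi-v3 by swapping the stub step function for env.step.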
Box2D
Box2D environments involve toy games based around Box2D physics control and PyGame-based rendering.
pip install gymnasium
pip install Box2D

BipedalWalker-v3 | CarRacing-v2 | LunarLander-v2
MuJoCo
MuJoCo [Multi-Joint dynamics with Contact] is a physics engine that facilitates research and development in robotics, biomechanics, graphics and animation, and other areas where fast and accurate simulation is needed.
pip install gymnasium
pip install mujoco==2.3.0

Ant-v4 | Humanoid-v4 | InvertedPendulum-v4 | Swimmer-v4
HalfCheetah-v4 | HumanoidStandup-v4 | Pusher-v4 | Walker2d-v4
Hopper-v4 | InvertedDoublePendulum-v4 | Reacher-v4
Atari
Atari environments are simulated via the Arcade Learning Environment (ALE) through the Stella emulator.
Pull Request
Finally, submit a Pull Request: upload the Farama examples. Navigate to the source Gymnasium repo | Create new fork
Git clone the forked Gymnasium source code repository and set up the following Git user name and user email:
git config --global --get user.name
git config --global --get user.email

git config --global user.name "SteveProXNA"
git config --global user.email "steven_boland@hotmail.com"
Next, configure the Gymnasium upstream destination code repository in order to complete the Pull Request:
git remote -v
git remote add upstream git@github.com:Farama-Foundation/Gymnasium.git

origin    git@github.com:SteveProXNA/Gymnasium.git (fetch)
origin    git@github.com:SteveProXNA/Gymnasium.git (push)
upstream  git@github.com:Farama-Foundation/Gymnasium.git (fetch)
upstream  git@github.com:Farama-Foundation/Gymnasium.git (push)
Launch PyCharm | Open Gymnasium | Create virtual environment | Install requirements as pre-requisites:
pip install --upgrade pip
pip install -r docs/requirements.txt
Uncomment a game from the ClassicControl sub-directory files.txt or the ToyText files.txt. Open main.py. Run the script:
Install the Box2D and MuJoCo dependencies. Uncomment a game from the Box2D files.txt or the MuJoCo files.txt. Run main.py:
Box2D: pip install Box2D
MuJoCo: pip install mujoco==2.3.0
Finally, in order to integrate the Atari game examples, I found this only worked when cut from tag v0.29.1 [81b87ef]:
git fetch --tags --prune --prune-tags
git fetch --all
git tag | grep v0.29.1
git checkout tags/v0.29.1

pip install gymnasium[atari]
pip install gymnasium[accept-rom-license]
Retrospectives
In the Jan-2020 Retrospective XI, we learned about an interesting development connecting Artificial Intelligence and retro video games through Reinforcement Learning. In the Jan-2021 Retrospective XII, we then discussed video games as an interesting field of study, and retro game development of Pacman and Doom clones as an excellent avenue to pursue for future applications of Artificial Intelligence and RL in software.
Consequently, in the previous post we discovered how to debug and step through cloned versions of Doom to better understand the original source code: an important pre-requisite for further research, in reinforcement learning in particular. For example, ViZDoom allows developing AI bots that play Doom using visual data. In 2022, ViZDoom joined the Farama Foundation, featuring new Gymnasium environment wrappers.
Summary
To summarize, we have now set up Gymnasium environments for Classic Control, Toy Text, Box2D, MuJoCo and Atari. NB: all Atari environments leverage the Arcade Learning Environment [ALE] via the Stella emulator.
In fact, returning to OpenAI: it also hosts the Retro Git source code repository, which includes emulated systems like Atari. However, Retro also includes many other emulated systems we have developed for in the past, including the Nintendo GameBoy / GameBoy Color, the Sega MegaDrive and, of course, the Sega Master System! This will be the topic of the next post.