Training Quadruped Locomotion using Reinforcement Learning in Mujoco

A custom gymnasium environment for training quadruped locomotion using reinforcement learning in the Mujoco simulator. The environment has been set up for the Unitree Go1 robot; however, it can easily be extended to train other robots as well.

Two MJCF models are provided for the Go1 robot: one tuned for position control with a proportional controller, and one that directly takes in torque values for end-to-end training.
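
To make the difference between the two models concrete: the position-control model expects target joint angles, which a proportional controller turns into joint torques, while the torque-control model applies the policy action to the joints directly. Below is a minimal sketch of such a proportional law; the gain value is purely illustrative and is not the gain used in the provided Go1 MJCF files.

import numpy as np

def proportional_torque(q_target, q_measured, kp=60.0):
    # Proportional (P) position controller sketch: turn the joint position
    # error into a joint torque. kp = 60.0 is an illustrative gain, not the
    # value tuned in the position-control MJCF model.
    return kp * (np.asarray(q_target) - np.asarray(q_measured))

# With the torque-control MJCF model there is no such intermediate step:
# the policy output is applied to the joints as torque commands directly.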

Trained Model with Motor Torque Actions

trained_go1_torque_ctrl.mp4

Trained Model with Position Actions and a Proportional Controller

trained_go1_position_ctrl.mp4

Setup

python -m pip install -r requirements.txt
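
Optionally, you can install the dependencies into a virtual environment first (standard Python 3 tooling, shown here for a Windows shell to match the example paths below; on Linux/macOS activate with source .venv/bin/activate):

python -m venv .venv
.venv\Scripts\activate
python -m pip install -r requirements.txt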

Train

python train.py --run train
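
The command above runs the full training pipeline. As a rough sketch of what such a run might look like, assuming the project trains with Stable-Baselines3 (suggested by the .zip model files it saves) on a Gymnasium environment, the snippet below uses the built-in Ant quadruped as a stand-in for the custom Go1 environment; the algorithm choice and hyperparameters are illustrative only, not the actual settings in train.py.

import gymnasium as gym
from stable_baselines3 import PPO

# Stand-in environment: the repository registers its own Go1 environment,
# which is what train.py actually uses.
env = gym.make("Ant-v4")

# PPO is one common on-policy choice; the real script may use a different
# algorithm and different hyperparameters.
model = PPO("MlpPolicy", env, seed=0, verbose=1)
model.learn(total_timesteps=1_000_000)

# Saving produces a .zip archive, like the pretrained runs under models/.
model.save("models/my_run/final_model")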

Displaying Trained Models

python train.py --run test --model_path <path to model zip file>

For example, to run a pretrained model that outputs motor torques and has the robot's desired velocity set to <x=1, y=0>, you can run:

python train.py --run test --model_path .\models\2024-04-16_10-11-57-x=1_torque_ctrl_fixed_joint_range_5mill_iter_working_well\final_model.zip
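
Under the same assumption that the saved models are Stable-Baselines3 .zip archives, testing boils down to loading the archive and rolling the policy out in a matching environment. The sketch below is illustrative; the path is a placeholder and the environment must be the same one the model was trained on.

import gymnasium as gym
from stable_baselines3 import PPO

# Placeholder path: substitute the final_model.zip of one of your runs.
model = PPO.load("models/<run_name>/final_model.zip")

# The environment must match the one used for training (the built-in Ant
# quadruped is shown here only as a stand-in for the Go1 environment).
env = gym.make("Ant-v4", render_mode="human")

obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()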

Note

Some of the trained models in the models/ directory were trained on an older version of the environment (with a different observation space); as a result, they may not work with the latest code on main. For the above example to work as expected, you could try running it from the following commit: 4756dee0ffe7b4e5e78a60195c47d5427998b2a6
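
To do so, you can check out that commit before running the test command above:

git checkout 4756dee0ffe7b4e5e78a60195c47d5427998b2a6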

Additional arguments for customizing training and testing
usage: train.py [-h] --run {train,test} [--run_name RUN_NAME] [--num_parallel_envs NUM_PARALLEL_ENVS]
                [--num_test_episodes NUM_TEST_EPISODES] [--record_test_episodes] [--total_timesteps TOTAL_TIMESTEPS]
                [--eval_frequency EVAL_FREQUENCY] [--model_path MODEL_PATH] [--seed SEED]

optional arguments:
  -h, --help            show this help message and exit
  --run {train,test}
  --run_name RUN_NAME   Custom name of the run. Note that all runs are saved in the 'models' directory and have the
                        training time prefixed.
  --num_parallel_envs NUM_PARALLEL_ENVS
                        Number of parallel environments while training
  --num_test_episodes NUM_TEST_EPISODES
                        Number of episodes to test the model
  --record_test_episodes
                        Whether to record the test episodes or not. If false, the episodes are rendered in the window.
  --total_timesteps TOTAL_TIMESTEPS
                        Number of timesteps to train the model for
  --eval_frequency EVAL_FREQUENCY
                        The frequency of evaluating the models while training
  --model_path MODEL_PATH
                        Path to the model (.zip)
  --seed SEED
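
For example, a training run with a custom name, more parallel environments, a longer budget, and a fixed seed could be launched as follows (the values are illustrative only):

python train.py --run train --run_name go1_torque_experiment --num_parallel_envs 8 --total_timesteps 5000000 --seed 42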

Note

This repository serves educational purposes and is by no means finalized!