DDPG-Pytorch

A clean PyTorch implementation of DDPG on continuous action spaces. Here are the results (all experiments were trained with the same hyperparameters):

[Result curves: Pendulum | LunarLanderContinuous]

Note that DDPG is notoriously sensitive to hyperparameters and can therefore be unstable. We strongly recommend its refinement, TD3. Other PyTorch implementations of RL algorithms can be found here.
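A key stabilizing ingredient in DDPG (kept in TD3) is the target network, updated by Polyak averaging rather than copied outright. A minimal, framework-agnostic sketch of that update (the function name and list-of-floats representation are illustrative, not taken from this repo):

```python
def soft_update(target_params, online_params, tau=0.005):
    """Blend online parameters into target parameters in place:
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    for i, (t, o) in enumerate(zip(target_params, online_params)):
        target_params[i] = tau * o + (1 - tau) * t
    return target_params
```

With the default tau=0.005, the target network trails the online network slowly, which damps the feedback loop between the critic's bootstrapped targets and its own estimates.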

Dependencies

gymnasium==0.29.1
numpy==1.26.1
pytorch==2.1.0

python==3.11.5

How to use my code

Train from scratch

python main.py

where the default environment is 'Pendulum'.

Play with trained model

python main.py --EnvIdex 0 --render True --Loadmodel True --ModelIdex 100

which renders the trained model on 'Pendulum'.

Change Environment

If you want to train on a different environment, just run

python main.py --EnvIdex 1

The --EnvIdex flag can be set to 0~5, where

'--EnvIdex 0' for 'Pendulum-v1'  
'--EnvIdex 1' for 'LunarLanderContinuous-v2'  
'--EnvIdex 2' for 'Humanoid-v4'  
'--EnvIdex 3' for 'HalfCheetah-v4'  
'--EnvIdex 4' for 'BipedalWalker-v3'  
'--EnvIdex 5' for 'BipedalWalkerHardcore-v3' 
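The index-to-environment mapping above could be expressed as a simple lookup; this is a hypothetical sketch for illustration (the actual selection logic lives in main.py), with the environment names taken from the list above:

```python
# Environment ids for --EnvIdex 0~5, as listed in this README.
ENV_NAMES = [
    'Pendulum-v1',               # 0
    'LunarLanderContinuous-v2',  # 1
    'Humanoid-v4',               # 2
    'HalfCheetah-v4',            # 3
    'BipedalWalker-v3',          # 4
    'BipedalWalkerHardcore-v3',  # 5
]

def env_name(env_idex: int) -> str:
    """Return the gymnasium environment id for a given index (0~5)."""
    if not 0 <= env_idex < len(ENV_NAMES):
        raise ValueError(f"EnvIdex must be 0~{len(ENV_NAMES) - 1}, got {env_idex}")
    return ENV_NAMES[env_idex]
```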

Note: if you want to train on BipedalWalker, BipedalWalkerHardcore, or LunarLanderContinuous, you need to install box2d-py first. You can install box2d-py via:

pip install gymnasium[box2d]

If you want to train on Humanoid or HalfCheetah, you need to install MuJoCo first. You can install MuJoCo via:

pip install mujoco
pip install gymnasium[mujoco]

Visualize the training curve

You can use TensorBoard to record and visualize the training curves.

  • Installation (please make sure PyTorch is installed already):
pip install tensorboard
pip install packaging
  • Record (the training curves will be saved in the 'runs' folder):
python main.py --write True
  • Visualization:
tensorboard --logdir runs

Hyperparameter Setting

For details of the hyperparameter settings, please check 'main.py'.
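The command-line flags used above can be sketched with argparse. Only --EnvIdex, --render, --Loadmodel, --ModelIdex, and --write appear in this README; the defaults and the string-to-bool parsing below are illustrative assumptions, not the repo's actual code:

```python
import argparse

def build_parser():
    """Hypothetical sketch of the flags main.py exposes (defaults assumed)."""
    str2bool = lambda s: s == 'True'  # so that `--render True` parses as a bool
    p = argparse.ArgumentParser(description='DDPG-Pytorch options (sketch)')
    p.add_argument('--EnvIdex', type=int, default=0, help='0~5, see README')
    p.add_argument('--render', type=str2bool, default=False, help='render the env')
    p.add_argument('--Loadmodel', type=str2bool, default=False, help='load a saved model')
    p.add_argument('--ModelIdex', type=int, default=100, help='which checkpoint to load')
    p.add_argument('--write', type=str2bool, default=False, help='log to TensorBoard')
    return p
```

Booleans are parsed from the literal strings 'True'/'False', matching the `--render True` style shown in the commands above.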

Reference

DDPG: Lillicrap T P, Hunt J J, Pritzel A, et al. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
