Note: this repository does not declare an open-source license (LICENSE) file; check the project description and its upstream code dependencies before use.
PPO-PyTorch

This repository provides a minimal PyTorch implementation of Proximal Policy Optimization (PPO) with a clipped objective for OpenAI Gym environments. It is primarily intended for beginners in reinforcement learning who want to understand the PPO algorithm. It can still be used for complex environments, but may require some hyperparameter tuning or changes to the code.

Modified from https://github.com/tangyudi/Ai-Learn
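The clipped surrogate objective at the core of PPO can be sketched in a few lines of PyTorch. This is a hypothetical standalone helper for illustration, not code from this repository, which folds the same computation into a larger update step:

```python
import torch

def ppo_clipped_loss(new_logprobs, old_logprobs, advantages, eps=0.2):
    # Probability ratio r_t = pi_theta(a|s) / pi_theta_old(a|s),
    # computed in log space for numerical stability.
    ratios = torch.exp(new_logprobs - old_logprobs)
    surr1 = ratios * advantages
    # Clipping the ratio to [1 - eps, 1 + eps] removes the incentive
    # to move the policy far from the old policy in a single update.
    surr2 = torch.clamp(ratios, 1.0 - eps, 1.0 + eps) * advantages
    # PPO maximizes the elementwise minimum of the two surrogates;
    # return the negation so it can be minimized with an optimizer.
    return -torch.min(surr1, surr2).mean()
```

With identical old and new log-probabilities the ratio is 1, so the loss reduces to the negated mean advantage; once the ratio leaves the clip range, the clipped surrogate caps the gradient signal.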

Usage

  • To train a new network in a continuous action space environment: run PPO_continuous.py
  • To train a new network in a discrete action space environment: run PPO.py
  • To test a trained network in a continuous action space environment: run test_continuous.py
  • To test a trained network in a discrete action space environment: run test.py
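Both training scripts follow the usual PPO recipe: roll out the current policy, compute discounted rewards-to-go over the collected trajectories, then run several epochs of clipped policy updates. A minimal sketch of the return computation (a hypothetical helper, not code from this repository):

```python
def discounted_returns(rewards, dones, gamma=0.99):
    """Monte Carlo rewards-to-go, resetting at episode boundaries."""
    returns = []
    G = 0.0
    # Walk the buffer backwards so each return accumulates future rewards.
    for r, done in zip(reversed(rewards), reversed(dones)):
        if done:          # episode boundary: restart the running return
            G = 0.0
        G = r + gamma * G
        returns.insert(0, G)
    return returns
```

In practice these returns are normalized and compared against a value-function baseline to form the advantages fed into the clipped objective.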

Dependencies

Trained and tested on:

```
gym==0.19.0
pyglet==1.5.27
box2d
box2d-kengz
gym[box2d]
torch==2.0.1+cu117
```
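Assuming pip and a CUDA 11.7-capable machine, the pinned versions above can be installed roughly as follows (the second command uses PyTorch's standard wheel index for CUDA builds; adjust the URL and version suffix for your CUDA setup):

```shell
pip install gym==0.19.0 pyglet==1.5.27 box2d box2d-kengz "gym[box2d]"
pip install torch==2.0.1+cu117 --index-url https://download.pytorch.org/whl/cu117
```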

If you still run into problems, check requirement.txt.

