Thought-SC2

Introduction

This is the code for Thought-Game (TG) and the model experiments (implementing the WorldModel) on it.

It corresponds to the paper "Efficient Reinforcement Learning for StarCraft by Abstract Forward Models and Transfer Learning".

It also contains a script, in "train_level10_eval_mini_srcgame_add_map_bn", to train an agent against the level-10 built-in bot.

Contents

The table below shows the directories for the contents of the project. We provide code for TG-SC2, TG-Zerg, TG-Terran, TG-SC1, TG-Hypothesis, and WorldModel-SC2.

Directory                 Content
All except below          Code for TG-SC2 and WorldModel-SC2
TG-Hypothesis             Code for different fidelity levels of TG
TG-SC1                    Code for running on SC1 with TorchCraft
TG-Zerg and TG-Terran     Code for the Zerg race and the Terran race

Requirements

  • python==3.5
  • tensorflow==1.5
  • future==0.16
  • pysc2==1.2
  • matplotlib==2.1
  • scipy==1.0

Notes:

If you install pysc2==1.2 and see the error "futures requires Python '>=2.6, <3' but the running Python is 3.5.6", first install futures:

pip install futures==3.1.1

then install pysc2==1.2; this resolves the problem.
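Putting the requirements and the note above together, a possible install sequence looks like the following. The package versions come from this README; the exact ordering and pip invocations are an assumption, not commands from the repository:

```shell
# Work around the pysc2==1.2 dependency error on Python 3.5 (see note above).
pip install futures==3.1.1

# Remaining requirements, versions as listed in this README.
pip install tensorflow==1.5 future==0.16 matplotlib==2.1 scipy==1.0

# Install pysc2 last, after futures is already present.
pip install pysc2==1.2
```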

Usage

Run eval_mini_srcgame.py to train an agent (P vs. T) in StarCraft II. See eval_mini_srcgame.py for more parameters.

Run testing

python eval_mini_srcgame.py --on_server=False 
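For training rather than testing, the same script can be launched with training-oriented flag values. The combination below is an illustrative assumption based on the parameter descriptions in this README, not a command taken from the repository:

```shell
# Illustrative training run: distributed server mode, training speed.
python eval_mini_srcgame.py --on_server=True --training=True --step_mul=8
```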

Important Parameters

--training:         Whether to train an agent.
--restore_model:    Whether to restore an old model.
--on_server:        Set to true to train on a server in a distributed setting.
--map:              Name of the map to use. Default is Simple64.
--agent_race:       Agent's race. Default is P.
--bot_race:         Bot's race. Default is T.
--difficulty:       Bot's strength. Default is 7.
--port_num:         Port number for running SC2.
--max_iters:        Number of iterations for training.
--step_mul:         Game speed. Set to 1 while testing and 8 while training.
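The flag set above can be sketched as a small command-line interface. This is a hypothetical reconstruction for illustration only (the repository may define its flags differently, e.g. with absl); the defaults for --port_num and --max_iters are assumptions:

```python
import argparse

def str2bool(s):
    # Accept "True"/"False"-style values, as in "--on_server=False".
    return str(s).lower() in ("true", "1", "yes")

def build_parser():
    # Hypothetical sketch of the flags described in this README.
    p = argparse.ArgumentParser(
        description="Train an agent (P vs. T) in StarCraft II.")
    p.add_argument("--training", type=str2bool, default=True,
                   help="Whether to train an agent.")
    p.add_argument("--restore_model", type=str2bool, default=False,
                   help="Whether to restore an old model.")
    p.add_argument("--on_server", type=str2bool, default=False,
                   help="Set to true for distributed training on a server.")
    p.add_argument("--map", default="Simple64", help="Name of the map to use.")
    p.add_argument("--agent_race", default="P", help="Agent's race.")
    p.add_argument("--bot_race", default="T", help="Bot's race.")
    p.add_argument("--difficulty", type=int, default=7, help="Bot's strength.")
    p.add_argument("--port_num", type=int, default=None,
                   help="Port number for running SC2 (default is an assumption).")
    p.add_argument("--max_iters", type=int, default=100,
                   help="Number of training iterations (default is an assumption).")
    p.add_argument("--step_mul", type=int, default=8,
                   help="Game speed: 1 while testing, 8 while training.")
    return p

args = build_parser().parse_args(["--on_server=False", "--step_mul=1"])
print(args.map, args.on_server, args.step_mul)  # prints: Simple64 False 1
```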
License

MIT License. Copyright (c) 2018 Yusuke Nakata.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
