This is an implementation of the AlphaZero algorithm for playing the simple board game Gomoku (also called Gobang or Five in a Row) from pure self-play training. Gomoku is much simpler than Go or chess, so we can focus on the AlphaZero training scheme and obtain a reasonably strong AI model on a single PC within a few hours.
References:
To play with the trained AI models, you only need:
To train the AI model from scratch, you additionally need either:
PS: if your Theano version is above 0.7, please follow this issue to install Lasagne;
otherwise, force pip to downgrade Theano to 0.7: pip install --upgrade theano==0.7.0
If you would like to train the model using other DL frameworks, you only need to rewrite policy_value_net.py.
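To illustrate what such a rewrite has to provide, here is a minimal sketch of the interface that the training code expects from a policy-value network: a policy_value_fn that maps a board state to (action, probability) pairs plus a scalar value estimate, and a train_step that performs one update and reports loss and entropy. The class and the stand-in behavior below are hypothetical; a real port would back these methods with an actual network in your chosen framework.

```python
class UniformPolicyValueNet:
    """Hypothetical stand-in 'network': uniform move probabilities, value 0.

    A real PolicyValueNet replaces the method bodies with forward and
    backward passes in the chosen DL framework.
    """

    def policy_value_fn(self, board):
        # board.availables: the list of legal move indices
        moves = board.availables
        probs = [(m, 1.0 / len(moves)) for m in moves]
        return probs, 0.0  # (action, prob) pairs and the state-value estimate

    def train_step(self, state_batch, mcts_probs, winner_batch, lr):
        # A real implementation runs one optimizer step and returns
        # (loss, policy_entropy); here we return dummy values.
        return 0.0, 0.0


class _FakeBoard:
    """Tiny stand-in for the repo's Board class, for demonstration only."""
    availables = [0, 1, 2, 3]


net = UniformPolicyValueNet()
action_probs, value = net.policy_value_fn(_FakeBoard())
```

As long as a port keeps this contract, train.py should not need further changes beyond the import switch described below.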
To play with the provided models, run the following script from the project directory:
python human_play.py
You may modify human_play.py to try different provided models or the pure MCTS.
To train the AI model from scratch, with Theano and Lasagne, directly run:
python train.py
With PyTorch or TensorFlow, first modify train.py: comment out the line
from policy_value_net import PolicyValueNet # Theano and Lasagne
and uncomment the line
# from policy_value_net_pytorch import PolicyValueNet # Pytorch
or
# from policy_value_net_tensorflow import PolicyValueNet # Tensorflow
and then execute: python train.py
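After that edit, the import block at the top of train.py would look roughly like this (a sketch of the switch to PyTorch, not the verbatim file):

```python
# from policy_value_net import PolicyValueNet            # Theano and Lasagne
from policy_value_net_pytorch import PolicyValueNet      # PyTorch
# from policy_value_net_tensorflow import PolicyValueNet # TensorFlow
```

Only one of the three imports should be active at a time; the rest of train.py uses the PolicyValueNet name unchanged.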
(To use the GPU with PyTorch, set use_gpu=True and, if your PyTorch version is greater than 0.5, use return loss.item(), entropy.item() in the train_step function of policy_value_net_pytorch.py.)
The models (best_policy.model and current_policy.model) will be saved every few updates (every 50 by default).
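The save cadence can be sketched as follows; the variable name check_freq and the iteration count are illustrative, not taken verbatim from the repo:

```python
check_freq = 50  # save the models every 50 self-play updates (the default)

saved_at = []
for i in range(1, 201):  # pretend we run 200 self-play training iterations
    if i % check_freq == 0:
        # in the real code this is where current_policy.model (and, when the
        # evaluation improves, best_policy.model) would be written to disk
        saved_at.append(i)
```

So with the defaults, checkpoints land at iterations 50, 100, 150, and so on.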
Note: the four provided models were trained with Theano/Lasagne; to use them with PyTorch, please refer to issue 5.
Tips for training:
My article describing some details about the implementation in Chinese: https://zhuanlan.zhihu.com/p/32089487