FinRL is an open-source library for practitioners. To efficiently automate trading, AI4Finance provides this educational resource to make it easier to learn about deep reinforcement learning (DRL) in quantitative finance.
Reinforcement learning (RL) trains an agent to solve tasks by trial and error, while DRL combines RL with deep learning. DRL balances exploration (of uncharted territory) and exploitation (of current knowledge), and has been recognized as an advantageous approach for automated trading. The DRL framework is powerful in solving dynamic decision-making problems by learning through interaction with an unknown environment, and thus provides two major advantages: portfolio scalability and market-model independence. In quantitative finance, automated trading is essentially dynamic decision making, namely deciding where to trade, at what price, and in what quantity, in a highly stochastic and complex stock market. Taking many complex financial factors into account, DRL trading agents build a multi-factor model and provide algorithmic trading strategies, which are difficult for human traders.
FinRL provides a framework that supports various markets, SOTA DRL algorithms, benchmarks of many quant finance tasks, live trading, etc.
Want to contribute? Please check the call for contributions at the end of this page.
Feel free to leave us feedback: report bugs using GitHub issues or discuss FinRL development in the Slack channel.
We published the following papers, which led to this project:
4). FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, Deep RL Workshop, NeurIPS 2020.
3). Deep Reinforcement Learning for Automated Stock Trading: An Ensemble Strategy, paper and codes, ACM International Conference on AI in Finance, ICAIF 2020.
2). Multi-agent Reinforcement Learning for Liquidation Strategy Analysis, paper and codes. Workshop on Applications and Infrastructure for Multi-Agent Learning, ICML 2019.
1). Practical Deep Reinforcement Learning Approach for Stock Trading, paper and codes, Workshop on Challenges and Opportunities for AI in Financial Services, NeurIPS 2018.
As deep reinforcement learning (DRL) has been recognized as an effective approach in quantitative finance, getting hands-on experience is attractive to beginners. However, training a practical DRL trading agent that decides where to trade, at what price, and in what quantity involves error-prone and arduous development and debugging.
We introduce a DRL library, FinRL, that helps beginners gain exposure to quantitative finance and develop their own stock trading strategies. Along with easily reproducible tutorials, the FinRL library allows users to streamline their own developments and to compare easily with existing schemes. Within FinRL, virtual environments are configured with stock market datasets, trading agents are trained with neural networks, and trading performance is analyzed via extensive backtesting. Moreover, it incorporates important trading constraints such as transaction costs, market liquidity, and the investor's degree of risk aversion.
FinRL features completeness, hands-on tutorials, and reproducibility that favor beginners: (i) at multiple levels of time granularity, FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300; (ii) organized in a layered architecture with a modular structure, FinRL provides fine-tuned state-of-the-art DRL algorithms (DQN, DDPG, PPO, SAC, A2C, TD3, etc.), commonly used reward functions, and standard evaluation baselines to alleviate debugging workloads and promote reproducibility; and (iii) being highly extendable, FinRL reserves a complete set of user-import interfaces.
Furthermore, we incorporated three application demonstrations, namely single stock trading, multiple stock trading, and portfolio allocation.
FinRL for Quantitative Finance: Tutorial for Single Stock Trading
FinRL for Quantitative Finance: Tutorial for Multiple Stock Trading
FinRL for Quantitative Finance: Tutorial for Portfolio Allocation
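These tutorials share a common pipeline: pull market data, wrap it in a gym-style trading environment, train a DRL agent, and evaluate the result through backtesting. The following is a minimal sketch of that pattern only; the toy environment below is not FinRL's own environment class (see the tutorials for those), the synthetic price series and reward definition are illustrative assumptions, and it assumes the gym-based Stable-Baselines3 1.x API.

```python
# Minimal sketch of the data -> environment -> agent pattern.
# NOT FinRL's StockTradingEnv: prices are a synthetic random walk and the
# reward is simply the change in account value at each step.
import gym
import numpy as np
from gym import spaces
from stable_baselines3 import PPO


class ToySingleStockEnv(gym.Env):
    """Hold / buy / sell one share of a single synthetic stock."""

    def __init__(self, n_steps=252, initial_cash=10_000.0):
        super().__init__()
        self.n_steps = n_steps
        self.initial_cash = initial_cash
        self.action_space = spaces.Discrete(3)        # 0 hold, 1 buy, 2 sell
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32
        )                                             # [price, cash, shares]

    def reset(self):
        self.t = 0
        self.prices = 100.0 + np.cumsum(np.random.normal(0, 1, self.n_steps))
        self.cash, self.shares = self.initial_cash, 0
        return self._obs()

    def _obs(self):
        return np.array([self.prices[self.t], self.cash, self.shares], dtype=np.float32)

    def step(self, action):
        price = self.prices[self.t]
        before = self.cash + self.shares * price
        if action == 1 and self.cash >= price:        # buy one share
            self.cash -= price
            self.shares += 1
        elif action == 2 and self.shares > 0:         # sell one share
            self.cash += price
            self.shares -= 1
        self.t += 1
        done = self.t >= self.n_steps - 1
        after = self.cash + self.shares * self.prices[self.t]
        reward = after - before                       # change in account value
        return self._obs(), reward, done, {}


env = ToySingleStockEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)                   # train the DRL agent
```

In FinRL itself, the environment is built from real market data with trading constraints, and the training and backtesting steps are handled by the library's own classes, as walked through in the tutorials above.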
Clone this repository
git clone https://github.com/AI4Finance-LLC/FinRL-Library.git
Install the unstable development version of FinRL:
pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
For OpenAI Baselines, you'll need system packages CMake, OpenMPI and zlib. Those can be installed as follows
sudo apt-get update && sudo apt-get install cmake libopenmpi-dev python3-dev zlib1g-dev libgl1-mesa-glx
Installation of system packages on Mac requires Homebrew. With Homebrew installed, run the following:
brew install cmake openmpi
To install stable-baselines on Windows, please look at the documentation.
cd into this repository
cd FinRL-Library
Under the folder /FinRL-Library, create a Python virtual environment:
pip install virtualenv
Virtualenvs are essentially folders that contain copies of the Python executable and all installed Python packages.
Virtualenvs also help avoid package conflicts.
Create a virtualenv venv under the folder /FinRL-Library:
virtualenv -p python3 venv
To activate a virtualenv:
source venv/bin/activate
The scripts have been tested under Python >= 3.6.0, with the following packages installed:
pip install -r requirements.txt
Stable-Baselines3 is a set of improved implementations of reinforcement learning algorithms in PyTorch. It is the next major version of Stable Baselines. If you have questions regarding the Stable-Baselines3 package, please refer to the Stable-Baselines3 installation guide. Install the Stable-Baselines3 package using pip:
pip install stable-baselines3[extra]
A migration guide from SB2 to SB3 can be found in the documentation.
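As a quick sanity check that Stable-Baselines3 installed correctly, you can train a small agent on a built-in Gym environment. This is only an installation check; the environment name and step count below are arbitrary choices, not part of FinRL.

```python
# Quick check that stable-baselines3 and gym are importable and working.
import gym
from stable_baselines3 import A2C

env = gym.make("CartPole-v1")
model = A2C("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=1_000)   # runs in a few seconds on CPU
```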
Still Under Development
python main.py --mode=train
Use Quantopian's pyfolio package to do the backtesting.
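For example, pyfolio can generate a tear sheet of backtest analytics from a daily returns series. In practice the returns come from the trained agent's account value over the trade period; the random series below is only a stand-in so this sketch runs on its own.

```python
# Sketch: feed a daily-returns series into pyfolio for backtest analytics.
# The synthetic returns here are a placeholder for the agent's actual returns.
import numpy as np
import pandas as pd
import pyfolio

dates = pd.date_range("2019-01-01", periods=252, freq="B")
daily_returns = pd.Series(np.random.normal(0.0005, 0.01, len(dates)), index=dates)

pyfolio.create_returns_tear_sheet(daily_returns)
```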
The stock data we use is pulled from the Yahoo Finance API.
(The following time line is used in the paper; users can update to new time windows.)
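As an illustration of how such data can be pulled and split into train/trade windows, here is a short sketch using the yfinance package directly. FinRL wraps this step in its own downloader; the ticker and date ranges below are placeholders, not the windows used in the paper.

```python
# Sketch: pull daily OHLCV data from Yahoo Finance and split it by date.
# Ticker and date windows are placeholders; adjust them to your own study.
import yfinance as yf

df = yf.download("AAPL", start="2009-01-01", end="2020-12-01")

train = df.loc["2009-01-01":"2018-12-31"]   # training window
trade = df.loc["2019-01-01":"2020-12-01"]   # out-of-sample trading window
print(train.shape, trade.shape)
```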
@article{finrl2020,
author = {Liu, Xiao-Yang and Yang, Hongyang and Chen, Qian and Zhang, Runjia and Yang, Liuqing and Xiao, Bowen and Wang, Christina Dan},
journal = {Deep RL Workshop, NeurIPS 2020},
title = {{FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance}},
url = {},
year = {2020}
}
We will maintain the open-source FinRL library for the "AI + finance" community and welcome you to join as a contributor!
We would like to support more asset markets, so that users can test their strategies.
We will continue to maintain a pool of DRL algorithms that can be treated as SOTA implementations.
To help quants make better evaluations, we maintain benchmarks for many trading tasks, upon which you can improve for your own tasks.
Supporting live trading can close the simulation-reality gap and enable quants to switch to the real market once they are confident in their strategies.