# DA-RNN

**Repository Path**: mirrors_Zhenye-Na/DA-RNN

## Basic Information

- **Project Name**: DA-RNN
- **Description**: PyTorch implementation of DA-RNN (arXiv:1704.02971)
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2022-01-07
- **Last Updated**: 2026-01-25

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# PyTorch Implementation of DA-RNN

[PRs Welcome](http://makeapullrequest.com) · [Issues](https://github.com/Zhenye-Na/DA-RNN/issues) · [Hits](http://hits.dwyl.io/Zhenye-Na/DA-RNN) · [Open in Colab](https://colab.research.google.com/github/Zhenye-Na/DA-RNN/blob/master/src/da_rnn.ipynb.py)

> *Get hands-on experience implementing an RNN (LSTM) in PyTorch;*
> *Get familiar with financial data in deep learning.*
## Table of Contents

- [Dataset](#dataset)
  - [Download](#download)
  - [Description](#description)
- [Usage](#usage)
  - [Train](#train)
- [Result](#result)
  - [Training Loss](#training-loss)
  - [Prediction](#prediction)
- [DA-RNN](#da-rnn)
  - [LSTM](#lstm)
  - [Attention Mechanism](#attention-mechanism)
  - [Model](#model)
- [Experiments and Parameters Settings](#experiments-and-parameters-settings)
  - [NASDAQ 100 Stock dataset](#nasdaq-100-stock-dataset)
  - [Training procedure & Parameters Settings](#training-procedure--parameters-settings)
- [References](#references)

## Dataset

### Download

[NASDAQ 100 stock data](http://cseweb.ucsd.edu/~yaq007/NASDAQ100_stock_data.html)

### Description

This dataset is a subset of the full `NASDAQ 100 stock dataset` used in [1]. It covers 105 trading days, from July 26, 2016 to December 22, 2016. Each day contains 390 data points, except for 210 data points on November 25 and 180 data points on December 22. Some of the corporations in the `NASDAQ 100` are excluded from this dataset because they have too much missing data. In total, 81 major corporations remain, and the missing data are filled in with linear interpolation. In [1], the first 35,100 data points are used as the training set, the following 2,730 data points as the validation set, and the last 2,730 data points as the test set.
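The preprocessing described above can be sketched in a few lines of pandas. This is a hypothetical illustration, not code from this repository: the column names are invented, and `split_nasdaq100` is an assumed helper name.

```python
# Hypothetical sketch of the preprocessing described above: linear
# interpolation of missing values, then the chronological
# 35,100 / 2,730 / 2,730 split used in [1].
import numpy as np
import pandas as pd


def split_nasdaq100(df: pd.DataFrame):
    """Fill gaps by linear interpolation, then split chronologically."""
    df = df.interpolate(method="linear", limit_direction="both")
    train = df.iloc[:35100]
    val = df.iloc[35100:35100 + 2730]
    test = df.iloc[35100 + 2730:35100 + 2 * 2730]
    return train, val, test


# Toy demonstration with synthetic data of the right total length
# (35,100 + 2,730 + 2,730 = 40,560 rows); "AAPL"/"NDX" are placeholders.
rng = np.random.default_rng(0)
toy = pd.DataFrame({"AAPL": rng.random(40560), "NDX": rng.random(40560)})
toy.iloc[5, 0] = np.nan  # inject a missing value to be interpolated
train, val, test = split_nasdaq100(toy)
print(len(train), len(val), len(test))  # 35100 2730 2730
```

Because the split is chronological, the validation and test sets always come strictly after the training period, which is the appropriate protocol for time series forecasting.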
## Usage

### Train

```
usage: main.py [-h] [--dataroot DATAROOT] [--batchsize BATCHSIZE]
               [--nhidden_encoder NHIDDEN_ENCODER]
               [--nhidden_decoder NHIDDEN_DECODER] [--ntimestep NTIMESTEP]
               [--epochs EPOCHS] [--lr LR]

PyTorch implementation of paper 'A Dual-Stage Attention-Based Recurrent
Neural Network for Time Series Prediction'

optional arguments:
  -h, --help            show this help message and exit
  --dataroot DATAROOT   path to dataset
  --batchsize BATCHSIZE
                        input batch size [128]
  --nhidden_encoder NHIDDEN_ENCODER
                        size of hidden states for the encoder m [64, 128]
  --nhidden_decoder NHIDDEN_DECODER
                        size of hidden states for the decoder p [64, 128]
  --ntimestep NTIMESTEP
                        the number of time steps in the window T [10]
  --epochs EPOCHS       number of epochs to train [10, 200, 500]
  --lr LR               learning rate [0.001] reduced by 0.1 after each
                        10000 iterations
```

An example of the training command is as follows:

```
python3 main.py --lr 0.0001 --epochs 50
```

## Result

### Training process
### Training Loss
### Prediction
## DA-RNN
In the paper [*"A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction"*](https://arxiv.org/pdf/1704.02971.pdf), the authors propose a novel dual-stage attention-based recurrent neural network (DA-RNN) for time series prediction. In the first stage, an input attention mechanism adaptively extracts the relevant driving series (a.k.a. input features) at each time step by referring to the previous encoder hidden state. In the second stage, a temporal attention mechanism selects the relevant encoder hidden states across all time steps.
For the objective, a squared loss is used. With these two attention mechanisms, the DA-RNN can adaptively select the most relevant input features and capture the long-term temporal dependencies of a time series. A graphical illustration of the proposed model is shown in Figure 1.
Figure 1: Graphical illustration of the dual-stage attention-based recurrent neural network.
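The two attention stages described above can be illustrated with a minimal NumPy sketch. This is *not* the repository's PyTorch code: the weights are random placeholders, and the stage-2 scores are stood in for by random values rather than the paper's learned attention network; only the shapes and the score-then-softmax-then-reweight structure follow the paper.

```python
# Minimal NumPy sketch of DA-RNN's two attention stages (illustrative only).
# n = number of driving series, T = window length, m = encoder hidden size,
# matching the defaults mentioned in this README (81 series, T=10, m=64).
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()


rng = np.random.default_rng(0)
n, T, m = 81, 10, 64

# Stage 1: input attention over the n driving series at one time step t.
# Score for series k: v_e^T tanh(W_e [h_{t-1}; s_{t-1}] + U_e x^k),
# with random placeholder parameters.
X = rng.standard_normal((n, T))                  # x^k = k-th driving series
h_prev = rng.standard_normal(m)                  # previous encoder hidden state
s_prev = rng.standard_normal(m)                  # previous encoder cell state
W_e = rng.standard_normal((T, 2 * m))
U_e = rng.standard_normal((T, T))
v_e = rng.standard_normal(T)
e = np.array([v_e @ np.tanh(W_e @ np.concatenate([h_prev, s_prev]) + U_e @ X[k])
              for k in range(n)])
alpha = softmax(e)            # one attention weight per driving series
x_tilde = alpha * X[:, 0]     # adaptively reweighted input at time t

# Stage 2: temporal attention over the T encoder hidden states.
H = rng.standard_normal((T, m))          # encoder hidden states h_1..h_T
beta = softmax(rng.standard_normal(T))   # placeholder for the learned scores
context = beta @ H                       # context vector fed to the decoder

print(alpha.sum(), x_tilde.shape, context.shape)
```

Note how stage 1 weights *features* (which driving series matter now) while stage 2 weights *time steps* (which past encoder states matter); both weight vectors are softmax-normalized so they sum to one.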