Encoder-Decoder Model for Multistep Time Series Forecasting Using PyTorch

Gautham Kumaran
Towards Data Science
8 min read · Jun 8, 2020


Encoder-decoder models have provided state-of-the-art results in sequence-to-sequence (seq2seq) NLP tasks such as language translation. Multistep time-series forecasting can also be treated as a seq2seq task, for which an encoder-decoder model can be used. This article presents an encoder-decoder model for a time series forecasting task from Kaggle, along with the steps involved in getting a top 10% result.

The solution code can be found in my GitHub repo. The model implementation is inspired by the PyTorch seq2seq translation tutorial, and the time-series forecasting ideas come mainly from a winning solution of a similar Kaggle competition.

The dataset is from a past Kaggle competition, the Store Item Demand Forecasting Challenge: given the past 5 years of sales data (2013 to 2017) for 50 items across 10 different stores, predict the sales of each item over the next 3 months (01/01/2018 to 31/03/2018). This is a multi-step, multi-site time series forecasting problem.


The features provided are quite minimal:

There are 500 unique store-item combinations, meaning that we are forecasting 500 time-series.

Sales plot of 10 items chosen at random

Data Preprocessing

Feature Engineering

Deep learning models are good at uncovering features on their own, so feature engineering can be kept to a minimum.

From the plot, it can be seen that our data has weekly and monthly seasonality and a yearly trend. To capture these, DateTime features are provided to the model. To better capture the yearly trend of each item's sales, the yearly autocorrelation is also provided.

Many of these features are cyclical in nature. To convey this to the model, sine and cosine transformations are applied to the DateTime features. A detailed explanation of why this is beneficial can be found here — Encoding cyclical continuous features — 24-hour time

sine and cosine transformation of month feature
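
As an illustration, the cyclical encoding can be done as in the sketch below. The column names and the exact set of DateTime features are assumptions, not necessarily the ones used in the solution:

```python
import numpy as np
import pandas as pd

def add_cyclical_features(df: pd.DataFrame, date_col: str = "date") -> pd.DataFrame:
    """Encode month, day of week, and day of month as sine/cosine pairs."""
    dt = pd.to_datetime(df[date_col])
    for name, values, period in [
        ("month", dt.dt.month, 12),
        ("dayofweek", dt.dt.dayofweek, 7),
        ("dayofmonth", dt.dt.day, 31),
    ]:
        df[f"{name}_sin"] = np.sin(2 * np.pi * values / period)
        df[f"{name}_cos"] = np.cos(2 * np.pi * values / period)
    return df
```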

So the final set of features is as given below.

Data Scaling

Neural networks expect all features to be on the same scale, so data scaling is mandatory. The values of each time series are normalized independently. The yearly autocorrelation and the year are also normalized.
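
A minimal sketch of per-series normalization, assuming a dataframe with store, item, and sales columns (the exact scaling scheme used in the solution may differ):

```python
import pandas as pd

def scale_per_series(df: pd.DataFrame) -> tuple[pd.DataFrame, dict]:
    """Z-score each store-item series independently; keep the stats to invert the forecasts later."""
    df = df.copy()
    stats = {}
    for key, group in df.groupby(["store", "item"]):
        mean, std = group["sales"].mean(), group["sales"].std()
        stats[key] = (mean, std)
        df.loc[group.index, "sales"] = (group["sales"] - mean) / std
    return df, stats
```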

Sequence Building

The encoder-decoder model takes a sequence as input and returns a sequence as output; therefore, the flat dataframe we have must be converted into sequences.

The length of the output sequence is fixed at 90 days to match the problem requirement. The length of the input sequence must be selected based on the problem complexity and the computing resources available. For this problem, an input sequence length of 180 days (6 months) is chosen. The sequence data is built by applying a sliding window to each time series in the dataset.
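
One way to build the windows for a single series is sketched below; the stride and the exact windowing details are assumptions:

```python
import numpy as np

def make_windows(series: np.ndarray, enc_len: int = 180, dec_len: int = 90, stride: int = 1):
    """Slide a window over one time series, returning (encoder input, target) pairs."""
    inputs, targets = [], []
    total = enc_len + dec_len
    for start in range(0, len(series) - total + 1, stride):
        inputs.append(series[start : start + enc_len])
        targets.append(series[start + enc_len : start + total])
    return np.stack(inputs), np.stack(targets)
```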

Dataset and Dataloader

PyTorch provides convenient abstractions — Dataset and DataLoader — to feed data into the model. The Dataset takes the sequence data as input and is responsible for constructing each data point to be fed to the model. It also handles the processing of the different types of features fed to the model; this part is explained in detail below.

The data points from the Dataset are batched together and fed to the model using the DataLoader.
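
A stripped-down sketch of what such a Dataset can look like. The real implementation also merges the static numerical and categorical features, as described in the Encoder section below; the names and shapes here are illustrative:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SequenceDataset(Dataset):
    """Pairs each encoder input sequence with its decoder features and target sequence."""
    def __init__(self, enc_inputs, dec_features, targets):
        self.enc_inputs = enc_inputs      # (N, 180, n_enc_features)
        self.dec_features = dec_features  # (N, 90, n_dec_features)
        self.targets = targets            # (N, 90)

    def __len__(self):
        return len(self.enc_inputs)

    def __getitem__(self, idx):
        return (
            torch.as_tensor(self.enc_inputs[idx], dtype=torch.float32),
            torch.as_tensor(self.dec_features[idx], dtype=torch.float32),
            torch.as_tensor(self.targets[idx], dtype=torch.float32),
        )

# train_loader = DataLoader(SequenceDataset(enc_x, dec_x, y), batch_size=256, shuffle=True)
```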

Model Architecture

An encoder-decoder model is a form of recurrent neural network (RNN) used to solve sequence-to-sequence problems. The encoder-decoder model can be intuitively understood as follows.

The encoder-decoder model consists of two networks — an encoder and a decoder. The encoder network learns (encodes) a representation of the input sequence that captures its characteristics or context, and outputs a vector. This vector is known as the context vector. The decoder network receives the context vector and learns to read and extract (decode) the output sequence from it.

In both the encoder and the decoder, the task of encoding and decoding the sequence is handled by a series of recurrent cells. The recurrent cell used in the solution is a Gated Recurrent Unit (GRU), chosen to get around the short-memory problem. More information on this can be found in Illustrated Guide to LSTM’s and GRU’s.

The detailed architecture of the model used in the solution is given below.

Encoder

The input to the encoder network has the shape (sequence length, n_values), so each item in the sequence is made up of n_values values. In constructing these values, different types of features are treated differently.

Time-dependent features — These are the features that vary with time, such as sales and the DateTime features. In the encoder, each sequential time-dependent value is fed into an RNN cell.

Numerical features — Static features that do not vary with time, such as the yearly autocorrelation of the series. These features are repeated across the length of the sequence and fed into the RNN. The process of repeating and merging these values is handled in the Dataset.

Categorical features — Features such as store id and item id can be handled in multiple ways; the implementation of each method can be found in encoders.py. For the final model, the categorical variables were one-hot encoded, repeated across the sequence, and fed into the RNN. This is also handled in the Dataset.

The input sequence with these features is fed into the recurrent network — a GRU. The full encoder code is in the repo; a minimal sketch of its structure is shown below.
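
The hidden size, number of layers, and dropout below are placeholders rather than the exact values used in the solution:

```python
import torch.nn as nn

class RNNEncoder(nn.Module):
    """GRU encoder: consumes the input sequence and returns its outputs and final hidden state."""
    def __init__(self, n_features: int, hidden_size: int = 100, num_layers: int = 1, dropout: float = 0.2):
        super().__init__()
        self.gru = nn.GRU(
            input_size=n_features,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            dropout=dropout if num_layers > 1 else 0.0,
        )

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        outputs, hidden = self.gru(x)
        return outputs, hidden  # hidden is used as the context vector
```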

Decoder

The decoder receives the context vector from the encoder; in addition, the inputs to the decoder are the future DateTime features and lag features. The lag feature used in the model was the previous year's value. The intuition behind using lag features is that, given the input sequence is limited to 180 days, providing important data points from beyond this timeframe helps the model.

Unlike the encoder, in which a recurrent network (GRU) is used directly, the decoder is built by looping through a decoder cell. This is because the forecast from each decoder cell is passed as an input to the next decoder cell. Each decoder cell is made of a GRUCell whose output is fed into a fully connected layer, which provides the forecast. The forecasts from all decoder cells are combined to form the output sequence.
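
A sketch of such a decoder cell, with the same placeholder caveats as the encoder sketch above:

```python
import torch.nn as nn

class DecoderCell(nn.Module):
    """One decoder step: a GRUCell followed by a fully connected layer that emits the forecast."""
    def __init__(self, n_features: int, hidden_size: int = 100, dropout: float = 0.2):
        super().__init__()
        self.gru_cell = nn.GRUCell(input_size=n_features, hidden_size=hidden_size)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x, hidden):
        # x: (batch, n_features) -- previous forecast concatenated with future DateTime/lag features
        hidden = self.gru_cell(x, hidden)
        forecast = self.fc(self.dropout(hidden))  # (batch, 1)
        return forecast, hidden
```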

Encoder-Decoder Model

The Encoder-decoder model is built by wrapping the encoder and decoder cell into a Module that handles the communication between the two.
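
A sketch of this wrapper, reusing the RNNEncoder and DecoderCell sketches above. It assumes the scaled sales value is the first feature of the encoder input, so the last observed value can seed the first decoder step:

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Runs the encoder once, then loops the decoder cell over the forecast horizon."""
    def __init__(self, encoder: nn.Module, decoder_cell: nn.Module, output_len: int = 90):
        super().__init__()
        self.encoder = encoder
        self.decoder_cell = decoder_cell
        self.output_len = output_len

    def forward(self, enc_x, dec_features):
        # enc_x: (batch, 180, n_enc_features); dec_features: (batch, 90, n_dec_features)
        _, hidden = self.encoder(enc_x)
        hidden = hidden[-1]                 # last layer's hidden state, shape (batch, hidden_size)
        prev = enc_x[:, -1, :1]             # last observed (scaled) sale, assumed to be feature 0
        forecasts = []
        for t in range(self.output_len):
            step_in = torch.cat([prev, dec_features[:, t, :]], dim=1)
            prev, hidden = self.decoder_cell(step_in, hidden)
            forecasts.append(prev)
        return torch.cat(forecasts, dim=1)  # (batch, 90)

# model = EncoderDecoder(RNNEncoder(n_enc_features), DecoderCell(1 + n_dec_features))
```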

Model Training

The performance of the model depends heavily on the training decisions taken around optimization, the learning rate schedule, etc. I’ll briefly cover each of them.

  1. Validation Strategy — The cross-sectional train-validation-test split does not work since our data is time dependent. A time-based train-validation-test split poses a problem: the model is not trained on the most recent data, which affects its performance on the test data.
    To combat this, a model is trained on 3 years of past data, from 2014 to 2016, and predicts the first 3 months of 2017, which are used for validation and experimentation. The final model is trained on data from 2014 to 2017 and predicts the first 3 months of 2018. The final model is trained in blind mode, without validation, based on learnings from the validation model training.
  2. Optimizer — The optimizer used is AdamW, which has provided state-of-the-art results in many learning tasks. A more detailed analysis of AdamW can be found in Fastai. Another optimizer explored is the COCOBOptimizer, which does not set the learning rate explicitly. On training with the COCOBOptimizer, I observed that it converged faster than AdamW, especially in the initial iterations. But the best result was obtained from using AdamW with One Cycle Learning.
  3. Learning Rate Scheduling — A 1cycle learning rate scheduler was used. The maximum learning rate in the cycle was determined using the learning rate finder for cyclic learning. The implementation of the learning rate finder used is from the pytorch-lr-finder library.
  4. The loss function used was mean squared error (MSE), which is different from the competition metric — SMAPE. MSE loss provided more stable convergence than using SMAPE.
  5. Separate optimizer and scheduler pairs were used for the encoder and decoder networks, which gave an improvement in results (see the sketch after this list).
  6. In addition to weight decay, dropout was used in both encoder and decoder to combat overfitting.
  7. A wrapper was built to handle the training process with the capability to handle multiple optimizers and schedulers, checkpointing, and Tensorboard integration. The code for this can be found in trainer.py.
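
A sketch of the training setup with separate AdamW/OneCycleLR pairs for the two networks, using the EncoderDecoder sketch above. The learning rates, weight decay, and epoch count are placeholders; the actual max_lr came from the learning rate finder, and `model` and `train_loader` are assumed from the earlier sketches:

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import OneCycleLR

n_epochs = 30  # placeholder
loss_fn = torch.nn.MSELoss()

# One optimizer/scheduler pair per network
enc_opt = AdamW(model.encoder.parameters(), lr=1e-3, weight_decay=1e-2)
dec_opt = AdamW(model.decoder_cell.parameters(), lr=1e-3, weight_decay=1e-2)

total_steps = n_epochs * len(train_loader)
enc_sched = OneCycleLR(enc_opt, max_lr=3e-3, total_steps=total_steps)
dec_sched = OneCycleLR(dec_opt, max_lr=3e-3, total_steps=total_steps)

for epoch in range(n_epochs):
    for enc_x, dec_x, y in train_loader:
        enc_opt.zero_grad()
        dec_opt.zero_grad()
        loss = loss_fn(model(enc_x, dec_x), y)
        loss.backward()
        enc_opt.step()
        dec_opt.step()
        enc_sched.step()
        dec_sched.step()
```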

Results

The following plot shows the forecast made by the model for the first 3 months of 2018, for a single item from a store.

The model can be better evaluated by plotting the mean sales of all items against the mean forecast, which removes the noise. The following plot is from the forecast of the validation model for a particular date, so the forecast can be compared with the actual sales data.

The result from the encoder-decoder model would have provided a top 10% rank on the competition’s leaderboard.

I did minimal hyperparameter tuning to achieve this result, so there is scope for further improvement. The model could also be improved by exploring attention mechanisms to further boost its memory.

Thanks for reading, let me know your thoughts. Have a good day! 😄
