Time series with ARIMA and RNN models
1 Introduction
The classical methods for predicting univariate time series are ARIMA models (under a linearity assumption and provided that the non-stationarity is of the difference-stationary, DS, type). They use the autocorrelation structure (up to some order) to predict the target variable as a linear function of its own past values (the autoregressive part) and the past values of the errors (the moving average part). However, the hardest step with ARIMA models is deriving a stationary series from a non-stationary one whose trend (deterministic or stochastic) or seasonality is not well defined. The RNN model, whose recurrent architecture goes back to John Hopfield (1982), is a deep learning model that does not need the above requirements (a specific type of non-stationarity and linearity) and can capture and model the memory of the series, which is the main characteristic of sequence data in general, beyond time series: text, image captioning, speech recognition, etc.
The basic idea behind an RNN is very simple (as described in the plot below). At each time step t the model computes a state value \(h_t\) that linearly combines the previous state \(h_{t-1}\) (which contains all the memory available at time t-1) with the current input \(x_t\) (the current value of the time series), and then passes the result through the tanh activation function (to capture any nonlinear relations). The state at each time step t can thus be formally expressed as follows:
\[h_t = \tanh(W_h\,h_{t-1} + W_x\,x_t + b)\]
We then leave it to gradient descent to decide how much memory to keep by computing the optimum weights \(W_h\). Similarly, the output \(y_t\) is computed as follows: \[y_t = W_y\,h_t\]
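To make the recurrence concrete, here is a minimal sketch in base R of a single forward pass over a short sequence; the scalar weights W_h, W_x, W_y and the bias b are hypothetical values chosen just for illustration, whereas in practice keras learns them by gradient descent.

# Minimal sketch of the RNN recurrence (hypothetical scalar weights)
x   <- c(1.19, 1.20, 1.21, 1.19)   # a short input sequence
W_h <- 0.5; W_x <- 0.8; b <- 0     # hypothetical weights (learned in practice)
W_y <- 1.2
h <- 0                             # initial state: no memory yet
for (t in seq_along(x)) {
  h <- tanh(W_h * h + W_x * x[t] + b)  # state update at time t
}
y <- W_y * h                       # output computed from the last state
y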
img <- EBImage::readImage("C://Users/dell/Documents/new-blog/content/courses/rnn/rnn_plot.jpg")
plot(img)
2 Data preparation
First, let's load the packages needed for our analysis.
ssh <- suppressPackageStartupMessages
ssh(library(timeSeries))
ssh(library(tseries))
ssh(library(aTSA))
ssh(library(forecast))
ssh(library(rugarch))
ssh(library(ModelMetrics))
ssh(library(keras))
In this article we will use the USDCHF data from the timeSeries package, which is a univariate series of intraday foreign exchange rates between the US dollar and the Swiss franc, with 62,496 observations.
data(USDCHF)
length(USDCHF)
## [1] 62496
Let's look at this data with the following plot, after converting it to a ts object.
data(USDCHF)
data <- ts(USDCHF, frequency = 365)
plot(data)
This series seems to have a trend and to be non-stationary, but let's verify this with the augmented Dickey-Fuller and Phillips-Perron tests.
adf.test(data)
pp.test(data)
Both tests confirm that the data has a unit root (high p-value: we do not reject the null hypothesis of a unit root). We can also check the correlograms of the autocorrelation function (ACF) and the partial autocorrelation function (PACF) as follows:
acf(data)
pacf(data)
As you know, the ACF is related to the MA part and the PACF to the AR part. Since the PACF shows a single bar that far exceeds the confidence interval while the ACF decays very slowly, we are confident that our data has a unit root, and we can get rid of it by differencing the data once. In ARIMA terms the data should be integrated of order 1 (d = 1); this is the I part of ARIMA. In addition, since there is no gradual decay of bars in the PACF, the model should not include any lags in the AR part, whereas the ACF bars, all far outside the confidence interval, suggest that the model will include several MA lags.
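As a quick sanity check (not part of the original output), we can difference the series once; the differenced series should then look stationary, and the unit root test should reject its null hypothesis.

# Quick check: the first-differenced series should be stationary
ddata <- diff(data)
plot(ddata)
adf.test(ddata)   # the null hypothesis of a unit root should now be rejected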
3 ARIMA model
To fit an ARIMA model we have to determine the lag of the AR part (p), the lag of the MA part (q), and how many times the series must be differenced to become stationary (d). Fortunately, we do not have to worry about these issues: we can leave everything to the forecast package, which provides a fast way to get the best model via the function auto.arima. But before that, let's hold out the last 100 observations as testing data, in order to compare the quality of this model with that of the RNN model.
data_test <- data[(length(data)-99):length(data)]
data_train <- data[1:(length(data)-99-1)]
model_arima <- auto.arima(data_train)
summary(model_arima)
## Series: data_train
## ARIMA(0,1,2) with drift
##
## Coefficients:
## ma1 ma2 drift
## -0.0193 0.0113 0
## s.e. 0.0040 0.0040 0
##
## sigma^2 estimated as 2.29e-06: log likelihood=316634.5
## AIC=-633260.9 AICc=-633260.9 BIC=-633224.8
##
## Training set error measures:
## ME RMSE MAE MPE MAPE
## Training set 1.900607e-08 0.001513064 0.0009922846 -3.671242e-05 0.06627114
## MASE ACF1
## Training set 0.999585 -3.921999e-05
The selected model is an ARIMA(0,1,2): the series is differenced once (the differenced series is now stationary) and has two MA lags, with a drift term estimated as essentially zero (so effectively no drift). The output also reports error measures such as the root mean square error (RMSE) and the mean absolute error (MAE), which are the most popular ones; we will use these metrics later to compare this model with the RNN model. To validate the model we have to make sure that the residuals are white noise, without problems such as autocorrelation or heteroskedasticity. Thanks to the forecast package we can check the residuals straightforwardly by calling the function checkresiduals.
checkresiduals(model_arima)
##
## Ljung-Box test
##
## data: Residuals from ARIMA(0,1,2) with drift
## Q* = 8.6631, df = 7, p-value = 0.2778
##
## Model df: 3. Total lags used: 10
Since the p-value is far larger than the 5% significance level, we do not reject the null hypothesis that the errors are not autocorrelated. Looking at the ACF plot, a few bars do fall outside the confidence interval, but this is to be expected at a 5% significance level (false positives), so we can confirm the absence of autocorrelation with 95% confidence. To check for possible heteroskedasticity we use the arch.test function from the aTSA package, which reports both a Portmanteau Q test and a Lagrange Multiplier (LM) test.
arch.test(arima(data_train, order = c(0,1,2)))
We see that both tests are highly significant (we reject the null hypothesis of homoskedasticity), so the above ARIMA model is not able to capture this pattern. That is why we should combine it with another model that keeps track of this type of behavior: a GARCH model. The GARCH model describes the residuals of the ARIMA model with the following general formulation: \[\epsilon_t = w_t\sqrt{h_t}\] \[h_t = a_0 + \sum_{i=1}^{p} a_i\,\epsilon_{t-i}^2 + \sum_{j=1}^{q} b_j\,h_{t-j}\]
where \(w_t\) is a white noise error.
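To get a feel for what this recursion produces, here is a small illustrative simulation of a GARCH(1,1) process; the parameters a0, a1 and b1 are hypothetical values (not the coefficients fitted below), chosen only to show the typical volatility clustering that such a model generates.

# Illustrative simulation of a GARCH(1,1) process (hypothetical parameters)
set.seed(123)
n  <- 1000
a0 <- 1e-6; a1 <- 0.2; b1 <- 0.7          # hypothetical coefficients
h   <- numeric(n)
eps <- numeric(n)
h[1]   <- a0 / (1 - a1 - b1)              # start at the unconditional variance
eps[1] <- rnorm(1, sd = sqrt(h[1]))
for (t in 2:n) {
  h[t]   <- a0 + a1 * eps[t - 1]^2 + b1 * h[t - 1]   # conditional variance
  eps[t] <- rnorm(1, sd = sqrt(h[t]))                # error with that variance
}
plot(eps, type = "l", main = "Simulated GARCH(1,1) errors")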
We fit this model for different lags by calling the function garch from the tseries package, and we use the AIC criterion to select the best model.
model <- character()
AIC <- numeric()
for (p in 1:5) {
  for (q in 1:5) {
    model_g <- tseries::garch(model_arima$residuals, order = c(p, q), trace = F)
    model <- c(model, paste("mod_", p, q))
    AIC <- c(AIC, AIC(model_g))
  }
}
def <- tibble::tibble(model, AIC)
## Warning in tseries::garch(model_arima$residuals, order = c(p, q), trace = F):
## singular information
## (this warning repeats for each of the 25 fitted models)
def %>% dplyr::arrange(AIC)
## # A tibble: 25 x 2
## model AIC
## <chr> <dbl>
## 1 mod_ 1 1 -647018.
## 2 mod_ 2 1 -647005.
## 3 mod_ 1 2 -647005.
## 4 mod_ 2 3 -646986.
## 5 mod_ 1 3 -646971.
## 6 mod_ 1 4 -646967.
## 7 mod_ 2 2 -646900.
## 8 mod_ 3 3 -646885.
## 9 mod_ 3 1 -646859.
## 10 mod_ 1 5 -646859.
## # ... with 15 more rows
As we can see, the simplest model, with one lag for each component, fits the residuals best (lowest AIC). We can check the residuals of this model with a Box test.
model_garch <- tseries::garch(model_arima$residuals, order = c(1,1), trace=F)
## Warning in tseries::garch(model_arima$residuals, order = c(1, 1), trace = F):
## singular information
Box.test(model_garch$residuals)
##
## Box-Pierce test
##
## data: model_garch$residuals
## X-squared = 3.1269, df = 1, p-value = 0.07701
At the 5% significance level we do not reject the null hypothesis of independence. As an alternative we can inspect the ACF of the residuals.
acf(model_garch$residuals[-1])
The easiest way to get predictions from our model is to make use of the rugarch package. First, we specify the model with the parameters obtained above (the different lags).
# garch1 <- ugarchspec(mean.model = list(armaOrder = c(0,2), include.mean = FALSE),
# variance.model = list(garchOrder = c(1,1)))
Then we use the function ugarchfit to fit this specification to data_train. However, you might have noticed that we supplied only the lags of the AR and MA parts of our ARIMA model (there is no d value for integration in this function), so we should provide the differenced series of data_train instead of the original series.
Ddata_train <- diff(data_train)
# garchfit <- ugarchfit(data=Ddata_train, spec = garch1, solver = "gosolnp",trace=F)
# coef(garchfit)
Our final model can then be written as follows.
\[y_t = e_t - 4.296\times 10^{-2}\,e_{t-1} + 5.687\times 10^{-3}\,e_{t-2} \\ e_t\sim N(0,\hat\sigma_t^2) \\ \hat\sigma_t^2 = 1.950\times 10^{-7} + 2.565\times 10^{-1}\,e_{t-1}^2 + 6.940\times 10^{-1}\,\hat\sigma_{t-1}^2\]
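As a purely illustrative example of how the variance recursion is used, a one-step-ahead update with the coefficients reported above would be computed as follows; the previous residual e_prev and previous conditional variance sigma2_prev are hypothetical values, not quantities taken from the fitted model.

# Illustrative one-step update of the conditional variance using the
# reported coefficients; e_prev and sigma2_prev are hypothetical values
e_prev      <- 0.001    # hypothetical last residual
sigma2_prev <- 2.3e-06  # hypothetical last conditional variance
sigma2_next <- 1.950e-07 + 2.565e-01 * e_prev^2 + 6.940e-01 * sigma2_prev
sigma2_next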
Note: each run of this model gives slightly different results due to the internal randomization of the solver; that is why I commented out the ugarchfit code above, to prevent it from being rerun when rendering this document.
Now we use this model to forecast 100 future values, which will then be compared with the data_test values.
# fitted <- ugarchforecast(garchfit, n.ahead = 100)
# # the forecasts are on the differenced scale, so we cumulate them back to levels
# yh_test <- numeric()
# yh_test[1] <- data_train[length(data_train)] + fitted(fitted)[1]
# for (i in 2:100) {
#   yh_test[i] <- yh_test[i - 1] + fitted(fitted)[i]
# }
# df_eval <- tibble::tibble(y_test = data_test, yh_test = yh_test)
# df_eval
Finally, we save the df_eval table containing the original and the fitted values of data_test for further use.
#write.csv(df_eval, "df_eval.csv")
4 RNN model
As an alternative to the ARIMA prediction method discussed above, the deep learning RNN method can also take into account the memory of the time series. Unlike classical feedforward networks, which process each input independently, the RNN takes a bunch of inputs that are supposed to form one sequence and processes them together, as shown in the first plot. In keras this step is handled by layer_simple_rnn (Chollet, 2017, p. 167). This means we have to decide on the length of the sequence, in other words how far back we think the current value depends on its past (the memory of the time series). In our case we assume that the last 7 values should be enough to predict the current value.
4.0.1 Reshape the time series
The first thing we do is organize the data in such a way that the model knows which part is the sequence to be processed by the RNN layer and which part is the target variable. To do so we reorganize the time series into a matrix in which each row is a single input: the columns contain the lagged values of the target variable up to lag 7, with the target variable in the last column. Consequently, the total number of rows will be length(data_train) - maxlen - 1, where maxlen refers to the (constant) length of each sequence, which here is equal to 7.
Let’s first create an empty matrix
maxlen <- 7
exch_matrix<- matrix(0, nrow = length(data_train)-maxlen-1, ncol = maxlen+1)
Now let's fill this matrix from our time series and display a few rows to make sure that the output is as expected.
for (i in 1:(length(data_train) - maxlen - 1)) {
  exch_matrix[i, ] <- data_train[i:(i + maxlen)]
}
head(exch_matrix)
## [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
## [1,] 1.1930 1.1941 1.1933 1.1931 1.1924 1.1926 1.1926 1.1932
## [2,] 1.1941 1.1933 1.1931 1.1924 1.1926 1.1926 1.1932 1.1933
## [3,] 1.1933 1.1931 1.1924 1.1926 1.1926 1.1932 1.1933 1.1932
## [4,] 1.1931 1.1924 1.1926 1.1926 1.1932 1.1933 1.1932 1.1933
## [5,] 1.1924 1.1926 1.1926 1.1932 1.1933 1.1932 1.1933 1.1934
## [6,] 1.1926 1.1926 1.1932 1.1933 1.1932 1.1933 1.1934 1.1940
Now we separate the inputs from the target.
x_train <- exch_matrix[, -ncol(exch_matrix)]
y_train <- exch_matrix[, ncol(exch_matrix)]
The RNN layer in keras expects the inputs to have the shape (examples, maxlen, number of features). Since we have only one feature (our single time series processed sequentially), the shape of the inputs should be c(examples, 7, 1). However, when specifying input_shape the first dimension (the number of examples) is omitted and we provide only the last two.
dim(x_train)
## [1] 62388 7
As we see this shape does not include the number of features, so we can correct it as follows.
x_train <- array_reshape(x_train, dim = c((length(data_train)-maxlen-1), maxlen, 1))
dim(x_train)
## [1] 62388 7 1
4.1 Model architecture
When it comes to deep learning models, there is a large space of hyperparameters to define, and the results depend heavily on them: the optimal number of layers, the optimal number of nodes in each layer, the suitable activation function, the suitable loss function, the best optimizer, the best regularization techniques, the best random initialization, etc. Unfortunately, we do not yet have an exact rule for choosing these hyperparameters; they depend on the problem under study, the data at hand, and the experience of the modeler. In our case the data is very simple and does not really require a complex architecture, so we will use only one hidden RNN layer with 10 nodes, the mean squared error mse as loss function, adam as optimizer, and the mean absolute error mae as metric.
Note: with large and complex time series it might be necessary to stack several RNN layers.
model <- keras_model_sequential()
model %>%
  layer_dense(input_shape = dim(x_train)[-1], units = maxlen) %>%
  layer_simple_rnn(units = 10) %>%
  layer_dense(units = 1)
summary(model)
## Model: "sequential"
## ________________________________________________________________________________
## Layer (type) Output Shape Param #
## ================================================================================
## dense (Dense) (None, 7, 7) 14
## ________________________________________________________________________________
## simple_rnn (SimpleRNN) (None, 10) 180
## ________________________________________________________________________________
## dense_1 (Dense) (None, 1) 11
## ================================================================================
## Total params: 205
## Trainable params: 205
## Non-trainable params: 0
## ________________________________________________________________________________
4.2 Model training
Now let's compile and train the model for 5 epochs with a batch size of 32 (the weights are updated after each batch of 32 instances); to keep track of the model performance we hold out 10% of the training data as a validation set.
model %>% compile(
  loss = "mse",
  optimizer = "adam",
  metrics = "mae"
)
#history <- model %>%
# fit(x_train, y_train, epochs = 5, batch_size = 32, validation_split=0.1)
Since each time we rerun the model we get different results, we should save the model (or just its weights) and reload it; this way, when rendering the document, we will not be surprised by different outputs.
#save_model_hdf5(model, "rnn_model.h5")
rnn_model <- load_model_hdf5("rnn_model.h5")
4.3 Prediction
In order to get predictions for the last 100 data points, we predict over the entire series and then compute the RMSE for the last 100 predictions.
maxlen <- 7
exch_matrix2<- matrix(0, nrow = length(data)-maxlen-1, ncol = maxlen+1)
for (i in 1:(length(data) - maxlen - 1)) {
  exch_matrix2[i, ] <- data[i:(i + maxlen)]
}
x_train2 <- exch_matrix2[, -ncol(exch_matrix2)]
y_train2 <- exch_matrix2[, ncol(exch_matrix2)]
x_train2 <- array_reshape(x_train2, dim = c((length(data)-maxlen-1), maxlen, 1))
pred <- rnn_model %>% predict(x_train2)
df_eval_rnn <- tibble::tibble(y_rnn=y_train2[(length(y_train2)-99):length(y_train2)],
yhat_rnn=as.vector(pred)[(length(y_train2)-99):length(y_train2)])
5 Results comparison
We can now compare the predictions for the last 100 data points from this model with the predicted values for the same points from the ARIMA model. We first load the data predicted with the ARIMA model and join everything in one data frame, then we use two metrics for the comparison, RMSE and MAE, which are readily available in the ModelMetrics package.
Note: you might wonder why we use only 100 data points for prediction when, in machine learning, we usually use a much larger number, sometimes 20% of the entire data set. The answer lies in the nature of ARIMA models, which are short-term prediction models, especially with financial data characterized by high and unstable volatility (that is why we used a GARCH model above).
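For reference, the two metrics are defined as follows (a minimal sketch; the rmse and mae functions from the ModelMetrics package used below compute the same quantities):

# Manual definitions of the two evaluation metrics (for reference only)
rmse_manual <- function(y, yhat) sqrt(mean((y - yhat)^2))
mae_manual  <- function(y, yhat) mean(abs(y - yhat))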
df_eval <- read.csv("df_eval.csv")
rmse <- c(rmse(df_eval$y_test, df_eval$yh_test),
          rmse(df_eval_rnn$y_rnn, df_eval_rnn$yhat_rnn))
mae <- c(mae(df_eval$y_test, df_eval$yh_test),
         mae(df_eval_rnn$y_rnn, df_eval_rnn$yhat_rnn))
df <- tibble::tibble(model=c("ARIMA", "RNN"), rmse, mae)
df
## # A tibble: 2 x 3
## model rmse mae
## <chr> <dbl> <dbl>
## 1 ARIMA 0.00563 0.00388
## 2 RNN 0.00442 0.00401
As we can see, the two models are close to each other. With the RMSE, which is the most popular metric for continuous variables, the RNN model is better, while with the MAE they are approximately the same.
6 Conclusion
Even though this data is very simple, does not need an RNN model and can be predicted with classical ARIMA models, it is used here for pedagogical purposes, to understand how the RNN works and how the data should be processed to be ingested by keras. However, the simple RNN model suffers from a major problem when processing long sequences, known as the vanishing gradient and exploding gradient problem. With the former, when using the chain rule to compute the gradients, if the derivatives are small then multiplying a large number of small values (as many as the length of the sequence) yields tiny values that make the network slow to train or even untrainable. The opposite happens with the latter problem: we get very large values and the network never converges.
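A tiny numeric illustration of why this happens: the backpropagated gradient involves a product of per-step derivatives, so with many time steps the product either collapses towards zero or blows up. The derivative values 0.5 and 1.5 below are hypothetical, chosen only to show the effect of repeated multiplication.

# Repeated multiplication of small derivatives (< 1) vanishes ...
prod(rep(0.5, 50))   # about 8.9e-16, effectively zero
# ... while derivatives > 1 explode
prod(rep(1.5, 50))   # about 6.4e+08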
Soon I will post an article on multivariate time series implementing the Long Short-Term Memory (LSTM) model, which is designed to overcome the above problems faced by the simple RNN model.
7 Further reading
- François Chollet, Deep Learning with R, MEAP edition, 2017, p. 167
- Ian Goodfellow et al., Deep Learning, http://www.deeplearningbook.org/
8 Session info
sessionInfo()
## R version 4.0.1 (2020-06-06)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19041)
##
## Matrix products: default
##
## locale:
## [1] LC_COLLATE=English_United States.1252
## [2] LC_CTYPE=English_United States.1252
## [3] LC_MONETARY=English_United States.1252
## [4] LC_NUMERIC=C
## [5] LC_TIME=English_United States.1252
##
## attached base packages:
## [1] parallel stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] keras_2.3.0.0 ModelMetrics_1.2.2.2 rugarch_1.4-4
## [4] forecast_8.13 aTSA_3.1.2 tseries_0.10-47
## [7] timeSeries_3062.100 timeDate_3043.102
##
## loaded via a namespace (and not attached):
## [1] jsonlite_1.7.1 assertthat_0.2.1
## [3] TTR_0.24.2 tiff_0.1-5
## [5] yaml_2.2.1 GeneralizedHyperbolic_0.8-4
## [7] numDeriv_2016.8-1.1 pillar_1.4.6
## [9] lattice_0.20-41 reticulate_1.16
## [11] glue_1.4.2 quadprog_1.5-8
## [13] DistributionUtils_0.6-0 digest_0.6.25
## [15] colorspace_1.4-1 htmltools_0.5.0
## [17] Matrix_1.2-18 pkgconfig_2.0.3
## [19] bookdown_0.20 purrr_0.3.4
## [21] fftwtools_0.9-9 mvtnorm_1.1-1
## [23] scales_1.1.1 whisker_0.4
## [25] jpeg_0.1-8.1 tibble_3.0.3
## [27] farver_2.0.3 EBImage_4.30.0
## [29] generics_0.0.2 ggplot2_3.3.2
## [31] ellipsis_0.3.1 urca_1.3-0
## [33] nnet_7.3-14 BiocGenerics_0.34.0
## [35] cli_2.0.2 quantmod_0.4.17
## [37] magrittr_1.5 crayon_1.3.4
## [39] mclust_5.4.6 evaluate_0.14
## [41] ks_1.11.7 fansi_0.4.1
## [43] nlme_3.1-149 MASS_7.3-53
## [45] xts_0.12.1 truncnorm_1.0-8
## [47] blogdown_0.20 tools_4.0.1
## [49] data.table_1.13.0 lifecycle_0.2.0
## [51] stringr_1.4.0 munsell_0.5.0
## [53] locfit_1.5-9.4 compiler_4.0.1
## [55] SkewHyperbolic_0.4-0 rlang_0.4.7
## [57] grid_4.0.1 RCurl_1.98-1.2
## [59] nloptr_1.2.2.2 rappdirs_0.3.1
## [61] htmlwidgets_1.5.2 Rsolnp_1.16
## [63] labeling_0.3 base64enc_0.1-3
## [65] spd_2.0-1 bitops_1.0-6
## [67] rmarkdown_2.4 gtable_0.3.0
## [69] fracdiff_1.5-1 abind_1.4-5
## [71] curl_4.3 R6_2.4.1
## [73] tfruns_1.4 zoo_1.8-8
## [75] tensorflow_2.2.0 knitr_1.30
## [77] dplyr_1.0.2 utf8_1.1.4
## [79] zeallot_0.1.0 KernSmooth_2.23-17
## [81] stringi_1.5.3 Rcpp_1.0.5
## [83] vctrs_0.3.4 png_0.1-7
## [85] tidyselect_1.1.0 xfun_0.18
## [87] lmtest_0.9-38