Understanding Adaptive LSTM-Autoencoder Change Point Detection

As discussed in our previous post, Change Point Detection (CPD) is an important part of time-series analysis used in numerous fields such as Medicine, Aerospace, Finance, Business, Metrology and Entertainment.

In this post, we expand our analysis of an adaptive, online algorithm for Change Point Detection based on the paper “Memory-free Online Change-point Detection: A Novel Neural Network Approach” by Atashgahi et al., which describes the Adaptive LSTM-Autoencoder Change Point Detection (ALACPD) algorithm [1].  The original TensorFlow implementation of ALACPD described in this post is available on GitHub [2].

There are three main phases to the ALACPD algorithm: 1. Preparation, 2. Offline Training, and 3. Online Training.

ALACPD Phases
Change Point Detection Algorithm

The basic algorithm ALACPD uses to detect change points in an unsupervised manner runs an ensemble of 3 or more identical models (all with different, randomly initialized weights) on the current input data.  When a consensus of models reports losses higher than a given threshold, an anomaly has been found, but not yet a change point.  Once an anomaly consensus is detected 3 more consecutive times, the anomaly is determined to be a change point.

ALACPD Basic Algorithm

The following steps occur when detecting change points (a minimal code sketch of this logic follows this list).

  1. The ensemble of 3 or more models is run on the same data (because each model starts from different randomly initialized weights, the models remain slightly different even after off-line training) to produce the corresponding 3 or more loss values.
  2. A count of loss values exceeding a pre-specified threshold is made.
  3. If enough loss values exceed the threshold to form a consensus, an anomaly has been detected and the anomaly counter is increased. If no consensus is reached, the anomaly counter is reset to zero.
  4. Once the anomaly counter exceeds the change point detection number a change point has been detected. For example, if the change point detection number = 3, a change point is detected after four consecutive anomalies have been detected.
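To make the consensus-and-counter logic concrete, here is a minimal Python sketch.  The names (loss_stream, threshold, consensus, num_ano_cpd) are illustrative rather than the exact identifiers used in the ALACPD repository.

```python
import numpy as np

def detect_change_points(loss_stream, threshold, consensus=2, num_ano_cpd=3):
    """Illustrative consensus-and-counter logic, not the repository code.

    loss_stream: per-step sequences of losses, one loss per ensemble model.
    threshold:   loss value above which a model votes for an anomaly.
    consensus:   number of models that must vote before a step is an anomaly.
    num_ano_cpd: count the anomaly counter must exceed to flag a change point.
    """
    change_points = []
    anomaly_counter = 0
    for idx, losses in enumerate(loss_stream):
        votes = int(np.sum(np.asarray(losses) > threshold))
        if votes >= consensus:                 # anomaly detected at this step
            anomaly_counter += 1
        else:                                  # no consensus: reset the counter
            anomaly_counter = 0
        if anomaly_counter > num_ano_cpd:      # e.g., 4 consecutive anomalies when num_ano_cpd = 3
            change_points.append(idx)
            anomaly_counter = 0
    return change_points

# Example: three models, with a burst of high losses starting at step 5.
stream = [[0.1, 0.2, 0.1]] * 5 + [[0.9, 1.1, 0.4]] * 5 + [[0.1, 0.1, 0.2]] * 5
print(detect_change_points(stream, threshold=0.5))   # -> [8]
```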
ALACPD Results

For the following results, the original ALACPD model [2] was run on SPY ETF adjusted close prices over five years, from 2019 through December 2023.  This period covers both the COVID crash and the large rally that followed.

ALACPD Results on SPY ETF 2019 – 2023

After running the ALACPD algorithm, a number of change points (red lines) were detected; we manually added blue bar annotations where change points were either missed or detected without clear support in the data.

From a visual inspection, we can see that the model detects change points well but leaves a few gaps.  For example, it is unclear why the model did not detect any change points during the 2022 downturn even though there were many price and volume variations during that period that would appear to be clear change points.

Now we will discuss how each of the three phases of this adaptive change point detection model works.

Phase 1 – Preparation

During the preparation phase, the data is preprocessed and loaded, and the models are created.  Data preprocessing involves loading the data from a JSON file, scaling it with the StandardScaler, then saving the scaled data to a text file.

Data Preprocessing

The following steps occur during data preprocessing.

  1. First the JSON file is loaded into the data numpy array with a shape of (observations, dimensions).
  2. The data numpy array is then scaled using the StandardScaler, which centers each dimension and scales it to unit variance.
  3. The scaled data is then saved to a text file to be loaded later during the training phases.

The data pre-processing is performed by the load_data function in the utils.py file.
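A minimal sketch of these three steps follows.  The JSON path and column layout are placeholders; in the repository, load_data in utils.py performs the equivalent work.

```python
import json
import numpy as np
from sklearn.preprocessing import StandardScaler

def preprocess(json_path="run_log.json", out_path="run_log.txt"):
    # 1. Load the JSON file into a (observations, dimensions) numpy array.
    with open(json_path) as f:
        data = np.asarray(json.load(f), dtype=float)

    # 2. Center each dimension and scale it to unit variance.
    scaled = StandardScaler().fit_transform(data)

    # 3. Save the scaled data to a text file for the training phases.
    np.savetxt(out_path, scaled, delimiter=",")
    return scaled
```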

Loading The Data

During the preparation phase, the data is loaded into a DataUtil object that manages the train, test, and valid datasets, all originating from the text file of data previously discussed.

Loading the Data

During the data loading process, the following steps occur.

  1. The data text file is loaded into the raw_data numpy array with a shape of (observations, dimensions).
  2. Since the data was previously normalized during the data preparation step, no new normalization occurs and the raw_data is copied to the data numpy array.
  3. Next, the split_data method creates the train and test datasets.
  4. When creating each dataset, the data is organized into X1 and X2 numpy arrays (described later). The Y targets are created as well.
  5. The X1 and X2 numpy arrays are concatenated together to create the X array.
  6. Both the training and testing datasets are loaded with their own, separate X and Y arrays.
Data Organization

The run_log.txt dataset contains pace and distance data for a long-distance runner and has 376 observations.  The first thirty-seven (0.1 * 376) of the observations make up the training dataset used for offline training.  A data window = 6 and horizon = 4 are used to build the dataset, with the starting index for the build set at window * 2 + horizon - 1, which equals 15.  Of the 37 observations, with this offset, there are 22 items in the training set.
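These counts follow directly from the window and horizon settings:

```python
observations = 376
train = int(0.1 * observations)        # 37 observations for offline training
window, horizon = 6, 4
start = window * 2 + horizon - 1       # 15, the first usable index
print(train, start, train - start)     # 37 15 22 -> 22 items in the training set
```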

Data Organization for Training Data (click to expand)

When loading the data into the DataUtil object, the following steps occur in the get_data function within the cpdnet_datautil.py file.

4.1) The X1 numpy array is created with shape (22, 1, 6, 2): 22 observations in the training set, 1 window of 6 items, and 2 dimensions for pace and distance.  A single window of six items is loaded into X1 for each observation, with each successive window shifted forward by one step.  So, the first window contains items 10 to 15, the next 11 to 16, the next 12 to 17, and so on; 22 such windows are stacked to make up the X1 data.

4.2) The X2 numpy array is created with shape (22, 6, 6, 2): 22 observations in the training set, 6 windows of 6 items each, and 2 dimensions for pace and distance.  Six windows of data are loaded into each row of the X2 numpy array, where each window precedes the previous one by one step back in the data.  For the first row, the window from index 6 to 11 is followed by the window from 5 to 10, followed by 4 to 9, and so on.  Since multivariate data is processed, each item contains both the value for pace and distance.  The next row of X2 contains a similar organization but starts 1 step forward in the data, which essentially creates a matrix of values that naturally highlights large changes repeated across the matrix.

4.3) The Y value is merely a copy of the X1 data (a sketch of this windowing follows).
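The indexing described in steps 4.1 through 4.3 can be reconstructed roughly as follows.  This is an illustrative sketch of the windowing rather than the repository's get_data code; the array names simply mirror the description.

```python
import numpy as np

def build_windows(data, window=6, horizon=4, train_frac=0.1):
    """Illustrative reconstruction of the X1 / X2 / Y windowing."""
    n_train = int(train_frac * len(data))       # e.g., 37 of 376 observations
    start = window * 2 + horizon - 1            # e.g., 15
    X1, X2 = [], []
    for i in range(start, n_train):
        # X1: one window ending at index i (items 10..15 for i = 15).
        X1.append(data[i - window + 1: i + 1][None, ...])
        # X2: `window` windows, each stepping one item further back
        # (items 6..11, then 5..10, then 4..9, ... for i = 15).
        rows = [data[i - horizon - window - k + 1: i - horizon - k + 1]
                for k in range(window)]
        X2.append(np.stack(rows))
    X1, X2 = np.stack(X1), np.stack(X2)
    Y = X1.copy()                               # the target is a copy of X1
    X = np.concatenate([X1, X2], axis=1)        # the model input described later
    return X1, X2, X, Y

data = np.random.rand(376, 2)                   # pace and distance columns
X1, X2, X, Y = build_windows(data)
print(X1.shape, X2.shape, X.shape, Y.shape)     # (22, 1, 6, 2) (22, 6, 6, 2) (22, 7, 6, 2) (22, 1, 6, 2)
```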

Phase 2 – Offline Training

The offline training phase ‘warms up’ the model by pre-training on the small training set.  Once trained, the model is evaluated to create the initial loss_normal values.

Offline Training

The following steps occur during offline training (a minimal training-loop sketch follows this list).

  1. First the ensemble of models is trained.
  2. During training, the DataUtil feeds batches of data from the training set using the data ordering previously discussed.
  3. Next, the model ensemble is evaluated to produce a new set of loss_normal values.
  4. The training data set is used for the evaluation.
  5. The loss_normal values are added to the off-line training section of the input data stream. The data_loss values are used for model analysis and plotting.
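Assuming each ensemble member behaves like a compiled Keras model (the repository wraps its models in its own classes, so treat the names below as placeholders), the warm-up loop amounts to:

```python
def offline_training(ensemble, x_train, y_train, epochs=100):
    """Warm up every model on the small training set, then record its loss."""
    loss_normal = []
    for model in ensemble:
        # Pre-train the model on the offline training windows.
        model.fit(x_train, y_train, epochs=epochs, verbose=0)
        # Evaluate on the same training data to obtain the initial loss_normal value.
        loss_normal.append(model.evaluate(x_train, y_train, verbose=0))
    return loss_normal
```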
Phase 3 – Online Training

The online training consists of detecting anomalies, determining whether those anomalies are actual change points, and then adaptively training the model on new data to better fit the dynamics of the data stream moving forward.

Online Training Process

Online training cycles through each item in the test data stream one item at a time until the end of the data stream or until new data arrives in a real-time scenario.

Evaluate Model on Test Data

The model ensemble evaluates the data at the current data idx within the test dataset to produce the loss values for each model within the ensemble.

Model Evaluation

The following steps occur during the model evaluation.

  1. The DataUtil builds the data packet for the current index in the test dataset (as described in Data Organization above).
  2. The data packet makes up the x_in and y_in values, but only the x_in is used during the evaluation step.
  3. The ensemble of models is run through a forward pass to evaluate the data…
  4. …and produces the ensemble of corresponding loss values loss_test_i0, loss_test_i1, and loss_test_i2.
Anomaly Detection

During anomaly detection, a count of all loss values exceeding a pre-specified threshold is taken.  If a consensus of models in the ensemble is above the threshold, an anomaly is detected.

Anomaly Detection

The following steps occur when detecting an anomaly.

  1. Using the loss values output from the model ensemble evaluation…
  2. … a count of loss values exceeding an external threshold is taken.
  3. If a consensus of losses (e.g., 2 of 3) exceeds the threshold, an anomaly is detected.
Change Point Detection

On detecting each anomaly, an anomaly counter is increased, and once a pre-specified number of consecutive anomalies is found, a change point is detected.

Change Point Detection

The following steps occur during change point detection.

  1. On detecting an anomaly, the current index in the training data set is appended to the ano_idx collection (later used during on-line training). Next, the anomaly counter is increased.
  2. Once the anomaly counter of consecutive anomalies exceeds a pre-specified anomaly counter threshold (e.g., num_ano_cpd = 3) …
  3. … it is determined that a change point has been detected at the index.
Online Training (Adaptive Learning)

During on-line training, there are two methods of training: one that occurs when no anomaly is detected, and another that occurs when a change point is detected.

Online Training

The following steps occur in an on-line training cycle where no anomalies are found.

  1. In the case where no anomaly is detected, the same data originally evaluated during the Evaluate Model on Test Data step described above is used for training.
  2. A short 5-epoch training cycle is run on the original x_in and y_in data.

The following steps occur in an on-line training cycle where a change point is found (a combined sketch of both cases follows this list).

  1. During the detection of change points, the indexes of each anomaly are saved in the ano_idx array. The data values at these indices are now used during training.
  2. The test data from the DataUtil is used to build the input data by…
  3. …extracting the data from the test data at the ano_idx indexes to create the x and y.
  4. The ensemble of models is trained on the x and y arrays for 50 to 100 epochs.
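Putting the two cases together, one adaptive-learning step might look like the sketch below.  The helpers datautil.get_item and datautil.get_items are hypothetical stand-ins for the DataUtil calls, and the epoch counts simply follow the description above.

```python
def online_update(ensemble, datautil, idx, losses, threshold,
                  consensus=2, num_ano_cpd=3, state=None):
    """Illustrative adaptive-learning step for one test index (names are placeholders)."""
    state = state or {"counter": 0, "ano_idx": []}
    votes = sum(loss > threshold for loss in losses)

    if votes < consensus:
        # No anomaly: briefly re-fit on the data just evaluated (x_in, y_in).
        x_in, y_in = datautil.get_item(idx)            # hypothetical helper
        for model in ensemble:
            model.fit(x_in, y_in, epochs=5, verbose=0)
        state["counter"] = 0
        return False, state

    # Anomaly: remember the index and bump the consecutive-anomaly counter.
    state["ano_idx"].append(idx)
    state["counter"] += 1
    if state["counter"] <= num_ano_cpd:
        return False, state

    # Change point: re-train on the data gathered at the anomalous indexes.
    x_cp, y_cp = datautil.get_items(state["ano_idx"])  # hypothetical helper
    for model in ensemble:
        model.fit(x_cp, y_cp, epochs=50, verbose=0)    # 50 to 100 epochs per the description
    state["counter"], state["ano_idx"] = 0, []
    return True, state
```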
ALACPD Full Process

For the full ALACPD Change Point Detection process with adaptive learning, click on the image below.

ALACPD Full Process
Model Details

Now that you have a better understanding of how the change detection algorithm works, what about the model itself?  The ALACPD algorithm uses a custom autoencoder model built around a skip LSTM layer with autoregression, AE_skipLSTM_AR for short.  The model ensemble comprises 3 or more of the same AE_skipLSTM_AR models.

AE_skipLSTM_AR Model

When processing the X input to produce the Yhat predictions, the model takes the following steps (an approximate Keras sketch follows this list).

  1. First the X (1,7,6,2) input is sliced into the X1 (1,1,6,2) and X2 (1,6,6,2) tensors.
  2. The X1 tensor is fed into an RNN encoder layer that uses the specialized skipLSTMCell for the encoder processing.
  3. The RE output is repeated six times to create the P tensor.
  4. The P tensor is then fed into the RNN decoder layer that uses the specialized skipLSTMCell as well to produce the RD tensor.
  5. A Flatten layer changes the RD (1,6,20) to RD (1,120) by flattening the last two axes.
  6. The flattened RD is fed into a Dense linear layer to produce the RD (1,12) output.
  7. A Reshape layer splits the last axis to produce the Y (1,6,2) tensor.
  8. Next, the auto regressive steps start processing the X2 tensor by looping through each of the 6 items in the X2 (1,6,6,2) shape[1]. The following steps occur at each iteration.
  9. First the X2 is split into the tensor at that iteration to create the Zi (1,1,6,2) tensor.
  10. A PreARTrans is used to reformat the Zi for the autoregression steps.
  11. The Zi tensor is then flattened to the shape (1,1,6) …
  12. … and run through a Dense linear layer to produce the Zi (1,1,1) output.
  13. A PostARTrans is run to reformat the data back, producing the Zi (1,1,2) tensor. The loop continues back at step 9 until each of the 6 items in the tensor has been processed.
  14. All Zi tensors are concatenated into the Z (1,6,2) output.
  15. The Z tensor is added to the Y from step #7 above to produce the Yhat (1,6,2) output.
  16. An MSELoss layer calculates the loss between the Yhat prediction and the Y target (y_in) to produce the final output loss value.
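The shape flow above can be approximated with standard Keras layers.  The sketch below substitutes ordinary LSTM layers for the custom skipLSTMCell and a Permute/Dense pair for the PreARTrans/PostARTrans steps, so it reproduces the tensor shapes rather than the repository's exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers

window, dims, units, highway = 6, 2, 20, 6

x = layers.Input(shape=(1 + window, window, dims))         # X  (1, 7, 6, 2)
x1 = layers.Lambda(lambda t: t[:, 0, :, :])(x)             # X1 slice (1, 6, 2)
x2 = layers.Lambda(lambda t: t[:, 1:, :, :])(x)            # X2 slice (1, 6, 6, 2)

# Encoder / decoder pair (plain LSTMs stand in for the skipLSTMCell here).
re = layers.LSTM(units)(x1)                                # RE (1, 20)
p = layers.RepeatVector(window)(re)                        # P  (1, 6, 20)
rd = layers.LSTM(units, return_sequences=True)(p)          # RD (1, 6, 20)
flat = layers.Flatten()(rd)                                # (1, 120)
dense = layers.Dense(window * dims)(flat)                  # (1, 12)
y = layers.Reshape((window, dims))(dense)                  # Y  (1, 6, 2)

# Autoregressive branch: one linear map per X2 slice across the highway window.
z_slices = []
for i in range(window):
    zi = layers.Lambda(lambda t, i=i: t[:, i, -highway:, :])(x2)  # Zi (1, 6, 2)
    zi = layers.Permute((2, 1))(zi)                               # (1, 2, 6)
    zi = layers.Dense(1)(zi)                                      # (1, 2, 1)
    zi = layers.Permute((2, 1))(zi)                               # (1, 1, 2)
    z_slices.append(zi)
z = layers.Concatenate(axis=1)(z_slices)                   # Z (1, 6, 2)

yhat = layers.Add()([y, z])                                # Yhat (1, 6, 2)
model = tf.keras.Model(x, yhat)
model.compile(optimizer="adam", loss="mse")                # MSE between Yhat and the Y target
model.summary()
```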
SkipLSTMCell Layer

Both the RNN encoder and RNN decoder layers use the skipLSTMCell layer for processing each step within the respective recurrent neural networks.

skipLSTMCell Layer

When processing the x1 input at each step of the RNN, the skipLSTMCell layer takes the following steps to produce the RE1 output for the input item (a simplified sketch of the skip update follows this list).

  1. First the X1 input is copied into the inputs_i, inputs_f, inputs_c and inputs_o tensors all with shape (1,2).
  2. Next, the kernel weights (created with add_weight) are split into the k_i, k_f, k_c, and k_o weights, each of shape (2,20).
  3. The inputs and k weights are each run through a dot operation to produce the x_i, x_f, x_c and x_o tensors of shape (1,20).
  4. Next, the bias weights (created with add_weight) are split into the b_i, b_f, b_c, and b_o bias values, each of shape (1,20).
  5. The bias values are added to the previous x values to produce the x_i, x_f, x_c and x_o tensors of shape (1,20).
  6. Next, the previous hidden state h_tm1 is duplicated into a 4-element array h_tm.
  7. The compute_carry_and_output function is run to produce the c and o tensors (more on this later).
  8. The c tensor is run through a Tanh and multiplied by the o tensor…
  9. … and the result produces the h tensor (which will later be used to update the prev_h tensor).
  10. A MatMul is performed between the prev_h (1,60) tensor and the kernel2 weights and the results…
  11. …are added to the bias2 weights, whose results are…
  12. … run though a Sigmoid function to produce the new_h_skip (1,20) tensor.
  13. The new_h_skip tensor is multiplied by (1-S0), where S0 is a hyperparameter, and added to the h (1,20) tensor multiplied by S0.
  14. This produces the new, updated h tensor (that is later returned as RE1).
  15. The prev_h tensor is updated by first rolling the tensor…
  16. … which allows for the old h values to be replaced with the new as the prev_h tensor of size (1,60) holds three sets of h (1,20) tensor values.
  17. The h tensor (1,20) is returned as the RE1. The step + 1, c and prev_h tensors are also returned, but not used.
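The skip update in steps 10 through 16 blends the fresh hidden state with a state recalled from a few steps back.  A small numpy illustration follows; the value of S0, the weight initialization, and the roll-and-overwrite bookkeeping are assumptions made for this sketch, not the repository code.

```python
import numpy as np

units, skip = 20, 3            # hidden size and number of stored past states
S0 = 0.5                       # skip-mixing hyperparameter (assumed value)

prev_h = np.zeros((1, units * skip))                  # holds the last `skip` hidden states
kernel2 = np.random.randn(units * skip, units) * 0.1
bias2 = np.zeros((1, units))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skip_update(h, prev_h):
    # Steps 10-12: project the stored states and squash them.
    new_h_skip = sigmoid(prev_h @ kernel2 + bias2)    # (1, 20)
    # Steps 13-14: blend the fresh hidden state with the skip state.
    h = S0 * h + (1.0 - S0) * new_h_skip              # (1, 20)
    # Steps 15-16: roll the buffer and write the new h into the freed slot.
    prev_h = np.roll(prev_h, units, axis=1)
    prev_h[:, :units] = h
    return h, prev_h

h, prev_h = skip_update(np.random.randn(1, units), prev_h)
print(h.shape, prev_h.shape)                          # (1, 20) (1, 60)
```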
compute_carry_and_output Function

The compute_carry_and_output function, called previously in step #7, calculates the o and c gate values.

compute_carry_and_output Function

The following steps occur within the compute_carry_and_output function when calculating the o and c gates.

  1. First, the four h_tm (1,20) values are separated into the h_tm_i, h_tm_f, h_tm_c, and h_tm_o tensors.
  2. Next, the i gate is processed.
    1. A dot is performed between the h_tm_i (1,20) weight and associated recurrent kernel weights.
    2. The result is added to the x_i (1,20) input tensor…
    3. … and run through a Hard Sigmoid to produce the i (1,20) gate tensor.
  3. Next, the f gate is processed.
    1. A dot is performed between the h_tm_f (1,20) weight and associated recurrent kernel weights.
    2. The result is added to the x_f (1,20) input tensor…
    3. … and run through a Hard Sigmoid to produce the f (1,20) gate tensor.
  4. Next, the c gate is processed.
    1. A dot is performed between the h_tm_c (1,20) weight and associated recurrent kernel weights.
    2. The result is added to the x_c (1,20) input tensor…
    3. … and run through a Hard Sigmoid whose result…
    4. … is multiplied by the i (1,20) gate tensor, and…
    5. … added to the previous cell state c_tm1 multiplied by the f (1,20) gate tensor…
    6. … which produces the c (1,20) gate tensor.
  5. And finally, the o (1,20) gate is processed.
    1. A dot is performed between the h_tm_o (1,20) weight and associated recurrent kernel weights.
    2. The result is added to the x_o (1,20) input tensor…
    3. … and run through a Hard Sigmoid to produce the o (1,20) gate tensor.

Both the c (1,20) and o (1,20) gate tensors are returned.
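In numpy terms, the gate math looks roughly like the following.  The per-gate split of the recurrent kernel and the carry update follow the standard Keras LSTMCell formulation; using the hard sigmoid for the candidate term mirrors the description above and is an assumption about the custom cell.

```python
import numpy as np

units = 20

def hard_sigmoid(x):
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

def compute_carry_and_output(x_parts, h_tm, c_tm1, recurrent_kernel):
    """x_parts: (x_i, x_f, x_c, x_o); h_tm: four copies of the previous hidden state."""
    x_i, x_f, x_c, x_o = x_parts
    h_tm_i, h_tm_f, h_tm_c, h_tm_o = h_tm
    rk = lambda n: recurrent_kernel[:, n * units:(n + 1) * units]  # per-gate kernel block

    i = hard_sigmoid(x_i + h_tm_i @ rk(0))                     # input gate   (1, 20)
    f = hard_sigmoid(x_f + h_tm_f @ rk(1))                     # forget gate  (1, 20)
    c = f * c_tm1 + i * hard_sigmoid(x_c + h_tm_c @ rk(2))     # carry state  (1, 20)
    o = hard_sigmoid(x_o + h_tm_o @ rk(3))                     # output gate  (1, 20)
    return c, o

x_parts = [np.random.randn(1, units) for _ in range(4)]
h_tm = [np.random.randn(1, units)] * 4
c, o = compute_carry_and_output(x_parts, h_tm, np.zeros((1, units)),
                                np.random.randn(units, 4 * units) * 0.1)
print(c.shape, o.shape)                                        # (1, 20) (1, 20)
```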

PreARTrans and PostARTrans

The PreARTrans run in step #10 is used to reformat the Zi data for autoregressive processing and the PostARTrans run in step #13 is used to reverse the formatting.

Pre/Post AR Trans

The following steps occur in the PreARTrans to alter the Zi (1,1,6,2) input.

  1. The last ‘highway’ number of items is selected from Zi along axis 2, where highway is a hyperparameter. In this case highway = 6, the same size as axis 2, so no alteration occurs and the output tensor is of shape (1,1,6,2).
  2. Next, the output tensor is transposed along the last two axes to produce a new output of shape (1,1,2,6).
  3. The output is then reshaped to produce the final Zi tensor of shape (2,1,6).

The PostARTrans reshapes the data back to the batch-first layout, producing the Zi (1,1,2) tensor.
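A rough numpy illustration of the two transforms, using the shapes quoted above (the Dense layer between them is represented by a plain weight vector):

```python
import numpy as np

highway = 6
zi = np.random.rand(1, 1, 6, 2)              # Zi as it enters the PreARTrans

# PreARTrans: keep the last `highway` items, swap the last two axes,
# then fold the data dimensions into the batch axis.
pre = zi[:, :, -highway:, :]                 # (1, 1, 6, 2)
pre = np.transpose(pre, (0, 1, 3, 2))        # (1, 1, 2, 6)
pre = pre.reshape(2, 1, 6)                   # (2, 1, 6): one row per data dimension

# Dense(1) stand-in: map each length-6 window to a single autoregressive value.
w = np.random.rand(6, 1)
ar_out = pre @ w                             # (2, 1, 1)

# PostARTrans: restore the (batch, 1, dimensions) layout.
post = ar_out.reshape(1, 1, 2)               # Zi (1, 1, 2)
print(pre.shape, post.shape)
```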

Summary

In this post we have discussed how the ALACPD algorithm works to detect change points in an unsupervised manner by detecting a consensus of increased loss values returned from an ensemble of models that each use specialized LSTM layers.

We discussed the overall off-line and on-line processes used to adaptively learn new data patterns and showed in depth how these models work.  We also presented the results of running this model on 5 years of SPY ETF daily data, illustrating both the power and the challenges of adaptive learning on non-stationary data.

Happy Deep Learning!


[1] Memory-free Online Change-point Detection: A Novel Neural Network Approach, by Zahra Atashgahi, Decebal Constantin Mocanu, Raymond Veldhuis, and Mykola Pechenizkiy, 2022, arXiv:2207.03932

[2] ALACPD TensorFlow implementation, by Zahra Atashgahi, 2022, GitHub: zahraatashgahi/ALACPD