Anomaly Detection in Industrial Machinery Using Three-Axis Vibration Data

This example shows how to detect anomalies in vibration data using machine learning and deep learning. The example uses vibration data collected from an industrial machine. First, you extract features from raw measurements corresponding to normal operation by using the Diagnostic Feature Designer app. You then use the selected features to train each model under consideration, and use each trained model to identify whether the machine is operating normally.

Dataset

The dataset contains three-axis vibration measurements from an industrial machine. The data is collected both immediately before and after a scheduled maintenance. The data collected after scheduled maintenance is assumed to represent the normal operating condition of the machine. The data from before maintenance can represent either normal or anomalous conditions. Data for each axis is stored in a separate column, and each measurement series contains 70,000 samples. Download the data from the MathWorks supportfiles site, then load it and plot a sample of the normal and anomalous data.

url = 'https://ssd.mathworks.com/supportfiles/predmaint/anomalyDetection3axisVibration/v1/vibrationData.zip';
websave('vibrationData.zip',url);
unzip('vibrationData.zip');
load("MachineData.mat")
trainData
trainData=40×4 table
          ch1                 ch2                 ch3           label 
    ________________    ________________    ________________    ______

    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
    {70000×1 double}    {70000×1 double}    {70000×1 double}    Before
      ⋮

To better understand the data, visualize it before and after maintenance. Plot the vibration data for the fourth member of the ensemble and note that the data for the two conditions looks noticeably different.

ensMember = 4;
helperPlotVibrationData(trainData, ensMember)
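helperPlotVibrationData is a supporting function that ships with the example and is not listed in this section. If you do not have it on your path, the following sketch produces a comparable side-by-side plot; the variable names match those above, but the exact layout of the original helper is an assumption.

% Minimal stand-in for helperPlotVibrationData (layout is an assumption):
% plot each channel for one "Before" and one "After" ensemble member.
before = trainData(trainData.label == "Before", :);
after  = trainData(trainData.label == "After", :);
figure
tiledlayout(3,1)
channels = ["ch1", "ch2", "ch3"];
for k = 1:3
   nexttile
   plot(before.(channels(k)){ensMember}), hold on
   plot(after.(channels(k)){ensMember}), hold off
   title("Channel " + k), legend("Before", "After")
end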

Extracting Features with Diagnostic Feature Designer App

Using raw data for training machine learning models is not very efficient. The Diagnostic Feature Designer app lets you interactively explore and preprocess your data, extract time- and frequency-domain features, and then rank the features to determine which are most effective for discriminating between the two conditions. You can then export a function that extracts the selected features from your dataset programmatically. Open the Diagnostic Feature Designer app:

diagnosticFeatureDesigner

Click the New Session button, select trainData as the source, and then set label as Condition Variable. The label variable identifies the condition of the machine for the corresponding data.

You can use the Diagnostic Feature Designer to iterate on the features and rank them. To compare candidate features, create a histogram view of all generated features and inspect the distributions, separated by label, for each feature extracted from ch1. Note that the histograms shown below come from a much larger dataset, so the separation is easier to visualize. Because the dataset used so far is a small subset of the original data, your results might differ.

Use the top four ranked features for each channel.

  • ch1 : Crest Factor, Kurtosis, RMS, Std

  • ch2 : Mean, RMS, Skewness, Std

  • ch3 : Crest Factor, SINAD, SNR, THD
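These ranked features are standard signal statistics. Purely for illustration, the following sketch computes the four ch1 features for one ensemble member at the command line; it mirrors what the exported feature function computes, but it is not the exported code.

% Illustrative command-line computation of the four ch1 features
% (mirrors, but is not, the code that Diagnostic Feature Designer exports)
x = trainData.ch1{ensMember};
crestFactor = peak2rms(x)  % ratio of peak magnitude to RMS level
kurt = kurtosis(x)         % tailedness of the amplitude distribution
rmsValue = rms(x)          % root-mean-square level
stdValue = std(x)          % standard deviation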

Export a function that generates the features from the Diagnostic Feature Designer app and save it with the name generateFeatures. This function extracts the top four features from each channel of the entire dataset at the command line.

trainFeatures = generateFeatures(trainData);
head(trainFeatures)
ans=8×13 table
    label     ch1_stats/Col1_CrestFactor    ch1_stats/Col1_Kurtosis    ch1_stats/Col1_RMS    ch1_stats/Col1_Std    ch2_stats/Col1_Mean    ch2_stats/Col1_RMS    ch2_stats/Col1_Skewness    ch2_stats/Col1_Std    ch3_stats/Col1_CrestFactor    ch3_stats/Col1_SINAD    ch3_stats/Col1_SNR    ch3_stats/Col1_THD
    ______    __________________________    _______________________    __________________    __________________    ___________________    __________________    _______________________    __________________    __________________________    ____________________    __________________    __________________

    Before              2.2811                      1.8087                   2.3074                2.3071               -0.032332              0.64962                   4.523                  0.64882                    11.973                    -15.945                -15.886                -2.732      
    Before              2.3276                      1.8379                   2.2613                 2.261                -0.03331              0.59458                   5.548                  0.59365                    10.284                    -15.984                -15.927               -2.7507      
    Before              2.3276                      1.8626                   2.2613                2.2612               -0.012052              0.48248                  4.3638                  0.48233                    8.9125                    -15.858                -15.798               -2.7104      
    Before              2.8781                      2.1986                   1.8288                1.8285               -0.005049              0.34984                  2.3324                  0.34981                    11.795                    -16.191                 -16.14               -3.0683      
    Before              2.8911                        2.06                   1.8205                1.8203              -0.0018988              0.27366                  1.7661                  0.27365                    11.395                    -15.947                -15.893               -3.1126      
    Before              2.8979                      2.1204                   1.8163                1.8162              -0.0044174               0.3674                  2.8969                  0.36737                    11.685                    -15.963                -15.908               -2.9761      
    Before              2.9494                        1.92                   1.7846                1.7844              -0.0067284              0.36262                  4.1308                  0.36256                    12.396                    -15.999                -15.942               -2.8281      
    Before              2.5106                      1.6774                   1.7513                1.7511              -0.0089548              0.32348                  3.7691                  0.32335                    8.8808                     -15.79                -15.732               -2.9532      

Train Models for Anomaly Detection

The dataset used so far is only a small subset of a much larger dataset, used here to illustrate the process of feature extraction and selection. Training your algorithm on all available data yields the best performance. To this end, load the same 12 features as previously extracted from the larger dataset of 17,642 signals.

load("FeatureEntire.mat")
head(featureAll)
ans=8×13 table
    label     ch1_stats/Col1_CrestFactor    ch1_stats/Col1_Kurtosis    ch1_stats/Col1_RMS    ch1_stats/Col1_Std    ch2_stats/Col1_Mean    ch2_stats/Col1_RMS    ch2_stats/Col1_Skewness    ch2_stats/Col1_Std    ch3_stats/Col1_CrestFactor    ch3_stats/Col1_SINAD    ch3_stats/Col1_SNR    ch3_stats/Col1_THD
    ______    __________________________    _______________________    __________________    __________________    ___________________    __________________    _______________________    __________________    __________________________    ____________________    __________________    __________________

    Before              2.3683                       1.927                   2.2225                2.2225               -0.015149              0.62512                  4.2931                  0.62495                    5.6569                    -5.4476                -4.9977               -4.4608      
    Before               2.402                      1.9206                   2.1807                2.1803               -0.018269              0.56773                  3.9985                  0.56744                    8.7481                    -12.532                -12.419               -3.2353      
    Before              2.4157                      1.9523                   2.1789                2.1788              -0.0063652              0.45646                  2.8886                  0.45642                    8.3111                    -12.977                -12.869               -2.9591      
    Before              2.4595                      1.8205                     2.14                2.1401               0.0017307              0.41418                  2.0635                  0.41418                    7.2318                    -13.566                -13.468               -2.7944      
    Before              2.2502                      1.8609                   2.3391                 2.339              -0.0081829               0.3694                  3.3498                  0.36931                    6.8134                     -13.33                -13.225               -2.7182      
    Before              2.4211                      2.2479                   2.1286                2.1285                0.011139              0.36638                  1.8602                  0.36621                    7.4712                    -13.324                -13.226               -3.0313      
    Before              3.3111                      4.0304                   1.5896                1.5896              -0.0080759              0.47218                  2.1132                  0.47211                    8.2412                     -13.85                -13.758               -2.7822      
    Before              2.2655                      2.0656                   2.3233                2.3233              -0.0049447              0.37829                  2.4936                  0.37827                    7.6947                    -13.781                -13.683               -2.5601      

Use cvpartition to partition the data into a training set and an independent test set. Later in the example, the helperExtractLabeledData helper function finds all features corresponding to the label 'After' in the featureTrain variable.

rng(0) % set for reproducibility
idx = cvpartition(featureAll.label, 'holdout', 0.1);
featureTrain = featureAll(idx.training, :);
featureTest = featureAll(idx.test, :);
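As a quick sanity check (an addition to the example, not part of the original), you can confirm that the holdout split preserves the label proportions:

% Count labels in each partition (sanity check)
groupcounts(featureTrain, "label")
groupcounts(featureTest, "label")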

Train each model using only the data collected after maintenance, which is assumed to be normal. Extract only this data from featureTrain.

trueAnomaliesTest = featureTest.label;
featureNormal = featureTrain(featureTrain.label=='After', :);

Detect Anomalies with One-Class SVM

Support vector machines are powerful classifiers. A one-class SVM, which trains on only one class, models the normal data and works well for identifying abnormalities that lie "far" from the normal data. Select the training data for the normal condition and train the SVM model by using the fitcsvm function.

mdlSVM = fitcsvm(featureNormal, 'label', 'Standardize', true, 'OutlierFraction', 0);
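Optionally, you can score the training data itself to sanity-check the learned boundary. Because the model is trained on normal data with OutlierFraction set to 0, nearly all training observations should receive nonnegative scores. This check is an addition to the example:

% Score the normal training data; positive scores indicate "normal"
[~, scoreTrainSVM] = resubPredict(mdlSVM);
fprintf("Fraction of training data scored as normal: %.3f\n", ...
   mean(scoreTrainSVM >= 0))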

Validate the trained SVM model by using test data, which contains both normal and anomalous data.

featureTestNoLabels = featureTest(:, 2:end);
[~,scoreSVM] = predict(mdlSVM,featureTestNoLabels);
isanomalySVM = scoreSVM<0;
predSVM = categorical(isanomalySVM, [1, 0], ["Anomaly", "Normal"]);
trueAnomaliesTest = renamecats(trueAnomaliesTest,["After","Before"], ["Normal","Anomaly"]);
figure;
confusionchart(trueAnomaliesTest, predSVM, Title="Anomaly Detection with One-class SVM", Normalization="row-normalized");

The confusion matrix shows that the one-class SVM performs well: only 0.3% of anomalous samples are misclassified as normal, and about 0.9% of normal samples are misclassified as anomalous.
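You can also recompute these headline numbers directly from the predictions; this summary is a small addition to the example:

% Overall accuracy and per-class recall computed from the predictions
accSVM = mean(predSVM == trueAnomaliesTest);
recallAnomaly = mean(predSVM(trueAnomaliesTest == "Anomaly") == "Anomaly");
recallNormal  = mean(predSVM(trueAnomaliesTest == "Normal") == "Normal");
fprintf("Accuracy %.1f%% | anomaly recall %.1f%% | normal recall %.1f%%\n", ...
   100*accSVM, 100*recallAnomaly, 100*recallNormal)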

Detect Anomalies with Isolation Forest

The decision trees of an isolation forest isolate each observation in a leaf. The number of decisions a sample passes through to reach its leaf measures how difficult it is to isolate that sample from the others; anomalies tend to be isolated in fewer splits. The anomaly score that iforest returns is derived from the average depth of a sample across the trees. Once again, train the isolation forest model on normal data only.

[mdlIF,~,scoreTrainIF] = iforest(featureNormal{:,2:13},'ContaminationFraction',0.09);

Validate the trained isolation forest model by using test data, which contains both normal and anomalous data. Visualize the performance of this model by using a confusion chart.

[isanomalyIF,scoreTestIF] = isanomaly(mdlIF,featureTestNoLabels.Variables);
predIF = categorical(isanomalyIF, [1, 0], ["Anomaly", "Normal"]);
figure;
confusionchart(trueAnomaliesTest,predIF,Title="Anomaly Detection with Isolation Forest",Normalization="row-normalized");

On this data, the isolation forest does not perform as well as the one-class SVM. This poorer performance is likely because the training data contains only normal data, while the test data contains about 30% anomalous data. Therefore, the isolation forest model is a better choice when the proportion of anomalous data to normal data is similar for the training and test data.
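To see the overlap behind these results, you can compare the anomaly-score distributions with the score threshold that iforest learned; this visualization is an addition to the example:

% Compare anomaly-score distributions with the learned score threshold
figure
histogram(scoreTrainIF, Normalization="probability")
hold on
histogram(scoreTestIF, Normalization="probability")
xline(mdlIF.ScoreThreshold, "r-", "Threshold")
hold off
legend("Training scores (normal only)", "Test scores (mixed)")
xlabel("Anomaly Score"), ylabel("Probability")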

Detect Anomalies with LSTM Autoencoder Network

Autoencoders are a type of neural network that learns a compressed representation of unlabeled data. LSTM autoencoders are a variant of this network that can learn a compressed representation of sequence data. Train the LSTM autoencoder with only normal data, and then use the trained network to identify when a sequence that does not look like normal data is encountered.

featuresAfter = helperExtractLabeledData(featureTrain, ...
   "After");

Then, construct the biLSTM autoencoder network and set the training options. Each observation is a 1-by-12 row vector of features, so the network treats it as a 12-step sequence with one feature per step, which is why featureDimension is 1.

featureDimension = 1;

% Define biLSTM network layers
layers = [ sequenceInputLayer(featureDimension, 'Name', 'in')
   bilstmLayer(16, 'Name', 'bilstm1')
   reluLayer('Name', 'relu1')
   bilstmLayer(32, 'Name', 'bilstm2')
   reluLayer('Name', 'relu2')
   bilstmLayer(16, 'Name', 'bilstm3')
   reluLayer('Name', 'relu3')
   fullyConnectedLayer(featureDimension, 'Name', 'fc')
   regressionLayer('Name', 'out') ];

% Set Training Options
options = trainingOptions('adam', ...
   'Plots', 'training-progress', ...
   'MiniBatchSize', 500,...
   'MaxEpochs',200);

The MaxEpochs training option is set to 200. To achieve higher accuracy, you can increase this value; however, the network might then overfit the training data. Now train the model.

net = trainNetwork(featuresAfter, featuresAfter, layers, options);
Training on single CPU.
|========================================================================================|
|  Epoch  |  Iteration  |  Time Elapsed  |  Mini-batch  |  Mini-batch  |  Base Learning  |
|         |             |   (hh:mm:ss)   |     RMSE     |     Loss     |      Rate       |
|========================================================================================|
|       1 |           1 |       00:00:04 |         5.81 |         16.9 |          0.0010 |
|       3 |          50 |       00:00:16 |         5.43 |         14.8 |          0.0010 |
|       5 |         100 |       00:00:28 |         3.99 |          7.9 |          0.0010 |
|       8 |         150 |       00:00:39 |         4.27 |          9.1 |          0.0010 |
|      10 |         200 |       00:00:51 |         3.48 |          6.1 |          0.0010 |
|      13 |         250 |       00:01:03 |         3.97 |          7.9 |          0.0010 |
|      15 |         300 |       00:01:15 |         3.17 |          5.0 |          0.0010 |
|      18 |         350 |       00:01:27 |         3.72 |          6.9 |          0.0010 |
|      20 |         400 |       00:01:39 |         2.89 |          4.2 |          0.0010 |
|      23 |         450 |       00:01:51 |         3.49 |          6.1 |          0.0010 |
|      25 |         500 |       00:02:03 |         2.67 |          3.6 |          0.0010 |
|      28 |         550 |       00:02:15 |         3.31 |          5.5 |          0.0010 |
|      30 |         600 |       00:02:27 |         2.49 |          3.1 |          0.0010 |
|      33 |         650 |       00:02:39 |         3.14 |          4.9 |          0.0010 |
|      35 |         700 |       00:02:51 |         2.29 |          2.6 |          0.0010 |
|      38 |         750 |       00:03:03 |         2.96 |          4.4 |          0.0010 |
|      40 |         800 |       00:03:16 |         2.12 |          2.2 |          0.0010 |
|      43 |         850 |       00:03:28 |         2.81 |          4.0 |          0.0010 |
|      45 |         900 |       00:03:40 |         1.98 |          2.0 |          0.0010 |
|      48 |         950 |       00:03:51 |         2.71 |          3.7 |          0.0010 |
|      50 |        1000 |       00:04:04 |         1.89 |          1.8 |          0.0010 |
|      53 |        1050 |       00:04:16 |         2.62 |          3.4 |          0.0010 |
|      55 |        1100 |       00:04:28 |         1.81 |          1.6 |          0.0010 |
|      58 |        1150 |       00:04:41 |         2.54 |          3.2 |          0.0010 |
|      60 |        1200 |       00:04:53 |         1.74 |          1.5 |          0.0010 |
|      63 |        1250 |       00:05:06 |         2.47 |          3.0 |          0.0010 |
|      65 |        1300 |       00:05:18 |         1.66 |          1.4 |          0.0010 |
|      68 |        1350 |       00:05:31 |         2.38 |          2.8 |          0.0010 |
|      70 |        1400 |       00:05:43 |         1.53 |          1.2 |          0.0010 |
|      73 |        1450 |       00:05:55 |         2.28 |          2.6 |          0.0010 |
|      75 |        1500 |       00:06:08 |         1.44 |          1.0 |          0.0010 |
|      78 |        1550 |       00:06:20 |         2.20 |          2.4 |          0.0010 |
|      80 |        1600 |       00:06:33 |         1.36 |          0.9 |          0.0010 |
|      83 |        1650 |       00:06:46 |         2.14 |          2.3 |          0.0010 |
|      85 |        1700 |       00:07:00 |         1.29 |          0.8 |          0.0010 |
|      88 |        1750 |       00:07:14 |         2.07 |          2.1 |          0.0010 |
|      90 |        1800 |       00:07:28 |         1.22 |          0.8 |          0.0010 |
|      93 |        1850 |       00:07:42 |         2.01 |          2.0 |          0.0010 |
|      95 |        1900 |       00:07:56 |         1.16 |          0.7 |          0.0010 |
|      98 |        1950 |       00:08:10 |         1.95 |          1.9 |          0.0010 |
|     100 |        2000 |       00:08:24 |         1.10 |          0.6 |          0.0010 |
|     103 |        2050 |       00:08:38 |         1.90 |          1.8 |          0.0010 |
|     105 |        2100 |       00:08:52 |         1.05 |          0.5 |          0.0010 |
|     108 |        2150 |       00:09:06 |         1.86 |          1.7 |          0.0010 |
|     110 |        2200 |       00:09:20 |         1.00 |          0.5 |          0.0010 |
|     113 |        2250 |       00:09:34 |         1.82 |          1.6 |          0.0010 |
|     115 |        2300 |       00:09:48 |         0.96 |          0.5 |          0.0010 |
|     118 |        2350 |       00:10:02 |         1.78 |          1.6 |          0.0010 |
|     120 |        2400 |       00:10:16 |         0.91 |          0.4 |          0.0010 |
|     123 |        2450 |       00:10:30 |         1.74 |          1.5 |          0.0010 |
|     125 |        2500 |       00:10:44 |         0.88 |          0.4 |          0.0010 |
|     128 |        2550 |       00:10:58 |         1.71 |          1.5 |          0.0010 |
|     130 |        2600 |       00:11:12 |         0.84 |          0.4 |          0.0010 |
|     133 |        2650 |       00:11:26 |         1.68 |          1.4 |          0.0010 |
|     135 |        2700 |       00:11:40 |         0.81 |          0.3 |          0.0010 |
|     138 |        2750 |       00:11:54 |         1.65 |          1.4 |          0.0010 |
|     140 |        2800 |       00:12:08 |         0.78 |          0.3 |          0.0010 |
|     143 |        2850 |       00:12:22 |         1.62 |          1.3 |          0.0010 |
|     145 |        2900 |       00:12:36 |         0.76 |          0.3 |          0.0010 |
|     148 |        2950 |       00:12:49 |         1.59 |          1.3 |          0.0010 |
|     150 |        3000 |       00:13:02 |         0.73 |          0.3 |          0.0010 |
|     153 |        3050 |       00:13:15 |         1.57 |          1.2 |          0.0010 |
|     155 |        3100 |       00:13:29 |         0.71 |          0.3 |          0.0010 |
|     158 |        3150 |       00:13:42 |         1.55 |          1.2 |          0.0010 |
|     160 |        3200 |       00:13:55 |         0.68 |          0.2 |          0.0010 |
|     163 |        3250 |       00:14:09 |         1.52 |          1.2 |          0.0010 |
|     165 |        3300 |       00:14:22 |         0.66 |          0.2 |          0.0010 |
|     168 |        3350 |       00:14:36 |         1.50 |          1.1 |          0.0010 |
|     170 |        3400 |       00:14:50 |         0.64 |          0.2 |          0.0010 |
|     173 |        3450 |       00:15:03 |         1.48 |          1.1 |          0.0010 |
|     175 |        3500 |       00:15:16 |         0.62 |          0.2 |          0.0010 |
|     178 |        3550 |       00:15:30 |         1.46 |          1.1 |          0.0010 |
|     180 |        3600 |       00:15:43 |         0.60 |          0.2 |          0.0010 |
|     183 |        3650 |       00:15:56 |         1.44 |          1.0 |          0.0010 |
|     185 |        3700 |       00:16:10 |         0.59 |          0.2 |          0.0010 |
|     188 |        3750 |       00:16:23 |         1.42 |          1.0 |          0.0010 |
|     190 |        3800 |       00:16:37 |         0.57 |          0.2 |          0.0010 |
|     193 |        3850 |       00:16:50 |         1.40 |          1.0 |          0.0010 |
|     195 |        3900 |       00:17:04 |         0.55 |          0.2 |          0.0010 |
|     198 |        3950 |       00:17:17 |         1.38 |          1.0 |          0.0010 |
|     200 |        4000 |       00:17:31 |         0.54 |          0.1 |          0.0010 |
|========================================================================================|
Training finished: Reached final iteration.

Visualize Model Behavior and Error on Validation Data

Extract and visualize one sample each from the anomalous and normal conditions. The following plots compare the reconstruction error for each of the 12 features (indicated on the x-axis). In this sample, features 10, 11, and 12 do not reconstruct well for the anomalous input and thus have high error values. The model uses this error value to identify an anomaly.

testNormal = {featureTest(1200, 2:end).Variables};
testAnomaly = {featureTest(200, 2:end).Variables};

% Predict decoded signal for both
decodedNormal = predict(net,testNormal);
decodedAnomaly = predict(net,testAnomaly);

% Visualize
helperVisualizeModelBehavior(testNormal, testAnomaly, decodedNormal, decodedAnomaly)

Compute the reconstruction error for the normal and anomalous test data. The figure shows that the reconstruction error for the anomalous data is significantly higher than for the normal data. This result makes sense: because the autoencoder is trained on normal data, it reconstructs similar signals more accurately.

% Extract data Before maintenance
XTestBefore = helperExtractLabeledData(featureTest, "Before");

% Predict output before maintenance and calculate error
yHatBefore = predict(net, XTestBefore);
errorBefore = helperCalculateError(XTestBefore, yHatBefore);

% Extract data after maintenance
XTestAfter = helperExtractLabeledData(featureTest, "After");

% Predict output after maintenance and calculate error
yHatAfter = predict(net, XTestAfter);
errorAfter = helperCalculateError(XTestAfter, yHatAfter);

helperVisualizeError(errorBefore, errorAfter);

Identify Anomalies

Calculate the error on the full validation data.

XTestAll = helperExtractLabeledData(featureTest, "All");

yHatAll = predict(net, XTestAll);
errorAll = helperCalculateError(XTestAll, yHatAll);

Define an anomaly as a point whose reconstruction error is larger than 0.5 times the mean error across all observations. This threshold was determined through experimentation and can be changed as required.

thresh = 0.5;
anomalies = errorAll > thresh*mean(errorAll);

helperVisualizeAnomalies(anomalies, errorAll, featureTest);
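The multiplier of 0.5 came from experimentation. One way to choose it more systematically is to sweep candidate multipliers and track accuracy against the known labels; this sweep is an addition to the example, and in practice you would run it on a separate validation split rather than the test set:

% Sweep threshold multipliers and evaluate accuracy (illustrative addition)
trueIsAnomaly = featureTest.label == "Before";   % ground truth for the test set
multipliers = 0.25:0.25:2;
accSweep = zeros(size(multipliers));
for k = 1:numel(multipliers)
   predIsAnomaly = errorAll > multipliers(k)*mean(errorAll);
   accSweep(k) = mean(predIsAnomaly == trueIsAnomaly);
end
figure
plot(multipliers, accSweep, "-o"), grid on
xlabel("Threshold Multiplier"), ylabel("Accuracy")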

Supporting Functions

function E = helperCalculateError(X, Y)
% HELPERCALCULATEERROR calculates the root-sum-of-squares reconstruction
% error between each pair of sequences in the cell arrays X and Y
E = zeros(length(X),1);
for i = 1:length(X)
   E(i) = sqrt(sum((Y{i} - X{i}).^2));
end

end

function helperVisualizeError(errorBefore, errorAfter)
% HELPERVISUALIZEERROR creates a plot to visualize the errors on detecting
% before and after conditions
figure("Color", "W")
tiledlayout("flow")

nexttile
plot(1:length(errorBefore), errorBefore, 'LineWidth',1.5), grid on
title(["Before Maintenance", ...
   sprintf("Mean Error: %.2f\n", mean(errorBefore))])
xlabel("Observations")
ylabel("Reconstruction Error")
ylim([0 15])

nexttile
plot(1:length(errorAfter), errorAfter, 'LineWidth',1.5), grid on,
title(["After Maintenance", ...
   sprintf("Mean Error: %.2f\n", mean(errorAfter))])
xlabel("Observations")
ylabel("Reconstruction Error")
ylim([0 15])

end

function helperVisualizeAnomalies(anomalies, errorAll, featureTest)
% HELPERVISUALIZEANOMALIES creates a plot of the detected anomalies
anomalyIdx = find(anomalies);
anomalyErr = errorAll(anomalies);

predAE = categorical(anomalies, [1, 0], ["Anomaly", "Normal"]);
trueAE = renamecats(featureTest.label,["Before","After"],["Anomaly","Normal"]);

acc = numel(find(trueAE == predAE))/numel(predAE)*100;
figure;
t = tiledlayout("flow");
title(t, "Test Accuracy: " + round(mean(acc),2) + "%");
nexttile
hold on
plot(errorAll)
plot(anomalyIdx, anomalyErr, 'x')
hold off
ylabel("Reconstruction Error")
xlabel("Observation")
legend("Error", "Candidate Anomaly")

nexttile
confusionchart(trueAE,predAE)

end

function helperVisualizeModelBehavior(normalData, abnormalData, decodedNorm, decodedAbNorm)
%HELPERVISUALIZEMODELBEHAVIOR Visualize model behavior on sample validation data

figure("Color", "W")
tiledlayout("flow")

nexttile()
hold on
colororder('default')
yyaxis left
plot(normalData{:})
plot(decodedNorm{:},":","LineWidth",1.5)
hold off
title("Normal Input")
grid on
ylabel("Feature Value")
yyaxis right
stem(abs(normalData{:} - decodedNorm{:}))
ylim([0 2])
ylabel("Error")
legend(["Input", "Decoded","Error"],"Location","southwest")

nexttile()
hold on
yyaxis left
plot(abnormalData{:})
plot(decodedAbNorm{:},":","LineWidth",1.5)
hold off
title("Abnormal Input")
grid on
ylabel("Feature Value")
yyaxis right
stem(abs(abnormalData{:} - decodedAbNorm{:}))
ylim([0 2])
ylabel("Error")
legend(["Input", "Decoded","Error"],"Location","southwest")

end

function X = helperExtractLabeledData(featureTable, label)
%HELPEREXTRACTLABELEDDATA Extract data from before or after operating
%conditions and re-format to support input to autoencoder network

% Select data with the specified label, or all data
if label == "All"
   Xtemp = featureTable(:, 2:end).Variables;
else
   tF = featureTable.label == label;
   Xtemp = featureTable(tF, 2:end).Variables;
end

% Arrange data into cells
X = cell(length(Xtemp),1);
for i = 1:length(Xtemp)
   X{i,:} = Xtemp(i,:);
end

end