Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values
Abstract
The detection of anomalies in multivariate time series data is crucial for various practical applications, including smart power grids, traffic flow forecasting, and industrial process control. However, real-world time series data is usually not well-structured, posing significant challenges to existing approaches: (1) The existence of missing values in multivariate time series data along variable and time dimensions hinders the effective modeling of interwoven spatial and temporal dependencies, resulting in important patterns being overlooked during model training; (2) Anomaly scoring with irregularly-sampled observations is less explored, making it difficult to use existing detectors for multivariate series without fully-observed values. In this work, we introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and anomaly scorer to tackle the aforementioned challenges in detecting anomalies in irregularly-sampled multivariate time series. Our approach comprises two main components. First, we propose a graph spatiotemporal process based on neural controlled differential equations. This process enables effective modeling of multivariate time series from both spatial and temporal perspectives, even when the data contains missing values. Second, we present a novel distribution-based anomaly scoring mechanism that alleviates the reliance on complete, uniform observations. By analyzing the predictions of the graph spatiotemporal process, our approach allows anomalies to be easily detected. Our experimental results show that GST-Pro can effectively detect anomalies in time series data and outperforms state-of-the-art methods, regardless of whether there are missing values present in the data. Our code is available at: https://github.com/huankoh/GST-Pro
keywords: Time Series, Anomaly Forecasting, Graph Neural Networks

Affiliations:
1. Department of Computer Science and Information Technology, La Trobe University, Australia
2. Department of Data Science and AI, Faculty of IT, Monash University, Australia
3. School of Information and Communication Technology, Griffith University, Australia
4. Zhejiang University, China
1 Introduction
Swift technological advancement has brought an explosive surge in the pervasiveness and volume of time series data. Ranging from health care lee2017big and critical infrastructures su2019robust ; li2021multivariate ; mathur2016swat to spacecraft hundman2018detecting , various industries now generate data from numerous devices or sensors across time, forming complex multivariate time series with hundreds to thousands of variables. This surge has inevitably led us to rely heavily on the automatic detection of anomalous events in multivariate time series data to identify, avert, and respond to catastrophic events before and as they occur. Ideally, the detection can be done through algorithms that can be implemented at scale and are robust to the noise caused by imperfections of practical real-world systems. Consequently, there has been a strong demand for the development of robust multivariate time series anomaly detection models garg2021evaluation ; schmidl2022anomaly ; darban2022deep .
Although there is an abundance of multivariate time series data that exhibit normal patterns, anomaly events are typically associated with rare events, so collecting and labeling anomalies are often a daunting task. As a result, unsupervised anomaly detection techniques have been widely explored as a practical solution to the challenging anomaly detection problem. Amongst the proposed techniques, the classical methods include statistical unsupervised models such as ARIMA/VAR yu2016improved , distance-based keogh2005hot or distributional ting2021isolation approaches. However, these methods may have limitations in capturing the non-linear spatial and temporal relationships present in multivariate time series data garg2021evaluation .
More recently, with the flourishing of deep learning (DL), significant advances have been made. Early work from Hundman et al. hundman2018detecting proposed a Long Short-Term Memory (LSTM) network to detect anomalies based on forecasting errors, and Park et al. park2018multimodal proposed a reconstruction-based LSTM framework that relies on reconstruction errors. Nonetheless, LSTM frameworks su2019robust lack explicit modeling of pairwise inter-dependence among variable pairs, which limits their ability to detect complex anomaly events in high-dimensional multivariate time series data zhao2020multivariate . In response, MTAD-GAT zhao2020multivariate and GDN deng2021graph use spatiotemporal graph neural networks (STGNNs) to model spatial and temporal data correlations. GDN, in particular, uses a graph learning layer to learn pairwise correlations, negating the need for a predefined graph, which is often unavailable in multivariate time series datasets. To date, STGNNs remain the state-of-the-art anomaly detection models for multivariate time series han2022learning .
Despite significant advancements in multivariate time series anomaly detection, existing deep anomaly detection techniques rely on well-structured, regular time series data that is sampled at a uniform frequency. However, real-world multivariate time series data often has random missing values little2019statistical or non-uniform observations due to irregular sampling frequencies. Random missing values in multivariate time series data are usually due to sensor limitations or transmission interruption, while non-random missing values can be caused by multi-modal sources and process heterogeneity mitra2023learning . Even for small-scale systems, random missing values are almost unavoidable. Hence, at the very least, it is critical to develop robust techniques that can accurately detect anomalous events despite such imperfections. Nonetheless, the irregular multivariate time series anomaly detection problem has not been a well-investigated setting thus far.
To address missing data points, straightforward solutions such as zero padding, interpolation and imputation algorithms beretta2016nearest , and linear predictors durbin2012time can be utilized. This way, the missing value problem in the multivariate time series can simply be resolved as a pre-processing step. However, as will be demonstrated in our experimental results, using a modular impute-then-detect pipeline can lead to significantly weakened anomaly detection performance. This necessitates an alternative approach to detect anomalous events in irregular multivariate time series.
The difficulties of detecting anomaly events in multivariate time series with missing values are illustrated in Figure 1. Following the standard unsupervised anomaly detection setting, the first 6 observations of normal data are used for training models, while the testing data contains normal (first and last) and anomalous (four middle) timestamps that are used to evaluate models in detecting the anomalies.
Firstly, the A1 series has no anomalous event but intermittently records short bursts of values. While this behavior can easily be learned in a regular series setting, an irregular multivariate time series may not record the short bursts during normal training periods, and models may consequently flag these short-burst events as anomalous during testing. In short, under the missing-values setting, high-quality training data for unsupervised learning is sparse.
Secondly, as shown in the right plot of B1, the real observed values may be inaccessible during the anomalous period, making it hard to detect anomalies in real-time streaming data. Hence, real observed values cannot be safely relied upon to detect anomaly events in real time.
Both A1 and B1 show time series that record values which move independently. However, in a multivariate time series, the variables are intricately related. For example, C1, C2, and C3 are interrelated variables where C1 has a strong negative correlation with C2, while C3 has a strong positive correlation with C2. A deviation from these relationships is thus an anomaly event. In the right plot of Figure 1, we see that C1 recorded values conflicting with C2 only for the first two timepoints, but not the subsequent anomalous values. In contrast, C3 records the anomalous values in the later periods rather than the early periods. As it is very unlikely that all channels have missing values concurrently under a random missing scenario, we conjecture that if a model fully captures the inter-dependence between the variables, it can detect anomaly events well in irregular multivariate time series even if the data suffers from a high missing rate. Hence, it is crucial for an anomaly detection model to explicitly capture the complex pairwise relationships (i.e., the degree of spatial dependence) between the variables of a multivariate time series.
Based on the above observations, we summarize the challenges for irregular multivariate time series anomaly detection:
1. Sparsity in high-quality training data (Challenge 1): The presence of irregularity in multivariate time series can lead to important spatiotemporal patterns being omitted during the training phase of model development.
2. Anomaly scoring with irregular observations (Challenge 2): A model should enable real-time detection of anomaly events despite the inability to ensure complete access to observed values in a multivariate time series.
3. Spatial-temporal dependency modeling (Challenge 3): Multivariate time series analysis requires a deep understanding of spatial-temporal dependency; how to simultaneously capture spatial and temporal dependencies given the missing-values problem is the ultimate challenge for multivariate time series anomaly detection.
To address these challenges, we propose a novel prediction-based anomaly scorer that leverages our graph spatiotemporal processes to model multivariate time series, whether they contain missing values or not. Specifically, our approach involves imputing the missing values in each variable of the input multivariate time series to generate a set of continuous paths. We then design two neural controlled differential equation (NCDE) processes to model the input data from both spatial and temporal perspectives, addressing the first and third challenges mentioned above at once. By incorporating these processes, we are able to model any multivariate time series, regardless of whether it contains missing values. To address the second challenge, we propose a novel distribution-based anomaly scorer built on top of our time series model, providing two significant advantages: (1) It is based solely on model predictions and does not require comparisons with ground truths, avoiding issues arising from missing values when calculating real-time anomaly scores; (2) It relies solely on prediction statistics and does not contain trainable parameters, making it a plug-and-play module that can even be integrated with other time series models beyond our method. By combining the forecaster and anomaly scorer discussed above, our proposed GST-Pro method, as shown in Figure 2, can effectively detect anomalies in arbitrary real-world multivariate time series data in an online and unsupervised manner. The main contributions of this paper are as follows:
1. We propose Dynamic Graph Neural Controlled Differential Equations (DG-NCDE) to model multivariate time series, particularly those with missing values.
2. We propose a parameter-free anomaly detector for multivariate time series data, built on top of our forecasting model, that can detect anomalies in an online and unsupervised manner.
3. We conduct extensive experiments comparing GST-Pro with state-of-the-art baselines under various settings, demonstrating the superiority of our method.
2 Related Work
In this section, we review the related work on unsupervised time series anomaly detection and spatiotemporal graph neural networks.
2.1 Unsupervised Time Series Anomaly Detection
Prior literature on unsupervised time series anomaly detection can be broadly categorized as forecasting-based yu2016improved , reconstruction-based zhao2020multivariate , distance-based keogh2005hot and distribution-based ahmad2017unsupervised methods.
Forecasting-based methods fundamentally rely on forecasting errors to detect anomalies. In particular, after being optimized on normal training data, a model predicts the one-step-ahead forecast for the current timestamp. The forecast values are then compared to the observed values at the current timestamp to determine how anomalous it is. As such, many classical forecasting models such as ARIMA/VAR yu2016improved can be adapted for this purpose schmidl2022anomaly . Early DL works introduced recurrent neural networks (RNNs) hundman2018detecting and, lately, Transformers song2018attend ; chen2021learning to scale to high-dimensional data and model complex non-linear patterns garg2021evaluation .
Reconstruction-based methods detect anomalies based on reconstruction errors. Conceptually, this involves learning a representation of the normal training series and outputting a lossy reconstruction of the input. As the learned representation is optimized for normal data, a high reconstruction error is likely during anomalous periods. Classical reconstruction methods include PCA garg2021evaluation and AutoEncoders zhang2019deep . Similarly, numerous deep reconstruction models have been proposed to model the complex spatiotemporal dependencies of multivariate time series, including the AutoEncoder (AE) zhang2019deep , the Variational AutoEncoder (VAE) park2018multimodal ; su2019robust ; li2021multivariate , and normalizing flows dai2022graphaugmented .
Distance-based methods utilize specialized metrics to compare points or subsequences of a multivariate time series with each other, including kNN and the local outlier factor breunig2000lof . On the other hand, distributional approaches detect anomalies by judging the likelihood of an observation after fitting a distribution model to windowed points or subsequences of the time series ting2022new . With a few exceptions of distance-based approaches shen2020timeseries ; shin2020itad , DL architectures are mostly characterized by the reconstruction or forecasting approaches, as detailed in the most recent survey darban2022deep .
2.2 Spatial-Temporal Graph Neural Networks
GNNs zhang2022trustworthy have recently become the de facto models for analysing graph data such as social and academic networks zhang2023demystifying ; zhang2023interaction , drug discovery koh2023psichic ; nguyen2023gpcr ; zheng2023large and natural language koh2022empirical ; koh2022far . To handle dynamic data, GNNs have been exploited under the umbrella of spatial-temporal graph neural networks (STGNNs) seo2018structured ; jin2023survey . As STGNNs can explicitly model fine-grained spatial associations between the channels of a multivariate series, they are particularly well suited for scenarios where the underlying graph structure remains fixed but the node features dynamically update over time wu2020connecting ; zheng2023correlation . Early work from seo2018structured proposed a recurrent-based STGNN for forecasting multivariate time series by capturing temporal dependency using an RNN and spatial relations using a GNN. Since then, the STGNN field has flourished, with architectures proposed to tackle various time series applications including forecasting jin2022multivariate and classification duan2022multivariate . For multivariate time series anomaly detection, Deng and Hooi deng2021graph recently proposed GDN, which leverages an STGNN to make a one-step-ahead forecast and detects anomalies by computing the normalized forecasting errors. MTAD-GAT zhao2020multivariate utilizes a joint forecasting- and reconstruction-based STGNN to detect anomalies. FuSAGNet han2022learning extends MTAD-GAT with a sparsity-constrained joint optimization of the STGNN.
In spite of these significant advancements, current deep detection techniques require real observations to be completely available at every timestamp, which is not possible under irregular time series settings. In our work, we leverage an STGNN with DG-NCDE spatial and temporal modules to handle irregular time series, and propose a simple distributional approach over the raw forecasts of GST-Pro, rather than forecasting errors, to achieve robust anomaly detection in real time.
3 Problem Formulation
A multivariate time series with successive, equally-spaced observations can be defined as $\mathbf{X}_t = \{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_t\}$, where $t$ represents the length of the series at the current timestamp and each observation $\mathbf{x}_\tau \in \mathbb{R}^{N}$ is composed of $N$ univariate channels. In our work, we focus on real-time, multi-modal sensing data that consists of $N$ sensor nodes over $t$ timestamps. Hence, each univariate channel is also referred to as a "sensor" or "node" interchangeably.
For regular unsupervised multivariate time series anomaly detection, we are required to learn an anomaly classifier or scorer that outputs an anomaly score for each timestamp, clearly differentiating anomalous observations from non-anomalous observations. In particular, the outputs can be conceptualized as an indicator that informs a system operator whether the timestamp is anomalous or not, $\hat{y}_t \in \{0, 1\}$, where $\hat{y}_t = 1$ denotes an anomalous observation and $\hat{y}_t = 0$ a non-anomalous one. The ground-truth label that indicates whether a timestamp is anomalous is represented as $y_t$, with $y_t = 1$ if the observation is anomalous. As the detection of anomalies takes place while the data is streamed in real time, models can only rely on past observations to make a decision at every timestamp and cannot reverse their previous decisions.
For irregular unsupervised multivariate time series anomaly detection, we aim to achieve the same goal but in the presence of missing values in the data. In this study, we consider the missing-at-random scenario rubin1976inference . The missing-values issue is present in both the training data used for unsupervised training and the testing data used for evaluating model detection of anomaly events. Such irregularities are common in practical real-world multivariate time series, and are thus the focus of this work.
4 Methodology
In this section, we explain the architecture of our proposed method, GST-Pro, including two key components: the DG-NCDE-based forecasting head (Section 4.2), and the Gaussian scoring-based anomaly detector (Section 4.3). We begin with the overall architecture design in Section 4.1.
4.1 Overall Architecture
Our approach, GST-Pro, as illustrated in Fig. 2, detects anomalies in multivariate time series with missing values. Initially, GST-Pro imputes missing values to form continuous paths. Then, taking the imputed set of continuous paths, GST-Pro employs spatial and temporal neural controlled differential equation processes to optimize the forecasting module in making a one-step-ahead forecast. We posit that by optimizing these processes for forecasting, the model can effectively discern non-anomalous spatiotemporal dependencies from normal training data.
When there are no anomalies, the module is expected to produce normal forecasts that resemble the values in the non-anomalous training data. Conversely, during periods of anomalies, the module will generate outputs that deviate significantly from the normal forecasts.
Building on this concept, we propose an anomaly scorer that evaluates the abnormality of the forecast values by estimating the probability of anomalies at each timestamp. Hence, the irregularly observed values are used exclusively for one-step-ahead forecasting, while the anomaly score is computed by the scorer solely from historical and current forecasts.
4.2 Dynamic Graph Neural Controlled Differential Equations
A multivariate time series is typically conceptualized as a discrete-time dynamic graph composed of regularly-sampled graph snapshots, denoted as $\mathcal{G} = \{\mathcal{G}_1, \dots, \mathcal{G}_t\}$ with $\mathcal{G}_\tau = (\mathcal{V}, \mathcal{E}, X_\tau)$. Here, $\mathcal{V}$ and $\mathcal{E}$ define the predetermined graph structure for a sequence of observations, characterizing the underlying connectivity between variables (sensors), while $X_\tau$ refers to the node features of the snapshot at time $\tau$. However, in practice, missing values can be present in both the variable and time dimensions due to data sampling or sensor failures, posing a significant challenge for utilizing off-the-shelf dynamic graph neural networks, such as MTGODE jin2022multivariate , as the model for embedding multivariate time series. In this work, we follow choi2022graph and address this challenge by proposing the Dynamic Graph Neural Controlled Differential Equations (DG-NCDE) to model multivariate time series, regardless of the presence of missing values in the data. The formulation of Neural Controlled Differential Equations (NCDEs) kidger2020neural is shown in Eq. 1, which is built on the basis of Neural Ordinary Differential Equations (NODEs) chen2018neural ; the end goal is to learn a CDE function $f$, parameterized by $\theta_f$, from the data.
$$\mathbf{z}(t) = \mathbf{z}(t_0) + \int_{t_0}^{t} f(\mathbf{z}(s); \theta_f)\, \frac{dX(s)}{ds}\, ds \tag{1}$$
In the above equation, the evolution of the hidden state $\mathbf{z}(t)$ is controlled over time based on $X$, which denotes a continuous path derived from the discrete observations. In this regard, NCDEs can be viewed as a continuous version of Recurrent Neural Networks (RNNs) kidger2020neural and demonstrate superior performance on many time series benchmarks choi2022graph . In addition to the effectiveness of NCDEs, a notable advantage of Eq. 1 is its flexibility in handling input data, as it does not impose strong constraints such as the absence of missing values. Though NCDEs shed light on modeling real-world univariate time series, it remains unclear how to model (discrete-time) dynamic graphs with NCDEs. To address this gap, and inspired by sankar2020dysat and choi2022graph , we propose two different processes to model the entangled spatial and temporal dependencies in the input data (detailed in subsections 4.2.1 and 4.2.2). In a nutshell, we model a multivariate time series by solving the following equation, which combines the spatial and temporal processes together.
$$\mathbf{z}(T) = \mathbf{z}(0) + \int_{0}^{T} g(\mathbf{z}(t); \theta_g)\, \frac{d\mathbf{h}(t)}{dt}\, dt, \qquad \mathbf{h}(t) = \mathbf{h}(0) + \int_{0}^{t} f(\mathbf{h}(s); \theta_f)\, \frac{dX(s)}{ds}\, ds \tag{2}$$
Here, $f$ and $g$ denote the spatial and temporal NCDE functions, each parameterized by a distinct parameter set ($\theta_f$ and $\theta_g$), designed to model the inherent spatial and temporal dynamics of the input data, respectively. Our formulation presented in Eq. 2 can be conceptualized as a continuous-time approach to modeling a discrete-time dynamic graph, similar to the method proposed in jin2022multivariate . However, the distinguishing feature of DG-NCDE is its capability to model multivariate time series data whether or not missing values are present. To evaluate DG-NCDE and calculate the gradients of our method, the continuous path $X$ has to be twice continuously differentiable, the same as in NCDEs kidger2020neural . In practice, we first process the original series into sliding window inputs of length $T$, and use natural cubic spline interpolation mckinley1998cubic to generate the continuous path for each window as a pre-processing stage based on the available discrete observations. The sliding window input includes the observations $\{\mathbf{x}_{t-T+1}, \dots, \mathbf{x}_{t}\}$, from which the forecasting module makes a one-step-ahead forecast. In the subsequent two subsections, we elucidate Eq. 2 through the lenses of the spatial and temporal processes, respectively.
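As a concrete illustration of this pre-processing stage, the sketch below builds per-channel natural cubic spline paths from a sliding window whose missing entries are marked as NaN. The helper name `window_to_paths` is hypothetical, and we assume each channel retains at least two observations per window:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def window_to_paths(window, t=None):
    """Turn a (T, N) sliding window with NaN-marked missing values into
    per-channel natural cubic spline paths, as in the pre-processing stage
    described above. Returns one callable path per channel."""
    length, n_channels = window.shape
    if t is None:
        t = np.arange(length, dtype=float)
    paths = []
    for i in range(n_channels):
        observed = ~np.isnan(window[:, i])
        # Fit the spline only on the available discrete observations.
        paths.append(CubicSpline(t[observed], window[observed, i],
                                 bc_type="natural"))
    return paths

# Example: a 10-step window over 3 channels with a few dropped observations.
rng = np.random.default_rng(0)
window = rng.normal(size=(10, 3))
window[2, 0] = window[5, 1] = window[7, 2] = np.nan
paths = window_to_paths(window)
# Evaluate the continuous paths on a dense grid for the NCDE solver.
dense = np.stack([p(np.linspace(0.0, 9.0, 91)) for p in paths], axis=1)
```

Natural boundary conditions keep the path twice continuously differentiable, matching the requirement on $X$ stated above.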
4.2.1 Spatial Process
We first model the hidden state trajectory of each variable between graph snapshots from the perspective of message passing, controlled by the continuous path $X$. Formally, we define the spatial NCDE as follows, with the bounds $0$ and $T$:
$$\mathbf{h}(t) = \mathbf{h}(0) + \int_{0}^{t} f(\mathbf{h}(s); \theta_f)\, \frac{dX(s)}{ds}\, ds \tag{3}$$
In terms of the formulation of the spatial NCDE function $f$, we approximate the graph convolution with the first-order Chebyshev polynomial, as demonstrated in kipf2016semi .
$$f(\mathbf{h}(t); \theta_f) = \mathrm{FC}_2\Big(\phi\big(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}\, \mathrm{FC}_1(\mathbf{h}(t))\, W\big)\Big) \tag{4}$$
where $\tilde{A} = A + I$, $\phi$ denotes the ReLU activation, and $\tilde{D}$ denotes the diagonal degree matrix of $\tilde{A}$. For simplicity, we let $\theta_f$ denote the set of trainable parameters, e.g., $W$ and the parameters in the fully-connected layers $\mathrm{FC}_1$ and $\mathrm{FC}_2$, in the above spatial NCDE function. In the case where a predefined graph adjacency matrix is unavailable, we learn it end-to-end with the entire model choi2022graph , i.e., $A = \mathrm{softmax}(\mathrm{ReLU}(E E^{\top}))$, where $E$ denotes a trainable node embedding matrix.
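A minimal sketch of this graph-learning and normalization step is given below, assuming the AGCRN-style form $A = \mathrm{softmax}(\mathrm{ReLU}(EE^{\top}))$ adopted in choi2022graph; the helper name and the use of a random (untrained) embedding matrix are for illustration only, since $E$ is learned end-to-end in practice:

```python
import numpy as np

def learned_normalized_adjacency(E):
    """Build the self-learned graph described above and return the
    symmetrically normalized matrix D^{-1/2} (A + I) D^{-1/2} used by the
    first-order Chebyshev approximation of the graph convolution."""
    logits = np.maximum(E @ E.T, 0.0)                     # ReLU(E E^T)
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    A = exp / exp.sum(axis=1, keepdims=True)              # row-wise softmax
    A_tilde = A + np.eye(A.shape[0])                      # add self-loops
    d = A_tilde.sum(axis=1)                               # degree vector
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Illustrative node embedding matrix for 5 sensors, embedding size 8.
rng = np.random.default_rng(1)
A_hat = learned_normalized_adjacency(rng.normal(size=(5, 8)))
```

In a full model this matrix would be recomputed from the current value of $E$ at every training step.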
4.2.2 Temporal Process
To learn temporal dependencies, we introduce another process, known as the temporal NCDE, that explicitly models temporal patterns. Formally, we define this process as follows, with the same bounds as in Eq. 3.
$$\mathbf{z}(t) = \mathbf{z}(0) + \int_{0}^{t} g(\mathbf{z}(s); \theta_g)\, \frac{d\mathbf{h}(s)}{ds}\, ds \tag{5}$$
In the above formulation, $g$ is the temporal NCDE function, where the evolution of the hidden trajectories across time is controlled by the continuous path $\mathbf{h}$ generated by the spatial process. There exists a wide range of possible implementations of $g$. For this study, we have chosen the method presented in choi2022graph , which involves modeling each trajectory, i.e., each column of $\mathbf{z}(t)$, with individual fully-connected layers.
$$g(\mathbf{z}(t); \theta_g) = \phi\big(\mathrm{FC}_1(\mathbf{z}_1(t)) \,\Vert\, \mathrm{FC}_2(\mathbf{z}_2(t)) \,\Vert\, \cdots \,\Vert\, \mathrm{FC}_N(\mathbf{z}_N(t))\big) \tag{6}$$
where $\phi$ and $\Vert$ denote the ReLU activation and the operation of concatenation, respectively. Similar to Eq. 3 and for simplicity, we use $\theta_g$ to represent the set of trainable parameters in our temporal NCDE function. Intriguingly, with the configuration illustrated in Eq. 6, Eq. 5 is transformed into a continuous RNN, which effectively captures the inherent temporal dependencies.
Our formulation of DG-NCDE, as defined in Eq. 2, combines the spatial and temporal processes to model a given multivariate time series. This framework allows for the forecasting of future data points by passing the learned time series representations through an additional fully-connected layer, acting as the downstream forecaster. With the forecast output for each timestamp denoted as $\hat{\mathbf{x}}_t$, we provide a detailed explanation of our model training process in Section 4.4.
4.3 Anomaly Scoring
To design a robust anomaly scorer that handles time series with missing values, GST-Pro assesses the abnormality at each timestamp without the need to access real observed values at the current timestamp. Specifically, at the current timestamp $t$, the only required inputs for the anomaly scorer of GST-Pro are the current and historical forecasts, $\{\hat{\mathbf{x}}_1, \dots, \hat{\mathbf{x}}_t\}$, from the forecasting module.
For each channel $i$, we first compute the anomaly likelihood, $a_t^{(i)}$, of the current forecast as the negative log-likelihood of the one-step-ahead forecast value after fitting a rolling Gaussian distribution on past and current forecast values:
$$a_t^{(i)} = -\log p\big(\hat{x}_t^{(i)}\big) = \frac{\big(\hat{x}_t^{(i)} - \mu_t^{(i)}\big)^2}{2\big(\sigma_t^{(i)}\big)^2} + \log\big(\sigma_t^{(i)}\sqrt{2\pi}\big) \tag{7}$$
where $\mu_t^{(i)}$ and $\sigma_t^{(i)}$ represent the mean and standard deviation parameters of our rolling Gaussian distribution:
$$\mu_t^{(i)} = \frac{1}{k}\sum_{j=t-k+1}^{t} \hat{x}_j^{(i)}, \qquad \sigma_t^{(i)} = \sqrt{\frac{1}{k}\sum_{j=t-k+1}^{t}\big(\hat{x}_j^{(i)} - \mu_t^{(i)}\big)^2} \tag{8}$$
with $k$ being the anomaly window size. Following garg2021evaluation , we prepend the last $k$ values from the training data to calculate the Gaussian parameters, $\mu_t^{(i)}$ and $\sigma_t^{(i)}$, for $t < k$.
Intuitively, this gives us the anomaly likelihood at the current timestamp by determining how anomalous the current forecast is compared to the historical forecasts of GST-Pro. Hence, points at the tail of the distribution have high anomaly scores. To compute the final anomaly score, $a_t$, for each timestamp, we linearly aggregate the anomaly likelihoods across channels:
$$a_t = \frac{1}{N}\sum_{i=1}^{N} a_t^{(i)} \tag{9}$$
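The scoring mechanism described above can be sketched as follows; the function name is hypothetical, and we assume the mean is used for the linear channel aggregation and that the prepended training forecasts are already part of the input array:

```python
import numpy as np

def gaussian_anomaly_scores(forecasts, k, eps=1e-8):
    """Parameter-free anomaly scorer: per channel, fit a rolling Gaussian
    on the last k forecasts and score the current forecast by its negative
    log-likelihood, then average over channels. `forecasts` has shape
    (T, N); scores are defined from index k - 1 onwards."""
    T, N = forecasts.shape
    scores = np.full(T, np.nan)
    for t in range(k - 1, T):
        window = forecasts[t - k + 1 : t + 1]          # rolling window
        mu = window.mean(axis=0)
        sigma = window.std(axis=0) + eps               # avoid division by zero
        x = forecasts[t]
        nll = 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))
        scores[t] = nll.mean()                         # aggregate channels
    return scores

# Illustration: inject one anomalous forecast into otherwise normal output.
rng = np.random.default_rng(2)
f = rng.normal(size=(200, 4))
f[150] += 8.0
s = gaussian_anomaly_scores(f, k=50)
```

Because the scorer consumes only forecasts, it runs unchanged whether or not the underlying observations were missing.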
The anomaly scorer of GST-Pro is similar to the Gaussian scorers in ahmad2017unsupervised ; garg2021evaluation , but differs in that we only take the forecast outputs rather than the reconstruction or forecasting error. In fact, state-of-the-art approaches su2019robust ; deng2021graph ; garg2021evaluation primarily rely on observed values to compute anomaly scores in the form of reconstruction or forecasting errors. While forecasting errors allow simple detection of anomaly events based on the deviation between predicted and real values, the real observations cannot be relied upon safely under an irregular time series scenario, as they are constantly missing and inaccessible.
In contrast, our approach relies on the assumption that GST-Pro predicts values that closely resemble normal forecasts during non-anomalous periods, but generates outputs that degenerate and deviate from normal forecasting outputs during anomalous periods. In other words, as long as there is spatial and/or temporal abnormality in the signals of the input sliding window, GST-Pro should generate forecasts that are also anomalous. Conversely, it should generate forecast values similar to the forecast outputs made for non-anomalous training data if there are no anomalous signals in the input.
This approach frees us from making the assumptions that would be required to accurately impute the missing values. Rather, we place importance on the forecasting module of GST-Pro learning the normality of spatial and temporal dependencies from training data. As demonstrated in the next section, this assumption is not only valid but desirable, as we observe only a minor performance drop even under high-missing-rate scenarios.
4.4 Model Training
While our anomaly detector does not contain trainable parameters, unsupervised training is required for the proposed DG-NCDE forecasting module. To achieve this, we follow the methodology outlined in choi2022graph by constructing the augmented ODE below instead of individually implementing Eqs. 3 and 5:
$$\frac{d}{dt}\begin{bmatrix}\mathbf{h}(t)\\ \mathbf{z}(t)\end{bmatrix} = \begin{bmatrix} f(\mathbf{h}(t); \theta_f)\, \dfrac{dX(t)}{dt} \\ g(\mathbf{z}(t); \theta_g)\, \dfrac{d\mathbf{h}(t)}{dt} \end{bmatrix} \tag{10}$$
where $\mathbf{h}(0)$ and $\mathbf{z}(0)$ are the initial values of the two NCDEs. We use $\mathbf{z}(T)$ to conduct the one-step-ahead forecast: $\hat{\mathbf{x}}_{t+1} = \mathrm{FC}(\mathbf{z}(T)) \in \mathbb{R}^{N}$, where $N$ is the number of channels in the multivariate time series. Afterwards, we optimize the entire network using the loss function below:
$$\mathcal{L} = \frac{\sum_{i=1}^{N} m_{t}^{(i)}\big(x_{t}^{(i)} - \hat{x}_{t}^{(i)}\big)^2}{\sum_{i=1}^{N} m_{t}^{(i)}} \tag{11}$$
where $x_{t}^{(i)}$ and $\hat{x}_{t}^{(i)}$ are the real and predicted values for node $i$, respectively. The mask $m_{t}^{(i)}$ denotes whether the real value is observed at timestamp $t$, where $m_{t}^{(i)} = 1$ if it is observed and $m_{t}^{(i)} = 0$ if it is missing. The masked training approach prevents GST-Pro from being fitted on potentially noisy imputed values in the training data, and forces GST-Pro to learn the multivariate time series representation purely from what is available in the non-missing training observations. Lastly, we can use different ODE solvers to solve the augmented ODE, including the explicit Euler method, the 4th-order Runge-Kutta method, and the Dormand-Prince method chen2018neural .
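The masked objective above can be sketched as a short function (name hypothetical), where squared errors on imputed entries are zeroed out by the mask so that only truly observed values drive the gradient:

```python
import numpy as np

def masked_mse(y_true, y_pred, mask):
    """Masked forecasting loss: squared errors are computed only where real
    values were observed (mask == 1), so the model is never fitted on
    imputed entries."""
    mask = mask.astype(float)
    denom = mask.sum()
    if denom == 0:
        return 0.0          # no observed values in this batch
    return float((mask * (y_true - y_pred) ** 2).sum() / denom)

y = np.array([[1.0, 2.0], [3.0, 4.0]])
y_hat = np.array([[1.5, 2.0], [0.0, 5.0]])
m = np.array([[1, 1], [0, 1]])     # the (1, 0) entry was missing
loss = masked_mse(y, y_hat, m)     # (0.25 + 0 + 1) / 3
```

Changing the prediction at a masked position leaves the loss unchanged, which is exactly the behavior the masked training relies on.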
5 Experimental Study
In this section, we conduct experiments to explore the capabilities of GST-Pro by answering the following questions:
1. Irregular MTS Anomaly Detection: Does our framework outperform baseline methods in real-time irregular multivariate time series anomaly detection tasks? More importantly, does the performance remain stable as the missing rate increases?
2. Regular MTS Anomaly Detection: As a control setting, does our framework still outperform baseline methods in real-time regular multivariate time series anomaly detection tasks?
3. Ablation Study: What are the respective contributions of the specific modules in GST-Pro?
4. Robustness Analysis: What extent of missing rate would notably impact GST-Pro's performance?
5.1 Experimental Settings
In this subsection, we introduce the experimental settings to empirically evaluate our approach against state-of-the-art methods on real-world datasets at increasing missing rates.
5.1.1 Datasets
We evaluate GST-Pro on two widely used realistic datasets for multivariate time series anomaly detection: SWaT and WADI. Both datasets are water treatment physical test-bed systems with simulated attack scenarios based on real-world water treatment plants. The statistics of these datasets are summarized in Table 1, and detailed descriptions are given as follows:
Dataset | Channels | Train | Test | Anomalies
---|---|---|---|---
SWaT | 51 | 47,515 | 44,986 | 11.97%
WADI | 127 | 118,795 | 17,275 | 5.99%
1. SWaT mathur2016swat : The Secure WAter Treatment dataset has a training set covering 7 days of non-anomalous operations and a 4-day test set with multiple realistic simulated attack scenarios. The attacks conducted at different intervals in the test set constitute the positive anomaly labels, while the remaining timestamps are labelled as negatives. SWaT is a scaled-down real-world industrial water treatment plant dataset initiated by Singapore's Public Utility Board, making it a realistic test bed for the empirical evaluation of models.
2. WADI ahmed2017wadi : WADI extends SWaT with a larger number of pipelines, storage, and treatment systems. The scale of WADI better represents a realistic water treatment dataset deng2021graph . The training set has two weeks of non-anomalous data, while the test set spans 2 days with multiple attacks conducted at different intervals.
Following the implementation of the original authors deng2021graph, we removed the first 21,600 samples and down-sampled SWaT and WADI to one measurement every 10 seconds by taking median values. We keep the last 10% of the training data as the validation set. To obtain irregular data with missing values, we randomly generate a masking series that drops real observations at a pre-defined missing rate for each channel (node) independently. For each channel, its sensing observation at a given timestamp is said to be missing if the corresponding mask value is 0, and available if the mask value is 1.
We generate three settings by randomly dropping 10%, 30%, and 50% of observed values. To ensure reproducibility, the same mask is generated as a pre-processing step on the multivariate time series data, and the same data is used to evaluate the baseline and GST-Pro performances. For each experimental seed, we also generate a new masking series to ensure all models are fairly and comprehensively assessed.
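The masking step above can be sketched as follows (the function name and RNG handling are illustrative, not the exact preprocessing code):

```python
import numpy as np

def generate_mask(n_timestamps, n_channels, missing_rate, seed=1):
    """Generate a binary masking series per channel: 1 = observed, 0 = dropped.

    Observations are dropped independently per channel at the given rate,
    matching the per-channel independent masking described above.
    """
    rng = np.random.default_rng(seed)
    return (rng.random((n_timestamps, n_channels)) >= missing_rate).astype(int)

# One mask per experimental seed, applied identically to all models.
mask = generate_mask(n_timestamps=1000, n_channels=51, missing_rate=0.3, seed=1)
```

Because the same seeded mask is applied to every model, the baselines and GST-Pro always see the same irregular observations within a run.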
5.1.2 Baselines
As state-of-the-art methods require real observations, we first impute the missing values in the dataset using three standard imputation approaches:
1. Naive Imputation: For each sensor, the naive imputation method replaces missing values with the most recent available value.
2. Mean Imputation: For each sensor, the mean imputation method replaces missing values with the mean of the available (non-missing) training observations.
3. Cubic Spline Imputation: The cubic spline imputation first generates a sliding-window input, then interpolates within each window to fill missing values, treating the observed window values as the known boundary points to avoid future information leakage.
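The three imputation baselines can be sketched as follows (helper names are ours; the original implementations may differ in details such as window handling):

```python
import numpy as np
import pandas as pd
from scipy.interpolate import CubicSpline

def naive_impute(x):
    """Per sensor, carry the most recent available value forward
    (back-fill only covers leading missing values)."""
    return pd.DataFrame(x).ffill().bfill().to_numpy()

def mean_impute(x, train_means):
    """Replace missing entries with the per-sensor mean computed on
    the available (non-missing) training observations."""
    x = x.copy()
    rows, cols = np.where(np.isnan(x))
    x[rows, cols] = train_means[cols]
    return x

def spline_impute_window(window):
    """Interpolate missing values of one sensor inside a sliding window,
    using only in-window observations as the known points."""
    t = np.arange(len(window))
    obs = ~np.isnan(window)
    out = window.copy()
    if obs.sum() >= 4:  # CubicSpline needs several known points
        cs = CubicSpline(t[obs], window[obs])
        out[~obs] = cs(t[~obs])
    else:  # too few observed points: fall back to forward/backward fill
        out = pd.Series(window).ffill().bfill().to_numpy()
    return out
```

Restricting the spline to in-window observations is what prevents future information from leaking into the imputed values.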
After imputing missing values, the baselines are implemented following the recommendations of their original authors.
1. LSTM-VAE park2018multimodal is a reconstruction approach that assesses the abnormality of each timestamp based on reconstruction errors. A Variational Autoencoder (VAE) models the underlying probability distribution of the multivariate time series values, while an LSTM module replaces the feed-forward neural networks of the original VAE to capture the temporal dependencies of the multivariate time series.
2. OmniAnomaly su2019robust: Similar to LSTM-VAE, OmniAnomaly is also a reconstruction approach. Using a stochastic recurrent neural network and planar normalizing flows, OmniAnomaly explicitly models temporal dependencies to generate reconstruction probabilities of the current observation. The anomaly score is the posterior reconstruction probability of each input.
3. GDN deng2021graph is an attention-based GNN forecasting method. It learns spatial relationships between multivariate channels for one-step-ahead forecasting and determines anomaly scores from maximum forecast deviations. The original model needs the full test set for error normalization at each timestamp, making it suitable for aftermath detection rather than real-time detection. We modify this by using the validation-set median, enabling real-time anomaly detection for GDN.
5.1.3 Parameter Settings
The parameterized fully-connected layers for the spatial and temporal processes have three hidden layers with a hidden dimension of 128. We train our model for 100 epochs with early stopping after 15 epochs. We use a batch size of 64, and the Adam optimizer is applied to optimize GST-Pro with a learning rate of 0.001 and weight decay of 0.001. We also clip the global norm of the gradient at 5.0. The validation set ratio for SWaT and WADI is fixed at 0.1 across all experiments. We set the sliding window length to 5 for SWaT and WADI, as suggested by the original authors deng2021graph. Finally, the anomaly scorer has only a single parameter, which is set to 50,000 or the maximum number of timestamps of the dataset in all settings.
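To illustrate how a single-parameter, distribution-based scorer of this kind can operate, the following is a simplified univariate sketch (our simplification; the actual scorer works on the distributions of GST-Pro's multivariate forecasts):

```python
from collections import deque
import numpy as np

class RollingGaussianScorer:
    """Score each new forecast by its deviation from a Gaussian fitted on a
    rolling window of past forecasts; the window length is the scorer's
    single parameter (e.g. 50,000, or the total number of timestamps)."""

    def __init__(self, window=50_000):
        self.history = deque(maxlen=window)

    def score(self, forecast):
        if len(self.history) < 2:
            self.history.append(forecast)
            return 0.0
        mu = np.mean(self.history)
        sigma = np.std(self.history) + 1e-8  # guard against zero variance
        self.history.append(forecast)
        return abs(forecast - mu) / sigma
```

An in-distribution forecast yields a low score, while a forecast far from the recent distribution yields a high score, without requiring the (possibly missing) observation at the current timestamp.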
5.1.4 Computing Infrastructures
We performed all tests on a personal computer running Ubuntu 20.04, with an NVIDIA Tesla T4 GPU, a 2.20GHz Intel Xeon CPU, and 12.7 GB RAM. We used seeds 1-5 to mask dataset values and for model comparisons across five runs. Given the critical importance of efficiency in real-time anomaly detection, we assessed the average inference times per time point for SWaT and WADI. The results were as follows: GST-Pro required approximately 0.92 milliseconds for SWaT and 1.2 milliseconds for WADI. In comparison, OmniAnomaly recorded times of 0.18 milliseconds for SWaT and 0.25 milliseconds for WADI, LSTM-VAE took 0.2 milliseconds for SWaT and 0.5 milliseconds for WADI, and GDN needed 0.4 milliseconds for SWaT and 0.8 milliseconds for WADI.
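Per-timestep latencies of this kind can be measured with a simple harness such as the following (an illustrative sketch, not the exact measurement code used in our experiments):

```python
import time

def avg_inference_ms(model_fn, inputs, n_warmup=10):
    """Average per-timestep inference latency in milliseconds.

    A few warm-up calls are made first so that one-off setup costs
    (e.g. lazy initialization) do not skew the average."""
    for x in inputs[:n_warmup]:
        model_fn(x)
    start = time.perf_counter()
    for x in inputs:
        model_fn(x)
    elapsed = time.perf_counter() - start
    return elapsed / len(inputs) * 1000.0
```

Using `time.perf_counter` rather than wall-clock time gives a monotonic, high-resolution timer suitable for short intervals.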
5.2 Anomaly Detection Results
As anomaly thresholds can vary across applications, we follow previous works zong2018deep ; park2018multimodal ; li2021multivariate in measuring anomaly detection performance using scale-invariant metrics that do not require thresholds. For each timestamp, a model has to correctly label the timestamp as anomalous or not deng2021graph. Hence, the closer the ROC-AUC and PRC-AUC scores are to 1, the better a model is at providing a useful anomaly indicator that clearly differentiates every anomalous and non-anomalous time point.
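Concretely, ROC-AUC equals the probability that a randomly chosen anomalous timestamp receives a higher anomaly score than a randomly chosen normal one; a small self-contained sketch of that computation (in practice, standard library routines are used):

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that an
    anomalous timestamp outscores a normal one (ties count as 0.5)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]  # scores at anomalous timestamps
    neg = scores[labels == 0]  # scores at normal timestamps
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A score of 1.0 means every anomalous timestamp outranks every normal one; 0.5 is no better than chance.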
Table 2: Anomaly detection results on SWaT. 10%, 30% and 50% represent the missing rates of the dataset. AUC values are rescaled to 0-100.
Imputation | Method | ROC-AUC 10% | ROC-AUC 30% | ROC-AUC 50% | PRC-AUC 10% | PRC-AUC 30% | PRC-AUC 50%
---|---|---|---|---|---|---|---
Naive Imputation | LSTM-VAE | 79.6 ± 0.4 | 79.6 ± 0.4 | 78.9 ± 2.1 | 68.6 ± 0.8 | 67.7 ± 3.6 | 58.9 ± 13.3
Naive Imputation | OmniAnomaly | 82.9 ± 0.4 | 80.0 ± 3.8 | 79.9 ± 4.3 | 73.1 ± 2.6 | 56.7 ± 20.9 | 56.1 ± 24.1
Naive Imputation | GDN-GNN | 70.8 ± 18.5 | 71.0 ± 17.5 | 69.9 ± 16.6 | 49.3 ± 33.8 | 49.9 ± 34.2 | 48.3 ± 32.5
Mean Imputation | LSTM-VAE | 80.2 ± 1.5 | 79.9 ± 0.4 | 79.9 ± 0.4 | 69.1 ± 0.4 | 67.2 ± 2.1 | 64.8 ± 3.6
Mean Imputation | OmniAnomaly | 81.6 ± 0.8 | 81.6 ± 0.8 | 80.9 ± 0.7 | 67.4 ± 3.5 | 67.8 ± 6.1 | 63.2 ± 5.4
Mean Imputation | GDN-GNN | 71.7 ± 15.6 | 77.2 ± 11.5 | 76.1 ± 11.4 | 47.7 ± 32.3 | 57.2 ± 25.3 | 45.8 ± 21.6
Cubic Spline Imputation | LSTM-VAE | 79.9 ± 0.3 | 79.4 ± 0.9 | 79.6 ± 0.5 | 68.9 ± 0.4 | 64.7 ± 8.6 | 68.6 ± 0.9
Cubic Spline Imputation | OmniAnomaly | 82.1 ± 0.6 | 74.4 ± 0.4 | 74.0 ± 0.4 | 69.6 ± 0.7 | 26.2 ± 2.8 | 26.6 ± 2.4
Cubic Spline Imputation | GDN-GNN | 69.7 ± 17.0 | 68.6 ± 15.5 | 63.5 ± 16.8 | 47.0 ± 31.9 | 48.4 ± 33.0 | 35.5 ± 32.2
– | GST-Pro | 85.5 ± 0.3 | 86.3 ± 0.3 | 86.2 ± 0.7 | 73.3 ± 0.6 | 69.0 ± 0.4 | 67.0 ± 1.6
Table 3: Anomaly detection results on WADI. 10%, 30% and 50% represent the missing rates of the dataset. AUC values are rescaled to 0-100.
Imputation | Method | ROC-AUC 10% | ROC-AUC 30% | ROC-AUC 50% | PRC-AUC 10% | PRC-AUC 30% | PRC-AUC 50%
---|---|---|---|---|---|---|---
Naive Imputation | LSTM-VAE | 50.3 ± 6.5 | 50.2 ± 4.3 | 49.4 ± 3.2 | 14.3 ± 8.8 | 13.4 ± 8.8 | 16.0 ± 3.4
Naive Imputation | OmniAnomaly | 54.7 ± 0.8 | 54.1 ± 0.8 | 54.1 ± 2.5 | 22.1 ± 1.5 | 17.2 ± 5.5 | 19.6 ± 2.7
Naive Imputation | GDN-GNN | 48.4 ± 1.2 | 48.4 ± 1.5 | 48.1 ± 1.1 | 6.3 ± 1.2 | 5.0 ± 0.1 | 6.0 ± 1.4
Mean Imputation | LSTM-VAE | 47.0 ± 3.8 | 48.5 ± 2.0 | 49.6 ± 1.3 | 7.9 ± 5.3 | 7.2 ± 2.0 | 7.1 ± 2.7
Mean Imputation | OmniAnomaly | 56.3 ± 1.4 | 60.5 ± 0.8 | 63.6 ± 14.0 | 19.8 ± 1.2 | 17.1 ± 1.5 | 14.0 ± 1.2
Mean Imputation | GDN-GNN | 48.4 ± 0.4 | 48.6 ± 0.6 | 49.5 ± 0.4 | 5.4 ± 0.6 | 5.3 ± 0.1 | 5.4 ± 0.1
Cubic Spline Imputation | LSTM-VAE | 47.9 ± 2.6 | 50.5 ± 2.1 | 49.7 ± 1.9 | 11.9 ± 5.1 | 13.0 ± 5.2 | 11.2 ± 0.4
Cubic Spline Imputation | OmniAnomaly | 52.7 ± 1.2 | 53.1 ± 1.2 | 56.0 ± 4.7 | 17.7 ± 0.7 | 15.9 ± 5.1 | 18.3 ± 0.6
Cubic Spline Imputation | GDN-GNN | 48.0 ± 0.5 | 47.1 ± 0.5 | 45.9 ± 0.9 | 5.4 ± 0.5 | 5.3 ± 0.4 | 5.3 ± 0.4
– | GST-Pro | 73.9 ± 0.6 | 74.2 ± 0.4 | 72.6 ± 0.4 | 37.3 ± 0.2 | 37.0 ± 0.6 | 34.8 ± 0.6
5.2.1 Irregular MTS Anomaly Detection
From the ROC-AUC and PRC-AUC results in Tables 2 and 3, we observe that:
1. Performance of the Proposed Framework: GST-Pro outperforms the baselines across all settings, except for SWaT at a 50% missing rate, where GST-Pro closely matches the performance of LSTM-VAE with cubic spline imputation. On average, we outperform the second-best baseline by 13.91% and 46.66% on ROC-AUC and PRC-AUC scores respectively. Notably, we outperform the second-best method, Naive-OmniAnomaly, on WADI's PRC-AUC results by 68% to 115%.
2. Robustness to Increasing Missing Rates: GST-Pro not only outperforms overall but also achieves the lowest variability in anomaly detection performance as the missing rate changes, resulting in a widening performance gap between GST-Pro and the baselines as the missing rate increases. LSTM-VAE and OmniAnomaly, which take the reconstruction approach, also achieve fairly stable performance, except for OmniAnomaly with cubic spline imputation. In contrast, compared with its performance under regular multivariate time series settings, GDN-GNN deteriorates significantly under irregular time series settings. As GDN-GNN takes the maximum deviation between predicted and observed values among the multivariate channels, we hypothesize that this mechanism is sensitive to small noise in the dataset values. This undesirable property is exacerbated under high missing rate scenarios, because imperfections in the missing-data imputation can materially disrupt its anomaly detection performance.
3. Modular Pipelines lead to Performance Variability: In our analysis of baseline performances, it becomes evident that no single data imputation method stands out as universally superior. The effectiveness of these methods is inherently tied to (i) the specific characteristics of the datasets and (ii) the type of model employed. For instance, while the mean imputation method shows promising results on the SWaT dataset, the naive imputation method appears more effective in the context of WADI. Similarly, the performance of LSTM-VAE with cubic spline imputation on SWaT highlights the variability in effectiveness among different combinations of datasets and baseline methods. In contrast, GST-Pro takes a more robust and adaptive approach. By learning from non-missing observations in the training data, GST-Pro captures spatiotemporal dependencies directly from the non-missing data. This process, significantly enhanced by the integration of the NCDE model, steers clear of predefined assumptions about data characteristics. Such an approach not only increases the robustness and accuracy of anomaly detection but also showcases GST-Pro's adaptability to diverse datasets.
The cornerstone of GST-Pro lies in its adept handling of the complexities inherent in real-world data, where traditional methods that rely on static impute-then-detect techniques often falter due to their limited capacity to capture the dynamics of such datasets. Both the baselines and GST-Pro can learn from spatiotemporal patterns; however, only GST-Pro can ensure a nuanced and accurate representation of the underlying processes directly from time series marred by missing values due to irregular sampling or partial observation.
Here, GST-Pro's NCDE module plays a pivotal role, modeling the hidden state dynamics as controlled differential equations, thereby maintaining the integrity of the temporal data and capturing complex, evolving relationships. This unified approach is not just a feature but a necessity: it addresses the intertwined challenges of imputing missing values, making accurate forecasts, and detecting outliers. This strategy is crucial, as separating these tasks can lead to a significant loss of temporal information and a failure to accurately represent continuous-time processes. Overall, GST-Pro's unified strategy sets a new benchmark, outperforming methods that treat these problems in isolation.
5.2.2 Regular MTS Anomaly Detection
While we focus on multivariate time series anomaly detection with missing values, it is equally important that GST-Pro performs well under regular multivariate time series anomaly detection. This empirically evaluates whether GST-Pro can capture the spatiotemporal dependencies of the data and hence output anomaly indicators that alert operators when anomalous events have (or have not) occurred. More importantly, it also stress-tests the validity of our assumption that current timestamp values are not required to score anomalies.
As shown in Figure 3, GST-Pro outperforms the baselines under the regular setting, suggesting that computing forecasting or reconstruction errors is not required to detect anomalies. This approach deviates from recent state-of-the-art models garg2021evaluation ; darban2022deep ; schmidl2022anomaly and illustrates an effective alternative for detecting anomalies in multivariate time series data. In the irregular settings, we have also demonstrated that this is a pragmatic approach: GST-Pro only requires some real observations in the sliding window to make a one-step-ahead forecast, and no real observation at the current timestamp, to detect anomalies accurately under high missing rate settings.
5.3 Ablation Study
We conduct an ablation study on SWaT (Table 4) and WADI (Table 5) at 0% and 50% missing rates to validate how the various modules of GST-Pro contribute to its irregular multivariate time series anomaly detection performance. We modify two major modules of GST-Pro: the Forecasting Module (FM) and the Anomaly Detection (AD) module. For the FM module, we made the following modifications:
1. w/o SP: GST-Pro without the Spatial Process module, obtained by removing the graph convolution layer and the learned adjacency matrix.
2. w/o TP: GST-Pro without the Temporal Process module, which relies entirely on the Spatial Process module to implicitly model temporal dependencies.
For the AD module modifications, we keep the forecasting module fixed and replace the Gaussian scorer with either a Probabilistic Principal Component Analysis (PCA) scorer or a Kmeans scorer. The former represents a reconstruction-based approach, while the latter represents a distance-based approach. For both scorers, we normalize the forecast deviations and forecast values using the median and inter-quartile range deng2021graph, which dampens small spikes in forecast values when system behavior is not anomalous. The scorers are fitted on the validation set and evaluated on the test set:
1. ReconPCA: For each timestamp with non-missing observations, we first compute the forecast deviations to lessen the disparity in characteristics among the variable channels. We then apply Probabilistic PCA tipping1999probabilistic to calculate the average reconstruction error of the non-missing forecast deviations at each timestamp, which serves as the anomaly indicator.
2. DistKmeans: On the validation set, we first apply Kmeans to generate multiple clusters, where the number of clusters, K, is selected using the Silhouette score rousseeuw1987silhouettes, searching K from 0 to 20. We then calculate the distance between each forecast and the centroid of its closest cluster, and use this distance as the anomaly score.
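The shared robust normalization and the DistKmeans distance computation can be sketched as follows (function names are ours; the centroids are assumed pre-fitted on the validation set, e.g. with a Kmeans implementation and silhouette-based K selection):

```python
import numpy as np

def robust_normalize(deviations, median, iqr):
    """Normalize forecast deviations by the validation-set median and
    inter-quartile range, dampening small non-anomalous spikes."""
    return np.abs(deviations - median) / (iqr + 1e-8)

def dist_kmeans_score(forecast, centroids):
    """DistKmeans-style anomaly score: distance from a (normalized)
    forecast vector to the centroid of its closest cluster."""
    return np.linalg.norm(centroids - forecast, axis=1).min()
```

Median/IQR normalization is preferred over mean/standard deviation here because both statistics are robust to the occasional extreme deviations expected during anomalous periods.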
Table 4: Ablation results on SWaT (0% and 50% missing rates). AUC values are rescaled to 0-100.
Module | Method | ROC-AUC 0% | ROC-AUC 50% | PRC-AUC 0% | PRC-AUC 50%
---|---|---|---|---|---
– | GST-Pro | 85.54 | 86.21 | 73.31 | 66.96
FM | w/o SP | 83.70 | 82.74 | 63.76 | 55.83
FM | w/o TP | 83.55 | 81.52 | 63.71 | 54.64
AD | ReconPCA | 83.29 | 79.11 | 59.69 | 44.35
AD | DistKmeans | 77.88 | 77.03 | 67.36 | 65.61
Table 5: Ablation results on WADI (0% and 50% missing rates). AUC values are rescaled to 0-100.
Module | Method | ROC-AUC 0% | ROC-AUC 50% | PRC-AUC 0% | PRC-AUC 50%
---|---|---|---|---|---
– | GST-Pro | 73.34 | 72.64 | 37.21 | 34.80
FM | w/o SP | 73.29 | 71.26 | 27.02 | 25.82
FM | w/o TP | 73.54 | 71.22 | 27.08 | 25.64
AD | ReconPCA | 64.83 | 62.14 | 19.79 | 15.74
AD | DistKmeans | 53.68 | 54.86 | 16.72 | 14.77
From Tables 4 and 5, we observe a significant drop in PRC-AUC performance upon removal of either the spatial or the temporal module. This strongly supports our hypothesis that modeling spatial-temporal dependencies is critical for multivariate anomaly detection. Secondly, the PCA reconstruction approach shows a greater decline in performance when the multivariate time series becomes irregular. As argued earlier, dependence on point-wise detection approaches can result in unstable performance due to the inherent noise, unpredictability, and unreliability of observed values during anomalous periods. Lastly, Kmeans fails to yield satisfactory results because it treats each multivariate observation as an independent sample rather than as part of a temporal sequence.
5.4 Robustness Analysis
In this subsection, we delve into how GST-Pro copes with varying levels of missing data, ranging from 10% to 90%. The analysis, as illustrated in Figure 4, showcases the model’s performance in terms of ROC-AUC and PRC-AUC on the SWaT and WADI datasets under these varying conditions. Remarkably, GST-Pro maintains its state-of-the-art performance even at high missing rates, achieving a ROC-AUC of 0.856 on SWaT with a 70% missing rate and 0.629 on WADI with a 90% missing rate. This performance is particularly notable since GST-Pro still outperforms the best-performing baseline models in a regular MTS Anomaly Detection setting, even under these high missing rate scenarios.
Overall, these results indicate that GST-Pro's performance begins to be notably impacted only at extremely high missing rates; it is specifically beyond the 70% threshold that the performance decrement becomes more pronounced. Intriguingly, even under extreme missing value conditions, GST-Pro can still outperform the competing baselines evaluated in regular scenarios. This analysis not only underscores GST-Pro's resilience in handling incomplete data but also helps determine the thresholds beyond which missing data significantly affects its accuracy and reliability.
Moving forward, we intend to further advance the scalability of GST-Pro, ensuring its robust applicability in real-world scenarios. Our ongoing efforts will concentrate on optimizing the model's architecture and algorithms to efficiently handle even larger datasets, a crucial step towards broadening its practical utility. In particular, we seek to leverage the NCDE chen2018neural component of GST-Pro for its capacity for memory-efficient training with adjoint-based backpropagation kidger2020neural, an approach akin to that used in invertible networks behrmann2019invertible. This progression should enable GST-Pro to learn from diverse data patterns and structures, effectively handling the intricacies of larger and more diverse datasets.
Additionally, our goal is to expand GST-Pro’s applicability across a wider spectrum of uses by tackling concept drift. We seek to incorporate methods to identify and measure data drift. This will enable timely modifications to the model to align with changing data distributions webb2016characterizing ; goldenberg2020pca . This enhancement not only aims to improve the model’s adaptability but also ensures its relevance in dynamic data environments.
6 Conclusion
In this work, we propose a novel framework to address irregular multivariate time series anomaly detection. Our model, GST-Pro, robustly detects anomalies in real-time even when current observations are completely absent. Experiments on real-world datasets showed that GST-Pro not only outperformed state-of-the-art baselines in regular multivariate time series anomaly detection settings but also in irregular multivariate time series with high missing rate scenarios, paving the way for deep STGNN methods to be implemented in real-world applications. In the future, we will exploit the integration of graphs and large language models (LLMs) pan2023integrating ; pan2023unifying ; luo2023reasoning for effective representation learning for time series; alternatively, we will directly reprogram LLMs for time series anomaly detection jin2023time ; jin2023large .
References
- (1) C. Lee, Z. Luo, K. Y. Ngiam, M. Zhang, K. Zheng, G. Chen, B. C. Ooi, W. L. J. Yip, Big healthcare data analytics: Challenges and applications, Handbook of large-scale distributed computing in smart healthcare (2017) 11–41.
- (2) Y. Su, Y. Zhao, C. Niu, R. Liu, W. Sun, D. Pei, Robust anomaly detection for multivariate time series through stochastic recurrent neural network, in: KDD, 2019, pp. 2828–2837.
- (3) Z. Li, Y. Zhao, J. Han, Y. Su, R. Jiao, X. Wen, D. Pei, Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding, in: KDD, 2021, pp. 3220–3230.
- (4) A. P. Mathur, N. O. Tippenhauer, Swat: A water treatment testbed for research and training on ics security, in: 2016 International Workshop on Cyber-physical Systems for Smart Water Networks, IEEE, 2016, pp. 31–36.
- (5) K. Hundman, V. Constantinou, C. Laporte, I. Colwell, T. Soderstrom, Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding, in: KDD, 2018, pp. 387–395.
- (6) A. Garg, W. Zhang, J. Samaran, R. Savitha, C.-S. Foo, An evaluation of anomaly detection and diagnosis in multivariate time series, IEEE TNNLS (2021).
- (7) S. Schmidl, P. Wenig, T. Papenbrock, Anomaly detection in time series: a comprehensive evaluation, Proceedings of the VLDB Endowment 15 (9) (2022) 1779–1797.
- (8) Z. Z. Darban, G. I. Webb, S. Pan, C. C. Aggarwal, M. Salehi, Deep learning for time series anomaly detection: A survey, arXiv preprint arXiv:2211.05244 (2022).
- (9) Q. Yu, L. Jibin, L. Jiang, An improved arima-based traffic anomaly detection algorithm for wireless sensor networks, International Journal of Distributed Sensor Networks 12 (1) (2016) 9653230.
- (10) E. Keogh, J. Lin, A. Fu, Hot sax: Efficiently finding the most unusual time series subsequence, in: ICDM, Ieee, 2005, pp. 8–pp.
- (11) K. M. Ting, B.-C. Xu, T. Washio, Z.-H. Zhou, Isolation distributional kernel a new tool for point & group anomaly detection, IEEE TKDE (2021).
- (12) D. Park, Y. Hoshi, C. C. Kemp, A multimodal anomaly detector for robot-assisted feeding using an lstm-based variational autoencoder, IEEE Robotics and Automation Letters 3 (3) (2018) 1544–1551.
- (13) H. Zhao, Y. Wang, J. Duan, C. Huang, D. Cao, Y. Tong, B. Xu, J. Bai, J. Tong, Q. Zhang, Multivariate time-series anomaly detection via graph attention network, in: ICDM, IEEE, 2020, pp. 841–850.
- (14) A. Deng, B. Hooi, Graph neural network-based anomaly detection in multivariate time series, AAAI 35 (5) (2021) 4027–4035.
- (15) S. Han, S. S. Woo, Learning sparse latent graph representations for anomaly detection in multivariate time series, in: KDD, 2022, pp. 2977–2986.
- (16) R. J. Little, D. B. Rubin, Statistical analysis with missing data, Vol. 793, John Wiley & Sons, 2019.
- (17) R. Mitra, S. F. McGough, T. Chakraborti, C. Holmes, R. Copping, N. Hagenbuch, S. Biedermann, J. Noonan, B. Lehmann, A. Shenvi, et al., Learning from data with structured missingness, Nature Machine Intelligence 5 (1) (2023) 13–23.
- (18) L. Beretta, A. Santaniello, Nearest neighbor imputation algorithms: a critical evaluation, BMC medical informatics and decision making 16 (3) (2016) 197–208.
- (19) J. Durbin, S. J. Koopman, Time series analysis by state space methods, Vol. 38, OUP Oxford, 2012.
- (20) S. Ahmad, A. Lavin, S. Purdy, Z. Agha, Unsupervised real-time anomaly detection for streaming data, Neurocomputing 262 (2017) 134–147.
- (21) H. Song, D. Rajan, J. Thiagarajan, A. Spanias, Attend and diagnose: Clinical time series analysis using attention models, AAAI 32 (1) (2018).
- (22) Z. Chen, D. Chen, X. Zhang, Z. Yuan, X. Cheng, Learning graph structures with transformer for multivariate time-series anomaly detection in iot, IEEE IoT Journal 9 (12) (2021) 9179–9189.
- (23) C. Zhang, D. Song, Y. Chen, X. Feng, C. Lumezanu, W. Cheng, J. Ni, B. Zong, H. Chen, N. V. Chawla, A deep neural network for unsupervised anomaly detection and diagnosis in multivariate time series data, AAAI 33 (01) (2019) 1409–1416.
- (24) E. Dai, J. Chen, Graph-augmented normalizing flows for anomaly detection of multiple time series, ICLR (2022).
- (25) M. M. Breunig, H.-P. Kriegel, R. T. Ng, J. Sander, Lof: identifying density-based local outliers, in: KDD, 2000, pp. 93–104.
- (26) K. M. Ting, Z. Liu, H. Zhang, Y. Zhu, A new distributional treatment for time series and an anomaly detection investigation, Proceedings of the VLDB Endowment 15 (11) (2022) 2321–2333.
- (27) L. Shen, Z. Li, J. Kwok, Timeseries anomaly detection using temporal hierarchical one-class network, in: NeurIPS, Vol. 33, 2020, pp. 13016–13026.
- (28) Y. Shin, S. Lee, S. Tariq, M. S. Lee, O. Jung, D. Chung, S. S. Woo, Itad: integrative tensor-based anomaly detection system for reducing false positives of satellite systems, in: CIKM, 2020, pp. 2733–2740.
- (29) H. Zhang, B. Wu, X. Yuan, S. Pan, H. Tong, J. Pei, Trustworthy graph neural networks: Aspects, methods and trends, arXiv preprint arXiv:2205.07424 (2022).
- (30) H. Zhang, B. Wu, S. Wang, X. Yang, M. Xue, S. Pan, X. Yuan, Demystifying uneven vulnerability of link stealing attacks against graph neural networks, in: International Conference on Machine Learning, PMLR, 2023, pp. 41737–41752.
- (31) H. Zhang, X. Yuan, Q. V. H. Nguyen, S. Pan, On the interaction between node fairness and edge privacy in graph neural networks, arXiv preprint arXiv:2301.12951 (2023).
- (32) H. Y. Koh, A. T. Nguyen, S. Pan, L. T. May, G. I. Webb, Psichic: physicochemical graph neural network for learning protein-ligand interaction fingerprints from sequence data, bioRxiv (2023) 2023–09.
- (33) A. T. N. Nguyen, D. T. N. Nguyen, H. Y. Koh, J. Toskov, W. MacLean, A. Xu, D. Zhang, G. I. Webb, L. T. May, M. L. Halls, The application of artificial intelligence to accelerate g protein-coupled receptor drug discovery, British Journal of Pharmacology (2023).
- (34) Y. Zheng, H. Y. Koh, J. Ju, A. T. Nguyen, L. T. May, G. I. Webb, S. Pan, Large language models for scientific synthesis, inference and explanation, arXiv preprint arXiv:2310.07984 (2023).
- (35) H. Y. Koh, J. Ju, M. Liu, S. Pan, An empirical survey on long document summarization: Datasets, models, and metrics, ACM computing surveys 55 (8) (2022) 1–35.
- (36) H. Y. Koh, J. Ju, H. Zhang, M. Liu, S. Pan, How far are we from robust long abstractive summarization?, in: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022, pp. 2682–2698.
- (37) Y. Seo, M. Defferrard, P. Vandergheynst, X. Bresson, Structured sequence modeling with graph convolutional recurrent networks, in: ICONIP, Springer, 2018, pp. 362–373.
- (38) M. Jin, H. Y. Koh, Q. Wen, D. Zambon, C. Alippi, G. I. Webb, I. King, S. Pan, A survey on graph neural networks for time series: Forecasting, classification, imputation, and anomaly detection, arXiv preprint arXiv:2307.03759 (2023).
- (39) Z. Wu, S. Pan, G. Long, J. Jiang, X. Chang, C. Zhang, Connecting the dots: Multivariate time series forecasting with graph neural networks, in: KDD, 2020, pp. 753–763.
- (40) Y. Zheng, H. Y. Koh, M. Jin, L. Chi, K. T. Phan, S. Pan, Y.-P. P. Chen, W. Xiang, Correlation-aware spatial–temporal graph learning for multivariate time-series anomaly detection, IEEE Transactions on Neural Networks and Learning Systems (2023).
- (41) M. Jin, Y. Zheng, Y.-F. Li, S. Chen, B. Yang, S. Pan, Multivariate time series forecasting with dynamic graph neural odes, IEEE TKDE (2022).
- (42) Z. Duan, H. Xu, Y. Wang, Y. Huang, A. Ren, Z. Xu, Y. Sun, W. Wang, Multivariate time-series classification with hierarchical variational graph pooling, Neural Networks 154 (2022) 481–490.
- (43) D. B. Rubin, Inference and missing data, Biometrika 63 (3) (1976) 581–592.
- (44) J. Choi, H. Choi, J. Hwang, N. Park, Graph neural controlled differential equations for traffic forecasting, AAAI 36 (6) (2022) 6367–6374.
- (45) P. Kidger, J. Morrill, J. Foster, T. Lyons, Neural controlled differential equations for irregular time series, in: NeurIPS, Vol. 33, 2020, pp. 6696–6707.
- (46) R. T. Chen, Y. Rubanova, J. Bettencourt, D. K. Duvenaud, Neural ordinary differential equations, NeurIPS 31 (2018).
- (47) A. Sankar, Y. Wu, L. Gou, W. Zhang, H. Yang, Dysat: Deep neural representation learning on dynamic graphs via self-attention networks, in: WSDM, 2020, pp. 519–527.
- (48) S. McKinley, M. Levine, Cubic spline interpolation, College of the Redwoods 45 (1) (1998) 1049–1060.
- (49) T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, ICLR (2017).
- (50) C. M. Ahmed, V. R. Palleti, A. P. Mathur, Wadi: a water distribution testbed for research in the design of secure cyber physical systems, in: CySWater, 2017, pp. 25–28.
- (51) B. Zong, Q. Song, M. R. Min, W. Cheng, C. Lumezanu, D. Cho, H. Chen, Deep autoencoding gaussian mixture model for unsupervised anomaly detection, ICLR (2018).
- (52) M. E. Tipping, C. M. Bishop, Probabilistic principal component analysis, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 61 (3) (1999) 611–622.
- (53) P. J. Rousseeuw, Silhouettes: a graphical aid to the interpretation and validation of cluster analysis, Journal of computational and applied mathematics 20 (1987) 53–65.
- (54) J. Behrmann, W. Grathwohl, R. T. Chen, D. Duvenaud, J.-H. Jacobsen, Invertible residual networks, in: International conference on machine learning, PMLR, 2019, pp. 573–582.
- (55) G. I. Webb, R. Hyde, H. Cao, H. L. Nguyen, F. Petitjean, Characterizing concept drift, DMKD 30 (4) (2016) 964–994.
- (56) I. Goldenberg, G. I. Webb, Pca-based drift and shift quantification framework for multidimensional data, Knowledge and Information Systems 62 (7) (2020) 2835–2854.
- (57) S. Pan, Y. Zheng, Y. Liu, Integrating graphs with large language models: Methods and prospects, arXiv preprint arXiv:2310.05499 (2023).
- (58) S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang, X. Wu, Unifying large language models and knowledge graphs: A roadmap, arXiv preprint arXiv:2306.08302 (2023).
- (59) L. Luo, Y.-F. Li, G. Haffari, S. Pan, Reasoning on graphs: Faithful and interpretable large language model reasoning, arXiv preprint arXiv:2310.01061 (2023).
- (60) M. Jin, S. Wang, L. Ma, Z. Chu, J. Y. Zhang, X. Shi, P.-Y. Chen, Y. Liang, Y.-F. Li, S. Pan, et al., Time-llm: Time series forecasting by reprogramming large language models, arXiv preprint arXiv:2310.01728 (2023).
- (61) M. Jin, Q. Wen, Y. Liang, C. Zhang, S. Xue, X. Wang, J. Zhang, Y. Wang, H. Chen, X. Li, et al., Large models for time series and spatio-temporal data: A survey and outlook, arXiv preprint arXiv:2310.10196 (2023).