TranAD: Deep transformer networks for anomaly detection in multivariate time series data

S Tuli, G Casale, NR Jennings - arXiv preprint arXiv:2201.07284, 2022 - arxiv.org
Efficient anomaly detection and diagnosis in multivariate time-series data is of great importance for modern industrial applications. However, building a system that can quickly and accurately pinpoint anomalous observations is a challenging problem, owing to the lack of anomaly labels, high data volatility, and the demand for ultra-low inference times in modern applications. Despite recent developments in deep learning approaches to anomaly detection, only a few can address all of these challenges. In this paper, we propose TranAD, a deep transformer-network-based anomaly detection and diagnosis model that uses attention-based sequence encoders to swiftly perform inference with knowledge of the broader temporal trends in the data. TranAD uses focus-score-based self-conditioning to enable robust multi-modal feature extraction and adversarial training to gain stability. Additionally, model-agnostic meta-learning (MAML) allows us to train the model with limited data. Extensive empirical studies on six publicly available datasets demonstrate that TranAD outperforms state-of-the-art baselines in detection and diagnosis performance with data- and time-efficient training. Specifically, TranAD increases F1 scores by up to 17% and reduces training times by up to 99% compared to the baselines.
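
The abstract packs several mechanisms into one pipeline; the minimal PyTorch sketch below illustrates just the two-pass, focus-score self-conditioning idea it describes: a transformer encoder reconstructs a window, and the first pass's reconstruction error is fed back as a "focus score" for a second pass. All names, dimensions, and layer choices are illustrative assumptions, not the authors' implementation (see the paper at arXiv:2201.07284); the adversarial training and MAML components are omitted for brevity.

# Hypothetical sketch of focus-score self-conditioning; not the authors' code.
import torch
import torch.nn as nn

class TranADSketch(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # The encoder sees the window concatenated with a focus score of equal
        # shape, i.e. 2 * n_features channels per time step (an assumption).
        self.embed = nn.Linear(2 * n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, n_features)

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # Pass 1: the focus score starts at zero (no prior error information).
        focus = torch.zeros_like(window)
        z = self.encoder(self.embed(torch.cat([window, focus], dim=-1)))
        recon1 = self.decode(z)
        # Pass 2: condition on the squared deviation of pass 1, steering
        # attention toward the poorly reconstructed regions.
        focus = (recon1 - window) ** 2
        z = self.encoder(self.embed(torch.cat([window, focus], dim=-1)))
        return self.decode(z)

# Anomaly score: reconstruction error on an unseen window.
model = TranADSketch(n_features=8)
x = torch.randn(32, 10, 8)               # (batch, window length, features)
score = ((model(x) - x) ** 2).mean(-1)   # per-timestep anomaly score

At inference time, a threshold on this reconstruction-error score would flag anomalous timestamps; the per-feature errors provide the diagnosis signal the abstract mentions.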