TimeCLR: A self-supervised contrastive learning framework for univariate time series representation

Published: 07 June 2022

Abstract

Time series are usually rarely or sparsely labeled, which limits the performance of deep learning models. Self-supervised representation learning can reduce the reliance of deep learning models on labeled data by extracting structural and feature information from unlabeled data, improving performance when labeled data are insufficient. Although SimCLR has achieved impressive success in computer vision, applying it directly to time series usually performs poorly, because neither its data augmentation nor its feature extractor is adapted to the temporal dependencies within time series data. To obtain high-quality time series representations, we propose TimeCLR, a framework for univariate time series representation that combines the advantages of DTW and InceptionTime. Inspired by the DTW-based k-nearest neighbor classifier, we first propose DTW data augmentation, which generates targeted phase-shift and amplitude-change phenomena while retaining the structural and feature information of the time series. Inspired by InceptionTime, the current state-of-the-art deep learning-based time series classification method with strong feature extraction capabilities, we design a feature extractor that generates representations in an end-to-end manner. Combining the advantages of DTW data augmentation and InceptionTime, the proposed TimeCLR successfully extends SimCLR to the time series field. We design a variety of experiments and perform careful ablation studies. The results show that TimeCLR not only achieves performance comparable to supervised InceptionTime on multiple tasks, but also outperforms supervised models when labeled data are insufficient, and can be flexibly applied to univariate time series data from different domains.
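
The abstract describes the standard SimCLR recipe carried over to time series: two augmented views of each unlabeled series are encoded, projected, and pulled together by a contrastive (NT-Xent) loss. The sketch below illustrates one such training step in PyTorch under simplifying assumptions: the encoder is a generic 1-D CNN stand-in rather than the paper's InceptionTime-based extractor, `augment` is a jitter placeholder rather than the DTW data augmentation, and all names and hyperparameters are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over the 2N projections of two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d), unit norm
    sim = z @ z.t() / temperature                             # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                      # positive pair = the other view


class Encoder(nn.Module):
    """Stand-in 1-D convolutional backbone with a SimCLR-style projection head."""
    def __init__(self, hidden=64, proj_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),            # global average pooling
        )
        self.projector = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, proj_dim)
        )

    def forward(self, x):                                     # x: (N, 1, T)
        return self.projector(self.backbone(x))


def augment(x):
    """Placeholder view generator (Gaussian jitter); the paper uses DTW data augmentation."""
    return x + 0.05 * torch.randn_like(x)


encoder = Encoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
batch = torch.randn(16, 1, 128)                               # 16 unlabeled series of length 128
loss = nt_xent_loss(encoder(augment(batch)), encoder(augment(batch)))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```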

Highlights

We propose a novel time series data augmentation method, DTW data augmentation, which not only generates phase shifts and amplitude changes but also retains the structure and feature information of the time series (an illustrative sketch follows this list).
We design a feature extractor that can generate time series representations in an end-to-end manner, drawing on the advantages of InceptionTime, the current state-of-the-art deep learning-based time series classification method.
Combining the advantages of DTW data augmentation and InceptionTime model, we successfully extend SimCLR to the time series field.
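
The highlights do not spell out the DTW data augmentation procedure, so the sketch below is not the paper's method. It only illustrates, with a generic smooth random time warp plus a random amplitude envelope in NumPy, the kind of phase-shift and amplitude-change views the first highlight describes; the function and parameter names are hypothetical.

```python
import numpy as np


def random_warp_augment(x, warp_strength=0.2, amp_strength=0.2, knots=4, rng=None):
    """Return an augmented copy of a univariate series x of shape (T,)."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, x.size)
    knot_t = np.linspace(0.0, 1.0, knots + 2)

    # Phase shift: build a monotone random warp of the time axis from a few
    # perturbed knots, normalise it to [0, 1], and resample the series on it.
    increments = np.abs(1.0 + warp_strength * rng.standard_normal(knots + 2))
    warped_t = np.interp(t, knot_t, np.cumsum(increments))
    warped_t = (warped_t - warped_t[0]) / (warped_t[-1] - warped_t[0])
    warped = np.interp(warped_t, t, x)

    # Amplitude change: multiply by a smooth random envelope interpolated
    # from noisy knot values around 1.0.
    envelope = np.interp(t, knot_t, 1.0 + amp_strength * rng.standard_normal(knots + 2))
    return warped * envelope


series = np.sin(np.linspace(0.0, 6.0 * np.pi, 200))   # toy univariate series
view = random_warp_augment(series)                     # one augmented view
```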

Published In

Knowledge-Based Systems, Volume 245, Issue C, June 2022, 433 pages

Publisher

Elsevier Science Publishers B.V., Netherlands

Author Tags

1. Univariate time series
2. Representation learning
3. Self-supervised learning
4. Contrastive learning
5. Data augmentation

Qualifiers

• Research-article

Cited By

• (2024) Deep Learning for Time Series Classification and Extrinsic Regression: A Current Survey. ACM Computing Surveys 56 (9), 1–45. https://doi.org/10.1145/3649448. Online publication date: 25-Apr-2024.
• (2024) Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask. In: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2560–2571. https://doi.org/10.1145/3637528.3671673. Online publication date: 25-Aug-2024.
• (2024) Time-Series Representation Learning via Dual Reference Contrasting. In: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pp. 3042–3051. https://doi.org/10.1145/3627673.3679699. Online publication date: 21-Oct-2024.
• (2024) Bidirectional consistency with temporal-aware for semi-supervised time series classification. Neural Networks 180 (C). https://doi.org/10.1016/j.neunet.2024.106709. Online publication date: 1-Dec-2024.
• (2024) A robust multi-scale feature extraction framework with dual memory module for multivariate time series anomaly detection. Neural Networks 177 (C). https://doi.org/10.1016/j.neunet.2024.106395. Online publication date: 1-Sep-2024.
• (2024) A filter-augmented auto-encoder with learnable normalization for robust multivariate time series anomaly detection. Neural Networks 170 (C), 478–493. https://doi.org/10.1016/j.neunet.2023.11.047. Online publication date: 12-Apr-2024.
• (2024) TS-TFSIAM. Knowledge-Based Systems 288 (C). https://doi.org/10.1016/j.knosys.2024.111472. Online publication date: 15-Mar-2024.
• (2024) An adversarial contrastive autoencoder for robust multivariate time series anomaly detection. Expert Systems with Applications 245 (C). https://doi.org/10.1016/j.eswa.2023.123010. Online publication date: 2-Jul-2024.
• (2024) A cross-layered cluster embedding learning network with regularization for multivariate time series anomaly detection. The Journal of Supercomputing 80 (8), 10444–10468. https://doi.org/10.1007/s11227-023-05833-9. Online publication date: 1-May-2024.
• (2024) Contrastive-based YOLOv7 for personal protective equipment detection. Neural Computing and Applications 36 (5), 2445–2457. https://doi.org/10.1007/s00521-023-09212-6. Online publication date: 1-Feb-2024.