Abstract
A large-scale labeled dataset is a key factor for the success of supervised deep learning in histopathological image analysis. However, exhaustive annotation requires careful visual inspection by pathologists, which is extremely time-consuming and labor-intensive. Self-supervised learning (SSL) can alleviate this issue by pre-training models under the supervision of the data itself, yielding representations that generalize well to various downstream tasks with limited annotations. In this work, we propose a hybrid model (TransPath) that is pre-trained in an SSL manner on a massive collection of unlabeled histopathological images to discover inherent image properties and capture domain-specific feature embeddings. TransPath serves as a collaborative local-global feature extractor, combining a convolutional neural network (CNN) with a modified transformer architecture. We propose a token-aggregating and excitation (TAE) module, placed behind the self-attention of the transformer encoder, to capture more global information. We evaluate the pre-trained TransPath by fine-tuning it on three downstream histopathological image classification tasks. Our experimental results indicate that TransPath outperforms state-of-the-art vision transformer networks, and that the visual representations learned by SSL on domain-relevant histopathological images transfer better than a supervised baseline pre-trained on ImageNet. Our code and pre-trained models will be available at https://github.com/Xiyue-Wang/TransPath.
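To make the abstract's description of the TAE module more concrete, below is a minimal PyTorch sketch of one way a token-aggregating and excitation block could be wired behind the self-attention of a transformer encoder block. This is not the authors' released implementation: the class names (TAEModule, TransformerBlockWithTAE), the mean-pooling aggregation, and the reduction ratio are illustrative assumptions; the released code at the repository above is the authoritative reference.

```python
import torch
import torch.nn as nn


class TAEModule(nn.Module):
    """Illustrative token-aggregating-and-excitation block (design details are
    assumptions, not the authors' released code). Tokens are aggregated into a
    global descriptor, squeezed through a bottleneck MLP, and the resulting
    channel-wise gates re-scale every token, injecting global context."""

    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.excite = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim)
        pooled = tokens.mean(dim=1)           # aggregate all tokens -> (batch, dim)
        weights = self.excite(pooled)         # channel-wise gates in [0, 1]
        return tokens * weights.unsqueeze(1)  # re-weight every token


class TransformerBlockWithTAE(nn.Module):
    """Pre-norm transformer encoder block with the TAE block placed directly
    after multi-head self-attention, as the abstract describes."""

    def __init__(self, dim: int = 768, num_heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.tae = TAEModule(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + self.tae(attn_out)            # TAE applied behind self-attention
        x = x + self.mlp(self.norm2(x))
        return x


if __name__ == "__main__":
    block = TransformerBlockWithTAE(dim=768)
    # e.g. a CNN feature map flattened into 14x14 patch tokens of width 768
    patch_tokens = torch.randn(2, 196, 768)
    print(block(patch_tokens).shape)          # torch.Size([2, 196, 768])
```

In this sketch the CNN backbone is assumed to produce the patch tokens that feed the transformer block, reflecting the hybrid local-global design described in the abstract.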
Acknowledgements
This research was funded by the National Natural Science Foundation of China (No. 61571314), the Science & Technology Department of Sichuan Province (No. 2020YFG0081), and the Innovative Youth Projects of the Ocean Remote Sensing Engineering Technology Research Center of the State Oceanic Administration of China (No. 2015001).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, X. et al. (2021). TransPath: Transformer-Based Self-supervised Learning for Histopathological Image Classification. In: de Bruijne, M., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science, vol. 12908. Springer, Cham. https://doi.org/10.1007/978-3-030-87237-3_18
DOI: https://doi.org/10.1007/978-3-030-87237-3_18
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87236-6
Online ISBN: 978-3-030-87237-3
eBook Packages: Computer Science (R0)