Abstract
Machine learning methods strive to learn, during training, a robust model that generalizes well to test samples even in the presence of distribution shifts. In practice, however, these methods often suffer performance degradation because the test distribution is unknown. Test-time adaptation (TTA), an emerging paradigm, adapts a pre-trained model to unlabeled data at test time, before making predictions. Recent progress in this paradigm has highlighted the significant benefits of using unlabeled test data to self-adapt models prior to inference. In this survey, we categorize TTA into several distinct groups based on the form of the test data, namely, test-time domain adaptation, test-time batch adaptation, and online test-time adaptation. For each category, we provide a comprehensive taxonomy of advanced algorithms and discuss various learning scenarios. Furthermore, we analyze relevant applications of TTA and discuss open challenges and promising directions for future research. For a comprehensive list of TTA methods, please refer to https://github.com/tim-learn/awesome-test-time-adaptation.
Notes
In this survey, we use the terms "test data" and "target data" interchangeably to refer to the data used for adaptation at test time.
Such single-sample adaptation corresponds to a batch size of 1, also known as test-time instance adaptation.
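The adaptation loop described above — updating a pre-trained model on unlabeled test data before predicting — can be sketched in a few lines. The following is a minimal NumPy toy, not any particular published method: the fixed linear "pre-trained" weights `W`, the adaptable output scale `gamma`, and the finite-difference gradient are all illustrative assumptions. It uses prediction-entropy minimization, a common unsupervised objective in this literature, as the test-time loss.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    """Average Shannon entropy of a batch of predictive distributions."""
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Hypothetical "pre-trained" linear classifier: weights W stay frozen at
# test time; only the per-class scale gamma is adapted (a stand-in for
# the small affine/normalization parameters many TTA methods update).
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))   # 5 input features -> 3 classes
gamma = np.ones(3)

def predict(x, gamma):
    return softmax((x @ W) * gamma)

def adapt_step(x, gamma, lr=0.1, eps=1e-5):
    """One online TTA step: gradient descent on prediction entropy.
    The gradient is taken by central finite differences for clarity."""
    grad = np.zeros_like(gamma)
    for k in range(gamma.size):
        up, down = gamma.copy(), gamma.copy()
        up[k] += eps
        down[k] -= eps
        grad[k] = (mean_entropy(predict(x, up))
                   - mean_entropy(predict(x, down))) / (2 * eps)
    return gamma - lr * grad

# A batch of unlabeled test samples arrives; adapt, then predict.
x_test = rng.normal(size=(16, 5))
h_before = mean_entropy(predict(x_test, gamma))
for _ in range(10):
    gamma = adapt_step(x_test, gamma)
h_after = mean_entropy(predict(x_test, gamma))
preds = predict(x_test, gamma).argmax(axis=1)
```

Adapting on the full batch at once corresponds to test-time batch adaptation; running `adapt_step` on one sample at a time (batch size 1) is the instance-adaptation setting noted above.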
Liu, X., Xing, F., Yang, C., El Fakhri, G., & Woo, J. (2021). Adapting off-the-shelf source segmenter for target medical image segmentation. In Proceedings of MICCAI (pp. 549–559).
Liu, Y., Zhang, W., & Wang, J. (2021). Source-free domain adaptation for semantic segmentation. In Proceedings of CVPR (pp. 1215–1224).
Liu, Y., Zhang, W., Wang, J., & Wang, J. (2021). Data-free knowledge transfer: A survey. arXiv:2112.15278.
Liu, X., & Yuan, Y. (2022). A source-free domain adaptive polyp detection framework with style diversification flow. IEEE Transactions on Medical Imaging, 41(7), 1897–1908.
Liu, C., Zhou, L., Ye, M., & Li, X. (2022). Self-alignment for black-box domain adaptation of image classification. IEEE Signal Processing Letters, 29, 1709–1713.
Long, M., Cao, Y., Wang, J., & Jordan, M. (2015). Learning transferable features with deep adaptation networks. In Proceedings of ICML (pp. 97–105).
Lumentut, J. S., & Park, I. K. (2022). 3d body reconstruction revisited: Exploring the test-time 3d body mesh refinement strategy via surrogate adaptation. In Proceedings of ACM-MM (pp. 5923–5933).
Luo, X., Chen, W., Tan, Y., Li, C., He, Y., & Jia, X. (2021). Exploiting negative learning for implicit pseudo label rectification in source-free domain adaptive semantic segmentation. arXiv:2106.12123.
Luo, Y., Liu, P., Guan, T., Yu, J., & Yang, Y. (2020). Adversarial style mining for one-shot unsupervised domain adaptation. In Proceedings of NeurIPS (pp. 20612–20623).
Lyu, F., Ye, M., Ma, A. J., Yip, T.C.-F., Wong, G.L.-H., & Yuen, P. C. (2022). Learning from synthetic CT images via test-time training for liver tumor segmentation. IEEE Transactions on Medical Imaging, 41(9), 2510–2520.
Ma, W., Chen, C., Zheng, S., Qin, J., Zhang, H., & Dou, Q. (2022). Test-time adaptation with calibration of medical image classification nets for label distribution shift. In Proceedings of MICCAI (pp. 313–323).
Ma, X., Zhang, J., Guo, S., & Xu, W. (2023). Swapprompt: Test-time prompt adaptation for vision-language models. In Proceedings of NeurIPS.
Ma, N., Bu, J., Lu, L., Wen, J., Zhou, S., Zhang, Z., Gu, J., Li, H., & Yan, X. (2022). Context-guided entropy minimization for semi-supervised domain adaptation. Neural Networks, 154, 270–282.
Mancini, M., Karaoguz, H., Ricci, E., Jensfelt, P., & Caputo, B. (2018). Kitting in the wild through online domain adaptation. In Proceedings of IROS (pp. 1103–1109).
Mao, C., Chiquier, M., Wang, H., Yang, J., & Vondrick, C. (2021). Adversarial attacks are reversible with natural supervision. In Proceedings of ICCV (pp. 661–671).
Marsden, R. A., Döbler, M., & Yang, B. (2024). Universal test-time adaptation through weight ensembling, diversity weighting, and prior correction. In Proceedings of WACV (pp. 2555–2565).
Min, C., Kim, T., & Lim, J. (2023). Meta-learning for adaptation of deep optical flow networks. In Proceedings of WACV (pp. 2145–2154).
Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv:1411.1784.
Mirza, M. J., Micorek, J., Possegger, H., & Bischof, H. (2022). The norm must go on: Dynamic unsupervised domain adaptation by normalization. In Proceedings of CVPR (pp. 14765–14775).
Mirza, M. J., Soneira, P. J., Lin, W., Kozinski, M., Possegger, H., & Bischof, H. (2023). Actmad: Activation matching to align distributions for test-time-training. In Proceedings of CVPR (pp. 24152–24161).
Miyato, T., Maeda, S.-I., Koyama, M., & Ishii, S. (2018). Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8), 1979–1993.
Mohan, S., Vincent, J.L., Manzorro, R., Crozier, P., Fernandez-Granda, C., & Simoncelli, E. (2021). Adaptive denoising via gaintuning. In Proceedings of NeurIPS (pp. 23727–23740).
Moon, J. H., Das, D., & Lee, C. S. G. (2020). Multi-step online unsupervised domain adaptation. In Proceedings of ICASSP (pp. 41172–41576).
Morerio, P., Volpi, R., Ragonesi, R., & Murino, V. (2020). Generative pseudo-label refinement for unsupervised domain adaptation. In Proceedings of WACV (pp. 3130–3139).
Müller, R., Kornblith, S., & Hinton, G. E. (2019). When does label smoothing help? In Proceedings of NeurIPS (pp. 4694–4703).
Mummadi, C. K., Hutmacher, R., Rambach, K., Levinkov, E., Brox, T., & Metzen, J. H. (2021). Test-time adaptation to distribution shift by confidence maximization and input transformation. arXiv:2106.14999.
Nado, Z., Padhy, S., Sculley, D., D’Amour, A., Lakshminarayanan, B., & Snoek, J. (2020). Evaluating prediction-time batch normalization for robustness under covariate shift. In Proceedings of ICML workshops.
Naik, A., Wu, Y., Naik, M., & Wong, E. (2023). Do machine learning models learn common sense? arXiv:2303.01433.
Nayak, G. K., Mopuri, K. R., Jain, S., & Chakraborty, A. (2022). Mining data impressions from deep models as substitute for the unavailable training data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11), 8465–8481.
Nelakurthi, A. R., Maciejewski, R., & He, J. (2018). Source free domain adaptation using an off-the-shelf classifier. In Proceedings of IEEE BigData (pp. 140–145).
Nitzan, Y., Aberman, K., He, Q., Liba, O., Yarom, M., Gandelsman, Y., Mosseri, I., Pritch, Y., & Cohen-Or, D. (2022). Mystyle: A personalized generative prior. ACM Transactions on Graphics, 41(6), 1–10.
Niu, S., Wu, J., Zhang, Y., Chen, Y., Zheng, S., Zhao, P., & Tan, M. (2022). Efficient test-time model adaptation without forgetting. In Proceedings of ICML (pp. 16888–16905).
Niu, S., Wu, J., Zhang, Y., Wen, Z., Chen, Y., Zhao, P., & Tan, M. (2023). Towards stable test-time adaptation in dynamic wild world. In Proceedings of ICLR.
Panagiotakopoulos, T., Dovesi, P. L., Härenstam-Nielsen, L., & Poggi, M. (2022). Online domain adaptation for semantic segmentation in ever-changing conditions. In Proceedings of ECCV (pp. 128–146).
Pandey, P., Raman, M., Varambally, S., & Prathosh, A. P. (2021). Generalization on unseen domains via inference-time label-preserving target projections. In Proceedings of CVPR (pp. 12924–12933).
Pan, S. J., & Yang, Q. (2009). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345–1359.
Park, S., Yoo, J., Cho, D., Kim, J., & Kim, T. H. (2020). Fast adaptation to super-resolution networks via meta-learning. In Proceedings of ECCV (pp. 754–769).
Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., & Efros, A. A. (2016). Context encoders: Feature learning by inpainting. In Proceedings of CVPR (pp. 2536–2544).
Paul, S., Saha, A., & Samanta, A. (2022). Ttt-ucdr: Test-time training for universal cross-domain retrieval. arXiv:2208.09198.
Peng, Q., Ding, Z., Lyu, L., Sun, L., & Chen, C. (2022). Toward better target representation for source-free and black-box domain adaptation. arXiv:2208.10531.
Pérez, J. C., Alfarra, M., Jeanneret, G., Rueda, L., Thabet, A., Ghanem, B., & Arbeláez, P. (2021). Enhancing adversarial robustness via test-time transformation ensembling. In Proceedings of ICCV (pp. 81–91).
Plananamente, M., Plizzari, C., & Caputo, B. (2022). Test-time adaptation for egocentric action recognition. In Proceedings of ICIAP (pp. 206–218).
Prabhu, V., Khare, S., Kartik, D., & Hoffman, J. (2022). Augco: Augmentation consistency-guided self-training for source-free domain adaptive semantic segmentation. arXiv:2107.10140.
Prabhudesai, M., Ke, T.-W., Li, A., Pathak, D., & Fragkiadaki, K. (2023). Test-time adaptation of discriminative models via diffusion generative feedback. In Proceedings of NeurIPS.
Press, O., Schneider, S., Kümmerer, M., & Bethge, M. (2023). Rdumb: A simple approach that questions our progress in continual test-time adaptation. In Proceedings of NeurIPS.
Qiu, Z., Zhang, Y., Lin, H., Niu, S., Liu, Y., Du, Q., & Tan, M. (2021). Source-free domain adaptation via avatar prototype generation and adaptation. In Proceedings of IJCAI (pp. 2921–2927).
Qu, S., Chen, G., Zhang, J., Li, Z., He, W., & Tao, D. (2022). Bmd: A general class-balanced multicentric dynamic prototype strategy for source-free domain adaptation. In Proceedings of ECCV (pp. 165–182).
Qu, S., Zou, T., Roehrbein, F., Lu, C., Chen, G., Tao, D., & Jiang, C. (2023). Upcycling models under domain and category shift. In Proceedings of CVPR.
Quinonero-Candela, J., Sugiyama, M., Schwaighofer, A., & Lawrence, N. D. (2008). Dataset shift in machine learning. MIT Press.
Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., & Clark, J., et al. (2021). Learning transferable visual models from natural language supervision. In Proceedings of ICML (pp. 8748–8763).
Ragab, M., Eldele, E., Tan, W. L., Foo, C.-S., Chen, Z., Wu, M., Kwoh, C.-K., & Li, X. (2023). Adatime: A benchmarking suite for domain adaptation on time series data. ACM Transactions on Knowledge Discovery from Data.
Reddy, N., Singhal, A., Kumar, A., Baktashmotlagh, M., & Arora, C. (2022). Master of all: simultaneous generalization of urban-scene segmentation to all adverse weather conditions. In Proceedings of ECCV (pp. 51–69).
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of CVPR (pp. 10684–10695).
Rostami, M. (2021). Lifelong domain adaptation via consolidated internal distribution. In Proceedings of NeurIPS (pp. 11172–11183).
Roy, S., Trapp, M., Pilzer, A., Kannala, J., Sebe, N., Ricci, E., & Solin, A. (2022). Uncertainty-guided source-free domain adaptation. In Proceedings of ECCV (pp. 537–555).
RoyChowdhury, A., Chakrabarty, P., Singh, A., Jin, S., Jiang, H., Cao, L., & Learned-Miller, E. (2019). Automatic adaptation of object detectors to new domains using self-training. In Proceedings of CVPR (pp. 780–790).
Royer, A., & Lampert, C. H. (2015). Classifier adaptation at prediction time. In Proceedings of CVPR (pp. 1401–1409).
Rusak, E., Schneider, S., Pachitariu, G., Eck, L., Gehler, P. V., Bringmann, O., Brendel, W., & Bethge, M. (2022). If your data distribution shifts, use self-learning. Transactions on Machine Learning Research.
Saenko, K., Kulis, B., Fritz, M., & Darrell, T. (2010). Adapting visual category models to new domains. In Proceedings of ECCV (pp. 213–226).
Saerens, M., Latinne, P., & Decaestecker, C. (2002). Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. Neural Computation, 14(1), 21–41.
Sahoo, R., Shanmugam, D., & Guttag, J. (2020). Unsupervised domain adaptation in the absence of source data. In Proceedings of ICML workshops.
Sain, A., Bhunia, A. K., Potlapalli, V., Chowdhury, P. N., Xiang, T., & Song, Y.-Z. (2022). Sketch3t: Test-time training for zero-shot sbir. In Proceedings of CVPR (pp. 7462–7471).
Saito, K., Watanabe, K., Ushiku, Y., & Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of CVPR (pp. 3723–3732).
Saltori, C., Krivosheev, E., Lathuilière, S., Sebe, N., Galasso, F., Fiameni, G., Ricci, E., & Poiesi, F. (2022). Gipso: Geometrically informed propagation for online adaptation in 3D lidar segmentation. In Proceedings of ECCV (pp. 567–585).
Saltori, C., Lathuilière, S., Sebe, N., Ricci, E., & Galasso, F. (2020). Sf-uda3d: Source-free unsupervised domain adaptation for lidar-based 3d object detection. In Proceedings of 3DV (pp. 771–780).
Samadh, J. H. A., Gani, H., Hussein, N. H., Khattak, M. U., Naseer, M., Khan, F., & Khan, S. (2023). Align your prompts: Test-time prompting with distribution alignment for zero-shot generalization. In Proceedings of NeurIPS.
Sarkar, A., Sarkar, A., & Balasubramanian, V. N. (2022). Leveraging test-time consensus prediction for robustness against unseen noise. In Proceedings of WACV (pp. 1839–1848).
Schneider, S., Rusak, E., Eck, L., Bringmann, O., Brendel, W., & Bethge, M. (2020). Improving robustness against common corruptions by covariate shift adaptation. In Proceedings of NeurIPS (pp. 11539–11551).
Segu, M., Tonioni, A., & Tombari, F. (2023). Batch normalization embeddings for deep domain generalization. Pattern Recognition, 135, 109115.
Seo, S., Suh, Y., Kim, D., Kim, G., Han, J., & Han, B. (2020). Learning to optimize domain specific normalization for domain generalization. In Proceedings of ECCV (pp. 68–83).
Shanmugam, D., Blalock, D., Balakrishnan, G., & Guttag, J. (2021). Better aggregation in test-time augmentation. In Proceedings of ICCV (pp. 1214–1223).
Sheng, L., Liang, J., He, R., Wang, Z., & Tan, T. (2023). Adaptguard: Defending against universal attacks for model adaptation. In Proceedings of ICCV (pp. 19093–19103).
Shi, Y., & Sha, F. (2012). Information-theoretical learning of discriminative clusters for unsupervised domain adaptation. In Proceedings of ICML (pp. 1275–1282).
Shi, C., Holtz, C., & Mishne, G. (2021). Online adversarial purification based on self-supervision. In Proceedings of ICLR.
Shin, I., Tsai, Y.-H., Zhuang, B., Schulter, S., Liu, B., Garg, S., Kweon, I. S., & Yoon, K.-J. (2022). Mm-tta: Multi-modal test-time adaptation for 3d semantic segmentation. In Proceedings of CVPR (pp. 16928–16937).
Shocher, A., Cohen, N., & Irani, M. (2018). “Zero-shot” super-resolution using deep internal learning. In Proceedings of CVPR (pp. 3118–3126).
Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1), 1–48.
Shu, M., Nie, W., Huang, D.-A., Yu, Z., Goldstein, T., Anandkumar, A., & Xiao, C. (2022). Test-time prompt tuning for zero-shot generalization in vision-language models. In Proceedings of NeurIPS (pp. 14274–14289).
Shwartz-Ziv, R., & Armon, A. (2022). Tabular data: Deep learning is not all you need. Information Fusion, 81, 84–90.
Sinha, S., Gehler, P., Locatello, F., & Schiele, B. (2023). Test: Test-time self-training under distribution shift. In Proceedings of WACV (pp. 2759–2769).
Šipka, T., Šulc, M., & Matas, J. (2022). The hitchhiker’s guide to prior-shift adaptation. In Proceedings of WACV (pp. 1516–1524).
Sivaprasad, P. T., & Fleuret, F. (2021). Test time adaptation through perturbation robustness. In Proceedings of NeurIPS workshops.
Sivaprasad, P. T., & Fleuret, F. (2021). Uncertainty reduction for model adaptation in semantic segmentation. In Proceedings of CVPR (pp. 9613–9623).
Smith, L., & Gal, Y. (2018). Understanding measures of uncertainty for adversarial example detection. In Proceedings of UAI (pp. 560–569).
Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C. A., Cubuk, E. D., Kurakin, A., & Li, C.-L. (2020). Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Proceedings of NeurIPS (pp. 596–608).
Song, J., Lee, J., Kweon, I. S., & Choi, S. (2023). Ecotta: Memory-efficient continual test-time adaptation via self-distilled regularization. In Proceedings of CVPR.
Song, J., Park, K., Shin, I., Woo, S., & Kweon, I. S. (2022). Cd-tta: Compound domain test-time adaptation for semantic segmentation. arXiv:2212.08356.
Stan, S., & Rostami, M. (2021). Unsupervised model adaptation for continual semantic segmentation. In Proceedings of AAAI (pp. 2593–2601).
Su, Y., Xu, X., & Jia, K. (2022). Revisiting realistic test-time training: Sequential inference and adaptation by anchored clustering. In Proceedings of NeurIPS (pp. 17543–17555).
Sun, T., Lu, C., & Ling, H. (2022). Prior knowledge guided unsupervised domain adaptation. In Proceedings of ECCV (pp. 639–655).
Sun, T., Lu, C., & Ling, H. (2023). Domain adaptation with adversarial training on penultimate activations. In Proceedings of AAAI.
Sun, Z., Shen, Z., Lin, L., Yu, Y., Yang, Z., Yang, S., & Chen, W. (2022). Dynamic domain generalization. In Proceedings of IJCAI (pp. 1342–1348).
Sun, Y., Tzeng, E., Darrell, T., & Efros, A. A. (2019). Unsupervised domain adaptation through self-supervision. arXiv:1909.11825.
Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A., & Hardt, M. (2020). Test-time training with self-supervision for generalization under distribution shifts. In Proceedings of ICML (pp. 9229–9248).
Tan, Y., Chen, C., Zhuang, W., Dong, X., Lyu, L., & Long, G. (2023). Is heterogeneity notorious? Taming heterogeneity to handle test-time shift in federated learning. In Proceedings of NeurIPS.
Tang, S., Shi, Y., Ma, Z., Li, J., Lyu, J., Li, Q., & Zhang, J. (2021). Model adaptation through hypothesis transfer with gradual knowledge distillation. In Proceedings of IROS (pp. 5679–5685).
Tang, Y., Zhang, C., Xu, H., Chen, S., Cheng, J., Leng, L., Guo, Q., & He, Z. (2023). Neuro-modulated Hebbian learning for fully test-time adaptation. In Proceedings of CVPR.
Tanwisuth, K., Fan, X., Zheng, H., Zhang, S., Zhang, H., Chen, B., & Zhou, M. (2021). A prototype-oriented framework for unsupervised domain adaptation. In Proceedings of NeurIPS (pp. 17194–17208).
Tanwisuth, K., Zhang, S., Zheng, H., He, P., & Zhou, M. (2023). Pouf: Prompt-oriented unsupervised fine-tuning for large pre-trained models. In Proceedings of ICML (pp. 33816–33832).
Tarvainen, A., & Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Proceedings of NeurIPS (pp. 1195–1204).
Termöhlen, J.-A., Klingner, M., Brettin, L. J., Schmidt, N. M., & Fingscheidt, T. (2021). Continual unsupervised domain adaptation for semantic segmentation by online frequency domain style transfer. In Proceedings of ITSC (pp. 2881–2888).
Thopalli, K., Turaga, P., & Thiagarajan, J. J. (2023). Domain alignment meets fully test-time adaptation. In Proceedings of ACML (pp. 1006–1021).
Tian, Q., Peng, S., & Ma, T. (2023). Source-free unsupervised domain adaptation with trusted pseudo samples. ACM Transactions on Intelligent Systems and Technology, 14(2), 1–17.
Tian, J., Zhang, J., Li, W., & Xu, D. (2022). Vdm-da: Virtual domain modeling for source data-free domain adaptation. IEEE Transactions on Circuits and Systems for Video Technology, 32(6), 3749–3760.
Tomar, D., Vray, G., Bozorgtabar, B., & Thiran, J.-P. (2023). Tesla: Test-time self-learning with automatic adversarial augmentation. In Proceedings of CVPR.
Tommasi, T., Orabona, F., & Caputo, B. (2013). Learning categories from few examples with multi model knowledge transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 928–941.
Tsai, Y.-Y., Mao, C., Lin, Y.-K., & Yang, J. (2023). Self-supervised convolutional visual prompts. arXiv:2303.00198.
Tzeng, E., Hoffman, J., Saenko, K., & Darrell, T. (2017). Adversarial discriminative domain adaptation. In Proceedings of CVPR (pp. 7167–7176).
Ulyanov, D., Vedaldi, A., & Lempitsky, V. (2016). Instance normalization: The missing ingredient for fast stylization. arXiv:1607.08022.
Valvano, G., Leo, A., & Tsaftaris, S. A. (2022). Re-using adversarial mask discriminators for test-time training under distribution shifts. Journal of Machine Learning for Biomedical Imaging, 1, 1–27.
van de Ven, G. M., Tuytelaars, T., & Tolias, A. S. (2022). Three types of incremental learning. Nature Machine Intelligence, 4, 1185–1197.
van Laarhoven, T., & Marchiori, E. (2017). Unsupervised domain adaptation with random walks on target labelings. arXiv:1706.05335.
Varsavsky, T., Orbes-Arteaga, M., Sudre, C. H., Graham, M. S., Nachev, P., & Cardoso, M. J. (2020). Test-time unsupervised domain adaptation. In Proceedings of MICCAI (pp. 428–436).
Vibashan, V. S., Valanarasu, J. M. J., & Patel, V. M. (2022). Target and task specific source-free domain adaptive image segmentation. arXiv:2203.15792.
Volpi, R., de Jorge, P., Larlus, D., & Csurka, G. (2022). On the road to online adaptation for semantic image segmentation. In Proceedings of CVPR (pp. 19184–19195).
Wang, J.-K., & Wibisono, A. (2023). Towards understanding gd with hard and conjugate pseudo-labels for test-time adaptation. In Proceedings of ICLR.
Wang, Q., Fink, O., Van Gool, L., & Dai, D. (2022). Continual test-time domain adaptation. In Proceedings of CVPR (pp. 7201–7211).
Wang, F., Han, Z., Gong, Y., & Yin, Y. (2022). Exploring domain-invariant parameters for source free domain adaptation. In Proceedings of CVPR (pp. 7151–7160).
Wang, F., Han, Z., Zhang, Z., & Yin, Y. (2022). Active source free domain adaptation. arXiv:2205.10711.
Wang, Y., Huang, Z., & Hong, X. (2022). S-prompts learning with pre-trained transformers: An Occam’s razor for domain incremental learning. In Proceedings of NeurIPS (pp. 5682–5695).
Wang, J., Lan, C., Liu, C., Ouyang, Y., Qin, T., Lu, W., Chen, Y., Zeng, W., & Yu, P. (2022). Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering.
Wang, Y., Li, C., Jin, W., Li, R., Zhao, J., Tang, J., & Xie, X. (2022). Test-time training for graph neural networks. arXiv:2210.08813.
Wang, Y., Liang, J., & Zhang, Z. (2022). Source data-free cross-domain semantic segmentation: Align, teach and propagate. arXiv:2106.11653.
Wang, D., Liu, S., Ebrahimi, S., Shelhamer, E., & Darrell, T. (2021). On-target adaptation. arXiv:2109.01087.
Wang, Z., Luo, Y., Zheng, L., Chen, Z., Wang, S., & Huang, Z. (2023). In search of lost online test-time adaptation: A survey. arXiv:2310.20199.
Wang, D., Shelhamer, E., Liu, S., Olshausen, B., & Darrell, T. (2021). Tent: Fully test-time adaptation by entropy minimization. In Proceedings of ICLR.
Wang, D., Shelhamer, E., Olshausen, B., & Darrell, T. (2019). Dynamic scale inference by entropy minimization. arXiv:1908.03182.
Wang, X., Tsvetkov, Y., Ruder, S., & Neubig, G. (2021). Efficient test time adapter ensembling for low-resource language varieties. In Proceedings of EMNLP findings (pp. 730–737).
Wang, Z., Ye, M., Zhu, X., Peng, L., Tian, L., & Zhu, Y. (2022). Metateacher: Coordinating multi-model domain adaptation for medical image classification. In Proceedings of NeurIPS (pp. 20823–20837).
Wang, J., Zhang, J., Bian, Y., Cai, Y., Wang, C., & Pu, S. (2021). Self-domain adaptation for face anti-spoofing. In Proceedings of AAAI (pp. 2746–2754).
Wang, X., Zhuo, J., Cui, S., Wang, S., & Fang, Y. (2024). Learning invariant representation with consistency and diversity for semi-supervised source hypothesis transfer. In Proceedings of ICASSP (pp. 5125–5129).
Wang, S., Wang, J., Xi, H., Zhang, B., Zhang, L., & Wei, H. (2024). Optimization-free test-time adaptation for cross-person activity recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(4), 1–27.
Wegmann, S., Scattone, F., Carp, I., Gillick, L., Roth, R., & Yamron, J. (1998). Dragon systems’ 1997 broadcast news transcription system. In Proceedings of DARPA broadcast news transcription and understanding workshop.
Wen, Z., Niu, S., Li, G., Wu, Q., Tan, M., & Wu, Q. (2024). Test-time model adaptation for visual question answering with debiased self-supervisions. IEEE Transactions on Multimedia, 26, 2137–2147.
Wilson, G., & Cook, D. J. (2020). A survey of unsupervised deep domain adaptation. ACM Transactions on Intelligent Systems and Technology, 11(5), 1–46.
Wu, R., Guo, C., Su, Y., & Weinberger, K. Q. (2021). Online adaptation to label distribution shift. In Proceedings of NeurIPS (pp. 11340–11351).
Wu, C., Pan, Y., Li, Y., & Wang, J. Z. (2023). Learning to adapt to online streams with distribution shifts. arXiv:2303.01630.
Wu, Q., Yue, X., & Sangiovanni-Vincentelli, A. (2021). Domain-agnostic test-time adaptation by prototypical training with auxiliary data. In Proceedings of NeurIPS workshops.
Wu, A., Zheng, W.-S., Guo, X., & Lai, J.-H. (2019). Distilled person re-identification: Towards a more scalable system. In Proceedings of CVPR (pp. 1187–1196).
Xia, H., Zhao, H., & Ding, Z. (2021). Adaptive adversarial network for source-free domain adaptation. In Proceedings of ICCV (pp. 9010–9019).
Xia, K., Deng, L., Duch, W., & Wu, D. (2022). Privacy-preserving domain adaptation for motor imagery-based brain-computer interfaces. IEEE Transactions on Biomedical Engineering, 69(11), 3365–3376.
Xiao, Z., Zhen, X., Liao, S., & Snoek, C. G. M. (2023). Energy-based test sample adaptation for domain generalization. In Proceedings of ICLR.
Xiao, Z., Zhen, X., Shao, L., & Snoek, C. G. M. (2022). Learning to generalize across domains on single test samples. In Proceedings of ICLR.
Xie, Q., Dai, Z., Hovy, E., Luong, T., & Le, Q. (2020). Unsupervised data augmentation for consistency training. In Proceedings of NeurIPS (pp. 6256–6268).
Xiong, L., Ye, M., Zhang, D., Gan, Y., & Liu, Y. (2022). Source data-free domain adaptation for a faster R-CNN. Pattern Recognition, 124, 108436.
Xu, B., Liang, J., He, L., & Sun, Z. (2022). Mimic embedding via adaptive aggregation: Learning generalizable person re-identification. In Proceedings of ECCV (pp. 372–388).
Xu, Y., Yang, J., Cao, H., Wu, K., Min, W., & Chen, Z. (2022). Learning temporal consistency for source-free video domain adaptation. In Proceedings of ECCV (pp. 147–164).
Yan, H., Guo, Y., & Yang, C. (2021). Augmented self-labeling for source-free unsupervised domain adaptation. In Proceedings of NeurIPS workshops.
Yan, H., Guo, Y., & Yang, C. (2021). Source-free unsupervised domain adaptation with surrogate data generation. In Proceedings of BMVC.
Yang, Y., & Soatto, S. (2020). FDA: Fourier domain adaptation for semantic segmentation. In Proceedings of CVPR (pp. 4085–4095).
Yang, L., Gao, M., Chen, Z., Xu, R., Shrivastava, A., & Ramaiah, C. (2022). Burn after reading: Online adaptation for cross-domain streaming data. In Proceedings of ECCV (pp. 404–422).
Yang, P., Liang, J., Cao, J., & He, R. (2023). Auto: Adaptive outlier optimization for online test-time ood detection. arXiv:2303.12267.
Yang, J., Peng, X., Wang, K., Zhu, Z., Feng, J., Xie, L., & You, Y. (2023). Divide to adapt: Mitigating confirmation bias for domain adaptation of black-box predictors. In Proceedings of ICLR.
Yang, X., Song, Z., King, I., & Xu, Z. (2022). A survey on deep semi-supervised learning. IEEE Transactions on Knowledge and Data Engineering.
Yang, S., van de Weijer, J., Herranz, L., & Jui, S. (2021). Exploiting the intrinsic neighborhood structure for source-free domain adaptation. In Proceedings of NeurIPS (pp. 29393–29405).
Yang, S., Wang, Y., van de Weijer, J., Herranz, L., & Jui, S. (2021). Generalized source-free domain adaptation. In Proceedings of ICCV (pp. 8978–8987).
Yang, S., Wang, Y., Wang, K., Jui, S., & van de Weijer, J. (2022). One ring to bring them all: Model adaptation under domain and category shift. arXiv:2206.03600.
Yang, J., Yan, R., & Hauptmann, A. G. (2007). Cross-domain video concept detection using adaptive svms. In Proceedings of ACM-MM (pp. 188–197).
Yang, T., Zhou, S., Wang, Y., Lu, Y., & Zheng, N. (2022). Test-time batch normalization. arXiv:2205.10210.
Yang, H., Chen, C., Jiang, M., Liu, Q., Cao, J., Heng, P. A., & Dou, Q. (2022). Dltta: Dynamic learning rate for test-time adaptation on cross-domain medical images. IEEE Transactions on Medical Imaging, 41(12), 3575–3586.
Yang, C., Guo, X., Chen, Z., & Yuan, Y. (2022). Source free domain adaptation for medical image segmentation with fourier style mining. Medical Image Analysis, 79, 102457.
Yang, B., Ma, A. J., & Yuen, P. C. (2022). Revealing task-relevant model memorization for source-protected unsupervised domain adaptation. IEEE Transactions on Information Forensics and Security, 17, 716–731.
Yang, S., Wang, Y., Herranz, L., Jui, S., & van de Weijer, J. (2023). Casting a bait for offline and online source-free domain adaptation. Computer Vision and Image Understanding, 234, 103747.
Yang, B., Yeh, H.-W., Harada, T., & Yuen, P. C. (2021). Model-induced generalization error bound for information-theoretic representation learning in source-data-free unsupervised domain adaptation. IEEE Transactions on Image Processing, 31, 419–432.
Yang, C., & Zhou, J. (2008). Non-stationary data sequence classification using online class priors estimation. Pattern Recognition, 41(8), 2656–2664.
Ye, H., Ding, Y., Li, J., & Ng, H. T. (2022). Robust question answering against distribution shifts with test-time adaptation: An empirical study. In Proceedings of EMNLP findings.
Ye, Y., Liu, Z., Zhang, Y., Li, J., & Shen, H. (2022). Alleviating style sensitivity then adapting: Source-free domain adaptation for medical image segmentation. In Proceedings of ACM-MM (pp. 1935–1944).
Ye, M., Zhang, J., Ouyang, J., & Yuan, D. (2021). Source data-free unsupervised domain adaptation for semantic segmentation. In Proceedings of ACM-MM (pp. 2233–2242).
Yi, L., Xu, G., Xu, P., Li, J., Pu, R., Ling, C., McLeod, A. I., & Wang, B. (2023). When source-free domain adaptation meets learning with noisy labels. In Proceedings of ICLR.
Yi, C., Yang, S., Wang, Y., Li, H., Tan, Y.-P., & Kot, A. (2023). Temporal coherent test-time optimization for robust video classification. In Proceedings of ICLR.
Yin, H., Molchanov, P., Alvarez, J. M., Li, Z., Mallya, A., Hoiem, D., Jha, N. K., & Kautz, J. (2020). Dreaming to distill: Data-free knowledge transfer via deepinversion. In Proceedings of CVPR (pp. 8715–8724).
Yoon, J., Hwang, S. J., & Lee, J. (2021). Adversarial purification with score-based generative models. In Proceedings of ICML (pp. 12062–12072).
Yoon, H. S., Yoon, E., Tee, J. T. J., Hasegawa-Johnson, M., Li, Y., & Yoo, C. D. (2024). C-tpt: Calibrated test-time prompt tuning for vision-language models via text feature dispersion. In Proceedings of ICLR.
Yosinski, J., Clune, J., Bengio, Y., & Lipson, H. (2014). How transferable are features in deep neural networks? In Proceedings of NeurIPS (pp. 3320–3328).
You, Y., Chen, T., Sui, Y., Chen, T., Wang, Z., & Shen, Y. (2020). Graph contrastive learning with augmentations. In Proceedings of NeurIPS (pp. 5812–5823).
You, F., Li, J., & Zhao, Z. (2021). Test-time batch statistics calibration for covariate shift. arXiv:2110.04065.
You, F., Li, J., Zhu, L., Chen, Z., & Huang, Z. (2021). Domain adaptive semantic segmentation without source data. In Proceedings of ACM-MM (pp. 3293–3302).
You, K., Long, M., Cao, Z., Wang, J., & Jordan, M. I. (2019). Universal domain adaptation. In Proceedings of CVPR (pp. 2720–2729).
Yu, Y., Sheng, L., He, R., & Liang, J. (2023). Benchmarking test-time adaptation against distribution shifts in image classification. arXiv:2307.03133.
Yuan, L., Xie, B., & Li, S. (2023). Robust test-time adaptation in dynamic scenarios. In Proceedings of CVPR (pp. 15922–15932).
Zeng, R., Deng, Q., Xu, H., Niu, S., & Chen, J. (2023). Exploring motion cues for video test-time adaptation. In Proceedings of ACM-MM (pp. 1840–1850).
Zeng, L., Han, J., Liang, D., & Ding, W. (2024). Rethinking precision of pseudo label: Test-time adaptation via complementary learning. Pattern Recognition Letters, 177, 96–102.
Zhang, Z., Chen, W., Cheng, H., Li, Z., Li, S., Lin, L., & Li, G. (2022). Divide and contrast: Source-free domain adaptation via adaptive contrastive learning. In Proceedings of NeurIPS (pp. 5137–5149).
Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2018). mixup: Beyond empirical risk minimization. In Proceedings of ICLR.
Zhang, R., Isola, P., & Efros, A. A. (2016). Colorful image colorization. In Proceedings of ECCV (pp. 649–666).
Zhang, M., Levine, S., & Finn, C. (2022). MEMO: Test time robustness via adaptation and augmentation. In Proceedings of NeurIPS (pp. 38629–38642).
Zhang, M., Marklund, H., Dhawan, N., Gupta, A., Levine, S., & Finn, C. (2021). Adaptive risk minimization: Learning to adapt to domain shift. In Proceedings of NeurIPS (pp. 23664–23678).
Zhang, J., Nie, X., & Feng, J. (2020). Inference stage optimization for cross-scenario 3d human pose estimation. In Proceedings of NeurIPS (pp. 2408–2419).
Zhang, Y.-F., Wang, J., Liang, J., Zhang, Z., Yu, B., Wang, L., Tao, D., & Xie, X. (2023). Domain-specific risk minimization for out-of-distribution generalization. In Proceedings of KDD (pp. 3409–3421).
Zhang, T., Xiang, Y., Li, X., Weng, Z., Chen, Z., & Fu, Y. (2022). Free lunch for cross-domain occluded face recognition without source data. In Proceedings of ICASSP (pp. 2944–2948).
Zhang, D., Ye, M., Xiong, L., Li, S., & Li, X. (2021). Source-style transferred mean teacher for source-data free object detection. In ACM Multimedia Asia (pp. 1–8).
Zhang, H., Zhang, Y., Jia, K., & Zhang, L. (2021). Unsupervised domain adaptation of black-box source models. In Proceedings of BMVC.
Zhang, B., Zhang, X., Liu, Y., Cheng, L., & Li, Z. (2021). Matching distributions between model and data: Cross-domain knowledge distillation for unsupervised domain adaptation. In Proceedings of ACL (pp. 5423–5433).
Zhang, X., & Chen, Y.-C. (2023). Adaptive domain generalization via online disagreement minimization. IEEE Transactions on Image Processing, 32, 4247–4258.
Zhang, J., Qi, L., Shi, Y., & Gao, Y. (2022). Generalizable model-agnostic semantic segmentation via target-specific normalization. Pattern Recognition, 122, 108292.
Zhao, B., Chen, C., & Xia, S.-T. (2023). Delta: Degradation-free fully test-time adaptation. In Proceedings of ICLR.
Zhao, H., Liu, Y., Alahi, A., & Lin, T. (2023). On pitfalls of test-time adaptation. In Proceedings of ICML (pp. 42058–42080).
Zhao, X., Liu, C., Sicilia, A., Hwang, S. J., & Fu, Y. (2022). Test-time fourier style calibration for domain generalization. In Proceedings of IJCAI (pp. 1721–1727).
Zhao, S., Wang, X., Zhu, L., & Yang, Y. (2024). Test-time adaptation with clip reward for zero-shot generalization in vision-language models. In Proceedings of ICLR.
Zhou, A., & Levine, S. (2021). Bayesian adaptation for covariate shift. In Proceedings of NeurIPS (pp. 914–927).
Zhou, K., Liu, Z., Qiao, Y., Xiang, T., & Loy, C. C. (2022). Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Zhou, Y., Ren, J., Li, F., Zabih, R., & Lim, S. N. (2023). Test-time distribution normalization for contrastively learned visual-language models. In Proceedings of NeurIPS.
Zhou, Q., Zhang, K.-Y., Yao, T., Yi, R., Sheng, K., Ding, S., & Ma, L. (2022). Generative domain adaptation for face anti-spoofing. In Proceedings of ECCV (pp. 335–356).
Zhu, W., Huang, Y., Xu, D., Qian, Z., Fan, W., & Xie, X. (2021). Test-time training for deformable multi-scale image registration. In Proceedings of ICRA (pp. 13618–13625).
Zou, Y., Yu, Z., Kumar, B. V. K., & Wang, J. (2018). Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of ECCV (pp. 289–305).
Zou, Y., Zhang, Z., Li, C.-L., Zhang, H., Pfister, T., & Huang, J.-B. (2022). Learning instance-specific adaptation for cross-domain segmentation. In Proceedings of ECCV (pp. 459–476).
Acknowledgements
We sincerely thank the editor and anonymous reviewers for their constructive comments on this work. We also thank Lijun Sheng for his valuable feedback. This work was funded by the Beijing Nova Program (No. Z211100002121108), the Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), and the National Natural Science Foundation of China (No. 62276256).
Additional information
Communicated by Hong Liu.
Cite this article
Liang, J., He, R. & Tan, T. A Comprehensive Survey on Test-Time Adaptation Under Distribution Shifts. Int J Comput Vis 133, 31–64 (2025). https://doi.org/10.1007/s11263-024-02181-w