Domain-Adaptive Prototype-Recalibrated Network with Transductive Learning Paradigm for Intelligent Fault Diagnosis under Various Limited Data Conditions
Abstract
1. Introduction
- (1) To address fault diagnosis with limited data, an innovative end-to-end DAPRN, which comprises a feature extractor, a domain discriminator, and a label predictor, is presented. During training, the feature extractor learns a representation space through a hybrid strategy that jointly minimizes the few-shot classification loss and maximizes the domain-discriminative loss (a formal sketch of this objective follows the list below). During testing, the label predictor with recalibrated prototypes recognizes the health conditions of target samples using the generalized meta-knowledge of the source diagnostic tasks.
- (2) The structure of the feature extractor is discussed in detail. In addition, a series of experiments is designed and carried out to explore whether, and how, the data capacity and category richness of the source dataset affect the performance of few-shot fault diagnosis; the experimental results are analyzed and discussed thoroughly.
- (3) To verify the validity and superiority of the DAPRN, extensive few-shot fault diagnosis tasks on rolling element bearings and planetary gearboxes under various limited-data conditions are conducted. Compared with popular existing FSL methods, in-depth quantitative and qualitative analyses convincingly demonstrate that the proposed method significantly improves few-shot fault diagnosis performance. In addition, ablation studies further verify the advantages of the proposed method.
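The hybrid training strategy in contribution (1) can be written compactly. The following is a sketch using generic symbols rather than the paper's exact notation (Section 3.2 gives the precise objective): θ_F, θ_P, and θ_D denote the parameters of the feature extractor, label predictor, and domain discriminator, L_fs is the few-shot classification loss, L_dom the domain-discriminative loss, and λ the penalty term.

```latex
\min_{\theta_F,\,\theta_P}\;\max_{\theta_D}\;
\mathcal{L}_{\mathrm{fs}}(\theta_F,\theta_P)\;-\;\lambda\,\mathcal{L}_{\mathrm{dom}}(\theta_F,\theta_D)
```

The inner maximization trains the discriminator to separate source from target representations, while the outer minimization forces the feature extractor to confuse it, i.e., to maximize the domain-discriminative loss as stated above.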
2. Background Knowledge
2.1. Few-Shot Learning for Fault Diagnosis
2.2. Prototypical Network Based on Metric Learning
2.3. Transductive Few-Shot Learning Paradigm
3. Proposed Method
3.1. The Architecture of Proposed DAPRN
3.2. Optimization Objective Function
3.3. Prototype Recalibration Strategy
3.4. Transductive Training and Testing Method
Algorithm 1 DAPRN for few-shot fault diagnosis

Transductive training procedure
Input: for an N-way K-shot fault diagnosis task, the base set in the source domain (split as a labeled subset); a target-labeled support set; a target-unlabeled query set; the number of epochs; the number of episodes; the learning rate; and the penalty term λ
Output: the trained feature extractor
1. Randomly initialize the parameters of the feature extractor, the label predictor, and the domain discriminator
2. For epoch = 1 to the number of epochs
3. Randomly select N out of the categories in the base set, then obtain the source support and query sets
4. For episode = 1 to the number of episodes
5. Sample a few-shot task from the source support and query sets, then feed it into the feature extractor and label predictor
6. Calculate the few-shot classification loss
7. Sample a few-shot task from the target support and query sets, and feed it into the feature extractor
8. Input the representation vectors of the source and target domains to the domain discriminator
9. Compute the training progress
10. Calculate the domain-discriminative loss
11. Backpropagate the total loss (the few-shot classification loss combined with the λ-penalized domain-discriminative loss)
12. Update the parameters of the feature extractor, label predictor, and domain discriminator with the learning rate
13. End
14. End

Few-shot fault diagnosis based on PRS
Input: a target-labeled support set; a target-unlabeled query set; the number of episodes; the PRS parameter Z; and the trained feature extractor
Output: prediction results and average accuracy
1. For episode = 1 to the number of episodes
2. Sample a few-shot task from the target support and query sets, and feed it into the feature extractor
3. Recalibrate the naïve prototypes by the PRS
4. Obtain the health conditions of the query samples with the refined prototypes
5. End
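The two procedures above can be summarized in a compact PyTorch sketch. Everything below is a reconstruction under assumptions: the few-shot loss follows ProtoNet [27], `D` is assumed to apply a gradient reversal layer (GRL) [38] at its input so a single backward pass trains both players, the PRS is assumed to refine each naïve prototype with the Z most confidently pseudo-labeled query samples (the paper's exact recalibration rule is given in Section 3.3), and the episode containers (`src_s.x`, `src_s.y`, `n_way`, ...) are hypothetical names.

```python
import torch
import torch.nn.functional as F

def prototypes(z_support, y_support, n_way):
    """Naive prototypes: per-class means of the support representations."""
    return torch.stack([z_support[y_support == c].mean(0) for c in range(n_way)])

def proto_logits(z_query, protos):
    """Negative squared Euclidean distances serve as class logits (ProtoNet [27])."""
    return -torch.cdist(z_query, protos).pow(2)

def training_episode(F_ext, D, src_s, src_q, tgt_s, tgt_q):
    """One episode of the hybrid objective: few-shot classification loss on the
    source task plus the domain loss; the GRL assumed inside D reverses the domain
    gradient, so the feature extractor maximizes the domain-discriminative loss."""
    zs, zq = F_ext(src_s.x), F_ext(src_q.x)
    cls_loss = F.cross_entropy(
        proto_logits(zq, prototypes(zs, src_s.y, src_s.n_way)), src_q.y)
    z_src = torch.cat([zs, zq])
    z_tgt = torch.cat([F_ext(tgt_s.x), F_ext(tgt_q.x)])
    dom_logits = D(torch.cat([z_src, z_tgt]))          # domain labels: 0 = source, 1 = target
    dom_y = torch.cat([torch.zeros(len(z_src), dtype=torch.long),
                       torch.ones(len(z_tgt), dtype=torch.long)]).to(dom_logits.device)
    return cls_loss + F.cross_entropy(dom_logits, dom_y)  # λ is applied inside the GRL

@torch.no_grad()
def prs_predict(F_ext, tgt_s, tgt_q, n_way, Z=20):
    """Assumed PRS rule: refine each naive prototype with the Z most confidently
    pseudo-labeled query samples before the final nearest-prototype decision."""
    zs, zq = F_ext(tgt_s.x), F_ext(tgt_q.x)
    naive = prototypes(zs, tgt_s.y, n_way)
    conf, pseudo = proto_logits(zq, naive).softmax(dim=1).max(dim=1)
    refined = []
    for c in range(n_way):
        idx = torch.where(pseudo == c)[0]
        top = idx[conf[idx].argsort(descending=True)[:Z]]   # Z most confident queries
        refined.append(torch.cat([zs[tgt_s.y == c], zq[top]]).mean(0))
    return proto_logits(zq, torch.stack(refined)).argmax(dim=1)
```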
4. Experimental Validation
4.1. Dataset Description and Experimental Setup
4.1.1. Bearing Dataset
4.1.2. Gearbox Dataset
4.1.3. Implementation Details
4.1.4. Comparative Methods
- Baseline [25]: The two-stage Baseline model consists of pre-training on the base set and fine-tuning on the support set. In the pre-training stage, the Baseline model is composed of a feature extractor and a base-class classifier; during fine-tuning, a novel-class classifier composed of stacked FC layers replaces the base-class classifier.
- BaselinePlus [25]: The BaselinePlus model is identical to the Baseline model except for the novel-class classifier: a cosine-similarity classification structure, which explicitly reduces intra-class variation, is used to recognize the health conditions of machinery (see the sketch after this list).
- SiameseNet [44]: The SiameseNet model, composed of a feature extractor and a deep-neural-network-based similarity-measurement module, is trained on sample pairs drawn from the same or different health conditions of machinery. Note that the sample pairs are randomly selected during training.
- MAML [43]: The MAML model, which is agnostic to both the structure of the feature extractor and the loss function, is a bilevel learning paradigm (i.e., inner-loop and outer-loop optimization) for meta-knowledge transfer; its parameters are quickly updated through the two optimization loops.
- ProtoNet [27]: The ProtoNet model, which includes a feature extractor and a Euclidean-distance-based label predictor, identifies the health conditions of machinery using naïve (uncalibrated) class prototypes.
- MatchingNet [45]: The MatchingNet model, in which the representations of support and query samples are obtained by two independent LSTM-embedded feature extractors, recognizes health conditions with an attention-based label predictor.
- RelationNet [46]: The RelationNet model is built on a feature extractor without the last two max-pooling layers, followed by a deep network for metric learning. Specifically, a two-layer CNN is trained to learn a metric space for few-shot fault diagnosis tasks.
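For concreteness, the cosine-similarity head that distinguishes BaselinePlus from Baseline might be sketched as follows; this is a minimal reconstruction in the spirit of [25], with the class name, feature dimension, and temperature value as our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """BaselinePlus-style head [25]: class scores are scaled cosine similarities
    between the representation and learnable class weight vectors, which reduces
    intra-class variation relative to an ordinary FC layer."""
    def __init__(self, feat_dim: int = 192, n_way: int = 5, scale: float = 10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_way, feat_dim))
        self.scale = scale  # softmax temperature; an assumed common default

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Normalize both representations and class weights, then compare.
        return self.scale * F.linear(F.normalize(z, dim=1),
                                     F.normalize(self.weight, dim=1))
```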
4.2. Case Study
4.2.1. Situation A: Transfer Learning Scenarios with Limited Data
4.2.2. Situation B: Cross-Domain Few-Shot Fault Diagnosis
4.3. The Structure of Feature Extractor
4.4. Ablation and Parameter Sensitivity Analysis
- NoDA: This model examines whether the domain discriminator improves generalization from the source task to the target task; a DAPRN model without the domain discriminator is used for comparison.
- NoPR: To quantify the effect of the PRS-based meta-testing module, a DAPRN model is implemented with the prototype recalibration removed. Note that the NoPR model is trained identically to the DAPRN model but tested without PRS (a minimal sketch follows this list).
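Reusing the helper functions from the hypothetical sketch in Section 3.4, the NoPR variant at test time simply skips the recalibration step:

```python
import torch

@torch.no_grad()
def nopr_predict(F_ext, tgt_s, tgt_q, n_way):
    """NoPR testing: classify query samples with the naive prototypes only,
    i.e., the PRS refinement loop is skipped entirely."""
    zs, zq = F_ext(tgt_s.x), F_ext(tgt_q.x)
    return proto_logits(zq, prototypes(zs, tgt_s.y, n_way)).argmax(dim=1)
```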
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ren, Z.; Zhu, Y.; Yan, K.; Chen, K.; Kang, W.; Yue, Y.; Gao, D. A novel model with the ability of few-shot learning and quick updating for intelligent fault diagnosis. Mech. Syst. Signal Process. 2020, 138, 106608.
- Kuang, J.; Xu, G.; Zhang, S.; Wang, B. Learning a superficial correlated representation using a local mapping strategy for bearing performance degradation assessment. Meas. Sci. Technol. 2021, 32, 065003.
- Kuang, J.; Xu, G.; Tao, T.; Yang, C.; Wei, F. Deep Joint Convolutional Neural Network with Double-Level Attention Mechanism for Multi-Sensor Bearing Performance Degradation Assessment. In Proceedings of the 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence, Sanya, China, 22–24 December 2021; pp. 1–9.
- Zhao, X.; Jia, M.; Liu, Z. Semisupervised deep sparse auto-encoder with local and nonlocal information for intelligent fault diagnosis of rotating machinery. IEEE Trans. Instrum. Meas. 2020, 70, 3501413.
- Lee, K.B.; Cheon, S.; Kim, C.O. A convolutional neural network for fault classification and diagnosis in semiconductor manufacturing processes. IEEE Trans. Semicond. Manuf. 2017, 30, 135–142.
- Jiao, J.; Zhao, M.; Lin, J.; Ding, C. Deep coupled dense convolutional network with complementary data for intelligent fault diagnosis. IEEE Trans. Ind. Electron. 2019, 66, 9858–9867.
- Li, X.; Zhang, W.; Ding, Q. Cross-domain fault diagnosis of rolling element bearings using deep generative neural networks. IEEE Trans. Ind. Electron. 2018, 66, 5525–5534.
- Pan, T.; Chen, J.; Xie, J.; Chang, Y.; Zhou, Z. Intelligent fault identification for industrial automation system via multi-scale convolutional generative adversarial network with partially labeled samples. ISA Trans. 2020, 101, 379–389.
- Yu, X.; Tang, B.; Zhang, K. Fault diagnosis of wind turbine gearbox using a novel method of fast deep graph convolutional networks. IEEE Trans. Instrum. Meas. 2021, 70, 3501111.
- Zhao, H.; Sun, S.; Jin, B. Sequential fault diagnosis based on LSTM neural network. IEEE Access 2018, 6, 12929–12939.
- Yu, J.; Liu, G. Knowledge extraction and insertion to deep belief network for gearbox fault diagnosis. Knowl. Based Syst. 2020, 197, 105883.
- Li, Q.; Shen, C.; Chen, L.; Zhu, Z. Knowledge mapping-based adversarial domain adaptation: A novel fault diagnosis method with high generalizability under variable working conditions. Mech. Syst. Signal Process. 2021, 147, 107095.
- Jiao, J.; Lin, J.; Zhao, M.; Liang, K. Double-level adversarial domain adaptation network for intelligent fault diagnosis. Knowl. Based Syst. 2020, 205, 106236.
- Kuang, J.; Xu, G.; Tao, T.; Wu, Q. Class-Imbalance Adversarial Transfer Learning Network for Cross-domain Fault Diagnosis with Imbalanced Data. IEEE Trans. Instrum. Meas. 2021, 71, 3501111.
- Kuang, J.; Xu, G.; Zhang, S.; Han, C.; Wu, Q.; Wei, F. Prototype-guided bi-level adversarial domain adaptation network for intelligent fault diagnosis of rotating machinery under various working conditions. Meas. Sci. Technol. 2022, 33, 115014.
- Kuang, J.; Xu, G.; Tao, T.; Zhang, S. Self-supervised bi-classifier adversarial transfer network for cross-domain fault diagnosis of rotating machinery. ISA Trans. 2022, in press.
- Zhao, Z.; Zhang, Q.; Yu, X.; Sun, C.; Wang, S.; Yan, R.; Chen, X. Applications of unsupervised deep transfer learning to intelligent fault diagnosis: A survey and comparative study. IEEE Trans. Instrum. Meas. 2021, 70, 3525828.
- Li, W.; Huang, R.; Li, J.; Liao, Y.; Chen, Z.; He, G.; Yan, R.; Gryllias, K. A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges. Mech. Syst. Signal Process. 2022, 167, 108487.
- Yang, B.; Lee, C.-G.; Lei, Y.; Li, N.; Lu, N. Deep partial transfer learning network: A method to selectively transfer diagnostic knowledge across related machines. Mech. Syst. Signal Process. 2021, 156, 107618.
- Deng, Y.; Huang, D.; Du, S.; Li, G.; Zhao, C.; Lv, J. A double-layer attention based adversarial network for partial transfer learning in machinery fault diagnosis. Comput. Ind. 2021, 127, 103399.
- Kuang, J.; Xu, G.; Zhang, S.; Tao, T.; Wei, F.; Yu, Y. A deep partial adversarial transfer learning network for cross-domain fault diagnosis of machinery. In Proceedings of the 2022 Prognostics and Health Management Conference (PHM-2022 London), London, UK, 27–29 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 507–512.
- Kuang, J.; Xu, G.; Tao, T.; Wu, Q.; Han, C.; Wei, F. Dual-weight Consistency-induced Partial Domain Adaptation Network for Intelligent Fault Diagnosis of Machinery. IEEE Trans. Instrum. Meas. 2022, 71, 3519612.
- Zhao, C.; Shen, W. Dual adversarial network for cross-domain open set fault diagnosis. Reliab. Eng. Syst. Saf. 2022, 221, 108358.
- Mao, G.; Li, Y.; Jia, S.; Noman, K. Interactive dual adversarial neural network framework: An open-set domain adaptation intelligent fault diagnosis method of rotating machinery. Measurement 2022, 195, 111125.
- Chen, W.-Y.; Liu, Y.-C.; Kira, Z.; Wang, Y.-C.F.; Huang, J.-B. A closer look at few-shot classification. arXiv 2019, arXiv:1904.04232.
- Ravi, S.; Larochelle, H. Optimization as a Model for Few-Shot Learning. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
- Snell, J.; Swersky, K.; Zemel, R.S. Prototypical networks for few-shot learning. arXiv 2017, arXiv:1703.05175.
- Kaya, M.; Bilge, H.Ş. Deep metric learning: A survey. Symmetry 2019, 11, 1066.
- Boudiaf, M.; Masud, Z.I.; Rony, J.; Dolz, J.; Piantanida, P.; Ayed, I.B. Transductive information maximization for few-shot learning. arXiv 2020, arXiv:2008.11297.
- Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999.
- Hou, R.; Chang, H.; Ma, B.; Shan, S.; Chen, X. Cross attention network for few-shot classification. arXiv 2019, arXiv:1910.07677.
- Dhillon, G.S.; Chaudhari, P.; Ravichandran, A.; Soatto, S. A baseline for few-shot image classification. arXiv 2019, arXiv:1909.02729.
- Kim, J.; Kim, T.; Kim, S.; Yoo, C.D. Edge-labeling graph neural network for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 11–20.
- Ma, Y.; Bai, S.; An, S.; Liu, W.; Liu, A.; Zhen, X.; Liu, X. Transductive Relation-Propagation Network for Few-shot Learning. In Proceedings of the 2020 International Joint Conference on Artificial Intelligence, Yokohama, Japan, 11–17 July 2020; pp. 804–810.
- Qiao, L.; Shi, Y.; Li, J.; Wang, Y.; Huang, T.; Tian, Y. Transductive episodic-wise adaptive metric for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 3603–3612.
- Hao, S.; Ge, F.-X.; Li, Y.; Jiang, J. Multisensor bearing fault diagnosis based on one-dimensional convolutional long short-term memory networks. Measurement 2020, 159, 107802.
- Wang, D.; Zhang, M.; Xu, Y.; Lu, W.; Yang, J.; Zhang, T. Metric-based meta-learning model for few-shot fault diagnosis under multiple limited data conditions. Mech. Syst. Signal Process. 2021, 155, 107510.
- Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 1180–1189.
- Smith, W.A.; Randall, R.B. Rolling element bearing diagnostics using the Case Western Reserve University data: A benchmark study. Mech. Syst. Signal Process. 2015, 64, 100–131.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Wu, J.; Zhao, Z.; Sun, C.; Yan, R.; Chen, X. Few-shot transfer learning for intelligent fault diagnosis of machine. Measurement 2020, 166, 108202.
- Zhang, S.; Ye, F.; Wang, B.; Habetler, T. Few-Shot Bearing Fault Diagnosis Based on Model-Agnostic Meta-Learning. IEEE Trans. Ind. Appl. 2021, 57, 4754–4764.
- Zhang, A.; Li, S.; Cui, Y.; Yang, W.; Dong, R.; Hu, J. Limited data rolling bearing fault diagnosis with few-shot learning. IEEE Access 2019, 7, 110895–110904.
- Vinyals, O.; Blundell, C.; Lillicrap, T.; Wierstra, D. Matching networks for one shot learning. Adv. Neural Inf. Process. Syst. 2016, 29, 3630–3638.
- Sung, F.; Yang, Y.; Zhang, L.; Xiang, T.; Torr, P.H.S.; Hospedales, T.M. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1199–1208.
Methods | Domains | Source and Target Task | Source Label | Target Label |
---|---|---|---|---|
Traditional DL | Same | Same | Available | Available |
TL | Different | Same or related | Available | Unavailable |
FSL | Same or different | Different | Available | Limited labels |
Bearing Health Condition | Fault Diameter (in.) | Number of Samples | Category Abbreviation
---|---|---|---
Normal | / | 200 | N
Inner race fault | 0.007 | 200 | I1
| 0.014 | 200 | I2
| 0.021 | 200 | I3
Ball fault | 0.007 | 200 | B1
| 0.014 | 200 | B2
| 0.021 | 200 | B3
Outer race fault | 0.007 | 200 | O1
| 0.014 | 200 | O2
| 0.021 | 200 | O3
Gearbox Health Condition | Fault Diameter (mm) | Number of Samples | Category Abbreviation
---|---|---|---|
Normal | / | 200 | Nor |
Sun gear fault | 1 | 200 | SF |
Planet gear fault | 1 | 200 | PF |
Ring gear fault | 1 | 200 | RF |
Symbol | Layer | Output Size | Parameter | #Param
---|---|---|---|---
Input | Input | [1 × 1024] | / | /
Layer1 (ConvB1) | Conv1D | [8 × 505] | kernel size = 16, stride = 2 | 136
| BN-ReLU | [8 × 505] | / | 16
| MaxPool1D | [8 × 252] | kernel size = 2, stride = 2 | /
Layer2 (ConvB2) | Conv1D | [16 × 125] | kernel size = 3, stride = 2 | 400
| BN-ReLU | [16 × 125] | / | 32
| MaxPool1D | [16 × 62] | kernel size = 2, stride = 2 | /
Layer3 (ConvB3) | Conv1D | [32 × 30] | kernel size = 3, stride = 2 | 1568
| BN-ReLU | [32 × 30] | / | 64
| MaxPool1D | [32 × 15] | kernel size = 2, stride = 2 | /
Layer4 (ConvB4) | Conv1D | [64 × 7] | kernel size = 3, stride = 2 | 6208
| BN-ReLU | [64 × 7] | / | 128
| MaxPool1D | [64 × 3] | kernel size = 2, stride = 2 | /
Output | Flatten | 192 | / | /
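The table above maps one-to-one onto a small PyTorch module. The following sketch reproduces the listed layer shapes and parameter counts exactly; only the module and variable names are our own.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """1-D CNN matching the layer table: four Conv1D-BN-ReLU-MaxPool blocks
    mapping a [1 x 1024] vibration segment to a 192-D representation."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Layer1 (ConvB1): [1 x 1024] -> [8 x 505] -> [8 x 252]; 136 + 16 params
            nn.Conv1d(1, 8, kernel_size=16, stride=2),
            nn.BatchNorm1d(8), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            # Layer2 (ConvB2): -> [16 x 125] -> [16 x 62]; 400 + 32 params
            nn.Conv1d(8, 16, kernel_size=3, stride=2),
            nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            # Layer3 (ConvB3): -> [32 x 30] -> [32 x 15]; 1568 + 64 params
            nn.Conv1d(16, 32, kernel_size=3, stride=2),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            # Layer4 (ConvB4): -> [64 x 7] -> [64 x 3]; 6208 + 128 params
            nn.Conv1d(32, 64, kernel_size=3, stride=2),
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2),
            nn.Flatten(),  # 64 x 3 = 192-D output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```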
Symbol | Layer | Output Size | Parameter | #Param
---|---|---|---|---
Layer1 | FC1 | 64 | in_features = 192, out_features = 64 | 12,352
Layer2 | ReLU | 64 | / | /
Layer3 | FC2 | 2 | in_features = 64, out_features = 2 | 130
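The two-FC-layer discriminator above, combined with a gradient reversal layer (GRL) in the spirit of Ganin and Lempitsky [38], might be sketched as follows. The GRL placement and the handling of the penalty λ are assumptions; Section 3.2 defines the actual adversarial objective.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer [38]: identity in the forward pass,
    multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    """FC1(192->64)-ReLU-FC2(64->2) head from the table above, preceded by a GRL
    so that minimizing its loss drives the extractor toward domain confusion."""
    def __init__(self, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.fc1 = nn.Linear(192, 64)   # 12,352 parameters
        self.fc2 = nn.Linear(64, 2)     # 130 parameters

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z = GradReverse.apply(z, self.lambd)
        return self.fc2(torch.relu(self.fc1(z)))
```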
Parameters | Value | Parameters | Value |
---|---|---|---|
Learning rate | 0.001 | Support samples per category (K) | 1, 3, 5, or 10 |
Decay rate | 0.1 | Query samples per category (M) | 200 |
Maximum epochs | 50 | Episodes of source and target tasks | 100 |
Decay epoch | 15, 30 | Z of PRS | 20 |
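Read as a training configuration, the table corresponds to an Adam [41] optimizer with a step decay. A minimal sketch, assuming the feature extractor and domain discriminator from the earlier sketches are optimized jointly:

```python
import torch

# Joint optimization of both trainable modules (names from the earlier sketches).
params = list(feature_extractor.parameters()) + list(discriminator.parameters())
optimizer = torch.optim.Adam(params, lr=0.001)          # learning rate = 0.001
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[15, 30], gamma=0.1)          # decay rate 0.1 at epochs 15 and 30

for epoch in range(50):                                 # maximum epochs = 50
    # ... run the 100 training episodes for this epoch ...
    scheduler.step()
```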
Dataset | Source Domain | Target Domain | Source Categories | Target Categories | Task Abbr. |
---|---|---|---|---|---|
Bearing | 0 hp | 1 hp | All health conditions | All health conditions | AB1 |
Bearing | 1 hp | 2 hp | All health conditions | All health conditions | AB2 |
Bearing | 2 hp | 3 hp | All health conditions | All health conditions | AB3 |
Gearbox | 50 Nm | 150 Nm | All health conditions | All health conditions | AG1 |
Gearbox | 150 Nm | 250 Nm | All health conditions | All health conditions | AG2 |
Gearbox | 250 Nm | 50 Nm | All health conditions | All health conditions | AG3 |
Methods | AB1 | AB2 | AB3 | AG1 | AG2 | AG3
---|---|---|---|---|---|---|
Baseline | 81.06% | 81.01% | 81.41% | 93.72% | 92.46% | 90.08% |
BaselinePlus | 86.03% | 86.43% | 85.28% | 96.23% | 96.11% | 95.98% |
SiameseNet | 79.71% | 80.34% | 79.87% | 97.15% | 95.71% | 96.39% |
MAML | 78.99% | 80.14% | 82.37% | 95.73% | 95.37% | 95.13% |
ProtoNet | 83.86% | 83.11% | 84.28% | 82.74% | 82.25% | 81.61% |
MatchingNet | 82.64% | 85.95% | 84.79% | 87.22% | 88.17% | 85.45% |
RelationNet | 81.95% | 83.79% | 83.67% | 88.05% | 87.12% | 87.76% |
DAPRN | 88.16% | 89.94% | 89.19% | 99.98% | 98.94% | 98.51% |
Dataset | Source Domain | Target Domain | Source Categories | Target Categories | Task Abbr. |
---|---|---|---|---|---|
Bearing | 0 hp | 1 hp | N, I1, I2, I3, B1, B2, B3 | O1, O2, O3 | BB1 |
Bearing | 1 hp | 2 hp | N, I2, I3, B2, B3, O2, O3 | I1, O1, B1 | BB2 |
Bearing | 2 hp | 3 hp | N, I1, B1, B2, B3, O1 | I2, I3, O2, O3 | BB3 |
Gearbox | 50 Nm | 150 Nm | Nor, SF | PF, RF | BG1 |
Gearbox | 150 Nm | 250 Nm | Nor, SF, PF | Nor, PF, RF | BG2 |
Gearbox | 250 Nm | 50 Nm | Nor, SF, PF, RF | SF, PF, RF | BG3 |
Methods | BB1 | BB2 | BB3 | BG1 | BG2 | BG3
---|---|---|---|---|---|---|
Baseline | 80.57% | 96.82% | 60.05% | 53.27% | 79.56% | 88.94% |
BaselinePlus | 82.75% | 97.65% | 57.91% | 52.26% | 76.05% | 95.64% |
SiameseNet | 89.37% | 96.58% | 88.49% | 71.14% | 78.80% | 89.92% |
MAML | 93.39% | 91.60% | 69.28% | 65.21% | 66.40% | 92.05% |
ProtoNet | 80.19% | 94.64% | 75.01% | 56.14% | 68.66% | 86.15% |
MatchingNet | 82.84% | 96.53% | 70.68% | 81.10% | 69.29% | 89.26% |
RelationNet | 92.16% | 90.27% | 85.37% | 67.33% | 75.80% | 95.11% |
DAPRN | 99.80% | 99.71% | 94.02% | 98.65% | 90.19% | 96.53% |
Depth of Feature Extractor | 1 | 2 | 3 | 4 |
---|---|---|---|---|
Accuracy on task BB1 | 61.56% ± 3.47% | 79.71% ± 2.13% | 86.87% ± 1.36% | 99.44% ± 0.79% |
Accuracy on task BG1 | 51.00% ± 0.94% | 62.68% ± 3.69% | 93.35% ± 1.87% | 98.09% ± 0.80% |
#Param | 152 | 584 | 2216 | 8552 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).