Then, we propose a general strategy named mixup inference in training, which adopts a simple decoupling principle for recovering the outputs of raw samples at ...
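The snippet truncates before the decoupling principle is spelled out. One plausible reading, sketched below under that assumption, is that a mixed input's output approximately decouples into the same convex combination of the raw samples' outputs, so mixing each pair twice, with weights λ and 1−λ, yields two linear equations that can be solved for the raw outputs. The helper name and the closed-form solve are illustrative, not the paper's implementation.

```python
import numpy as np

def recover_raw_outputs(z1, z2, lam):
    """Hypothetical decoupling sketch: assume
        z1 ≈ lam * p_i + (1 - lam) * p_j   (pair mixed with weight lam)
        z2 ≈ (1 - lam) * p_i + lam * p_j   (same pair, weights swapped)
    and solve the two linear equations for the raw outputs p_i, p_j.
    The system is singular at lam = 0.5, where the two mixes coincide."""
    denom = 2.0 * lam - 1.0
    p_i = (lam * z1 - (1.0 - lam) * z2) / denom
    p_j = (lam * z2 - (1.0 - lam) * z1) / denom
    return p_i, p_j
```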
By simply taking convex combinations between pairs of samples and their labels, mixup training has been shown to easily improve predictive accuracy.
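This convex-combination recipe is the standard mixup of Zhang et al. (2018); a minimal NumPy sketch is below. The Beta(α, α) sampling and the default α = 0.2 are common conventions, not details taken from the snippet.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Standard mixup: convex combinations of a batch with a shuffled
    copy of itself; one-hot labels are mixed with the same coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient lambda
    perm = rng.permutation(len(x))          # random pairing of samples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam
```

Training then minimizes the usual cross-entropy loss on (x_mix, y_mix) rather than on the raw batch.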
Our experiments show this strategy properly solves mixup's calibration issue without sacrificing predictive performance, and even improves accuracy over ...
To mitigate this problem, we first investigate the mixup inference strategy and find that, although it improves the calibration of mixup, this ensemble-like strategy ...
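The snippet describes mixup inference only as ensemble-like; a hedged sketch of one common form, mixing the test input with randomly drawn reference samples and averaging the model's predictions, is below. The function and parameter names (predict, pool, n_draws) are illustrative assumptions.

```python
import numpy as np

def mixup_inference(predict, x, pool, lam=0.5, n_draws=10, rng=None):
    """Ensemble-like mixup at test time (sketch): mix the input with
    several reference samples and average the resulting predictions."""
    rng = rng or np.random.default_rng()
    preds = []
    for _ in range(n_draws):
        x_s = pool[rng.integers(len(pool))]        # random reference sample
        preds.append(predict(lam * x + (1.0 - lam) * x_s))
    return np.mean(preds, axis=0)                  # average over the ensemble
```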
This is the implementation of our CVPR'23 paper On the Pitfall of Mixup for Uncertainty Calibration. In the paper, we conduct a series of empirical studies ...
Appendix for: On the Pitfall of Mixup for Uncertainty Calibration. Contents: 1. Calibration Metrics; 2. Temperature Scaling; 3. Details of our Approaches.
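The appendix headings mention calibration metrics and temperature scaling; both are standard tools, sketched below. The expected calibration error (ECE) bins predictions by confidence and averages the |accuracy − confidence| gap, while temperature scaling (Guo et al., 2017) divides logits by a scalar T fitted on a validation set. The bin count of 15 is a common default, not a detail taken from the appendix.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: size-weighted average of |accuracy - confidence| per bin."""
    conf = probs.max(axis=1)                       # predicted confidence
    acc = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(acc[mask].mean() - conf[mask].mean())
    return ece

def temperature_scale(logits, T):
    """Temperature scaling: soften (T > 1) or sharpen (T < 1) the
    softmax by dividing logits by a scalar fitted on validation data."""
    z = logits / T
    z -= z.max(axis=1, keepdims=True)              # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)
```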
Mixup is a popular technique used in machine learning to improve model accuracy. However, it has been found that mixup can make models less ...