Abstract: Vision transformers (ViTs) that leverage the self-attention mechanism have shown superior performance on many classical vision tasks compared to convolutional neural networks (CNNs) and have gained increasing popularity recently.
The proposed LB-ABFT and range-based approaches are applied to protect ViTs against various soft errors. The accuracy loss relative to a clean model is set to be ...
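The ABFT (algorithm-based fault tolerance) idea referenced above can be illustrated on a plain matrix multiply: carry a checksum row through the computation and compare it against the result. This is a minimal sketch with NumPy, not the paper's LB-ABFT scheme; the function name and tolerance are illustrative assumptions.

```python
import numpy as np

def abft_matmul(A, B, tol=1e-6):
    """Matrix multiply with an ABFT-style column checksum (illustrative).

    A checksum row (the column sums of A) is carried through the
    multiplication; a mismatch in the result flags a soft error.
    """
    # Augment A with a checksum row holding the sum of each column.
    A_chk = np.vstack([A, A.sum(axis=0)])
    C_chk = A_chk @ B
    C, checksum_row = C_chk[:-1], C_chk[-1]
    # Absent errors, the carried checksum equals the column sums of C.
    if not np.allclose(C.sum(axis=0), checksum_row, atol=tol):
        raise RuntimeError("soft error detected in matmul result")
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
C = abft_matmul(A, B)  # passes the checksum test on a clean run
```

A bit flip that corrupts an entry of the product (but not the checksum row) breaks the column-sum identity, which is what the detection step exploits.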
Jan 31, 2024 · The investigation reveals that ViTs with the self-attention mechanism are generally more resilient in linear computations including general matrix– ...
Soft Error Reliability Analysis of Vision Transformers. Xue, X., Liu, C., Wang, Y., Yang, B., Luo, T., Zhang, L., Li, H., & Li, X. IEEE Transactions on Very ...
In this work, we propose Trade-off between Robustness and Accuracy of Vision Transformers (TORA-ViTs) to achieve utility and reliability at the same time. TORA-ViTs ...
To enable this, we first perform a soft error vulnerability analysis of every fully connected layer in Transformer computations. Based on this study, error ...
Jun 11, 2024 · Research has shown that DNNs are prone to soft errors, with instances of single-bit flips leading to faulty inferences [21, 22, 23, 18] .
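The single-bit-flip faults mentioned above are typically modeled by toggling one bit of a stored parameter. A minimal sketch of such fault injection on a float32 value, assuming NumPy (the `flip_bit` helper is hypothetical, not from the cited works):

```python
import numpy as np

def flip_bit(x, bit):
    """Flip one bit of a float32 scalar (illustrative fault injection)."""
    as_int = np.float32(x).view(np.uint32)      # reinterpret bits as uint32
    return (as_int ^ np.uint32(1 << bit)).view(np.float32)

w = np.float32(0.5)
# Flipping a high exponent bit (bit 30) changes the magnitude drastically,
# the kind of corruption that can lead to a faulty inference.
corrupted = flip_bit(w, 30)
```

Exponent-bit flips like this one are far more damaging than mantissa flips, which is why range-based detectors (clipping activations to an expected interval) catch most of the harmful cases.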
Sep 7, 2020 · In this work, we propose an analytical model named SERN to analyze the soft error reliability of CNNs, which requires only a small number of ...
Independent of this transformation, users will expect the same (or higher) level of reliability of the infrastructure. In this paper, we consider physical ...