Multi-Branch Enhanced Discriminative Network for Vehicle Re-Identification

Published: 17 October 2023 (IEEE Press)

Abstract

Vehicle re-identification (ReID) is the task of identifying the same vehicle across numerous cameras. This is a complex classification task, and fine-grained information and strongly discriminative features have proven effective in handling it. However, most existing methods focus on extracting local area features in combination with global features, while extracting subtle distinguishing features, which is difficult, remains an open problem. In this paper, we propose a multi-branch enhanced discriminative network (MED) that better extracts subtle distinguishing features with high discriminative power to improve ReID performance. In the proposed MED method, each feature map obtained by a convolutional neural network (CNN) is divided into four spatial sub-maps; on each sub-map, a vertical and a horizontal branch extract the subtle distinguishing features intrinsically contained in its sub-areas. These vertical and horizontal branches are combined with a global branch to perform the ReID task. Moreover, the proposed method extracts rich fine-grained features without the need for extra manual annotation while maintaining a simple structure. We conducted extensive experiments on the vehicle ReID datasets VehicleID and VeRi-776, showing that the proposed MED method outperforms most existing methods. Further, applying MED directly to pedestrian ReID on the Market-1501, DukeMTMC, and MSMT17 datasets achieves state-of-the-art (SOTA) performance as well, demonstrating that the proposed method generalizes well and can be flexibly applied to ReID tasks.
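To make the branch structure concrete, the PyTorch sketch below illustrates the kind of partitioning the abstract describes: a 2x2 split of the backbone feature map into four spatial sub-maps, a vertical and a horizontal stripe-pooling branch per sub-map, and a global branch over the whole map. This is a minimal sketch assuming a standard CNN backbone; the module name MultiBranchHead, the stripe count, the pooling choices, and the shared per-branch embeddings are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiBranchHead(nn.Module):
    """Hypothetical head sketching the abstract's description: divide a
    backbone feature map into 4 spatial sub-maps, pool each along a
    vertical and a horizontal branch, and add a global branch."""

    def __init__(self, in_channels: int, embed_dim: int = 256, n_stripes: int = 2):
        super().__init__()
        self.n_stripes = n_stripes
        # Separate embeddings for the global, vertical, and horizontal branches
        # (an assumed design choice; the paper may use different projections).
        self.embed_global = nn.Linear(in_channels, embed_dim)
        self.embed_vert = nn.Linear(in_channels * n_stripes, embed_dim)
        self.embed_horiz = nn.Linear(in_channels * n_stripes, embed_dim)

    def forward(self, fmap: torch.Tensor) -> list[torch.Tensor]:
        n, c, h, w = fmap.shape
        # Divide the feature map into a 2x2 grid of spatial sub-maps.
        subs = [
            fmap[:, :, : h // 2, : w // 2], fmap[:, :, : h // 2, w // 2:],
            fmap[:, :, h // 2:, : w // 2], fmap[:, :, h // 2:, w // 2:],
        ]
        # Global branch: one descriptor for the whole map.
        feats = [self.embed_global(F.adaptive_avg_pool2d(fmap, 1).flatten(1))]
        for s in subs:
            # Vertical branch: horizontal stripes, preserving variation along height.
            v = F.adaptive_avg_pool2d(s, (self.n_stripes, 1)).flatten(1)
            feats.append(self.embed_vert(v))
            # Horizontal branch: vertical stripes, preserving variation along width.
            hz = F.adaptive_avg_pool2d(s, (1, self.n_stripes)).flatten(1)
            feats.append(self.embed_horiz(hz))
        return feats  # one embedding per branch
```

In a full ReID model, each returned embedding would typically feed its own identity classifier and a metric loss (e.g., triplet loss), as is common in multi-branch ReID networks; those training details are not specified in this abstract.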

