Abstract
Sketch2image translation of shoes generates natural shoe images that match the content and style of an input sketch, which has important application value in apparel e-commerce and design assistance. Training such translation models requires paired sketches and real shoe images; however, existing shoe datasets are small in scale and lack a clear classification scheme, which hinders model training. In this paper, we propose ShoeMaster, a benchmark for sketch2image translation of shoes, with the aim of facilitating research on this task and providing a technical benchmark for fairly evaluating the progress of related work. ShoeMaster provides a large-scale dataset of everyday shoes covering all categories of our proposed shoe classification knowledge hierarchy, containing more than 50,000 real shoe images and corresponding sketches in different drawing styles. To comprehensively evaluate the quality of generated images, we propose evaluation metrics covering both qualitative and quantitative aspects. Based on ShoeMaster, we conduct comparison experiments on three state-of-the-art sketch2image models; the experimental results and analysis demonstrate the effectiveness of ShoeMaster. The ShoeMaster benchmark, including the dataset, the evaluation questionnaire, and the metric-computation source code, will be released at https://github.com/202oranger/ShoeMaster.
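As an illustration of how the quantitative side of such an evaluation is commonly computed, the sketch below scores generated images against real ones with FID and LPIPS, two metrics widely used for image synthesis. This is a minimal sketch, not the ShoeMaster release code: it assumes the torchmetrics and lpips packages, and the tensors real_imgs and fake_imgs are hypothetical stand-ins for the benchmark data.

```python
# Minimal sketch of quantitative evaluation for generated shoe images.
# Assumes the torchmetrics and lpips packages are installed; real_imgs and
# fake_imgs are hypothetical stand-ins for real and generated image batches.
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance

# Dummy stand-ins: batches of 3x256x256 images, shape (N, C, H, W).
real_imgs = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
fake_imgs = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)

# FID: compares Inception feature statistics of real vs. generated images.
# feature=64 keeps this toy example stable; full evaluations typically use 2048.
fid = FrechetInceptionDistance(feature=64)
fid.update(real_imgs, real=True)
fid.update(fake_imgs, real=False)
print("FID:", fid.compute().item())

# LPIPS: perceptual distance between paired images; expects floats in [-1, 1].
loss_fn = lpips.LPIPS(net="alex")
real_f = real_imgs.float() / 127.5 - 1.0
fake_f = fake_imgs.float() / 127.5 - 1.0
print("LPIPS:", loss_fn(real_f, fake_f).mean().item())
```

In practice the real and generated batches would be loaded from the benchmark's test split rather than generated randomly, and the scores averaged over the full evaluation set.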
Acknowledgement
This research was supported by grants from the 2022 Undergraduate Training Program for Innovation and Entrepreneurship of Beijing Institute of Fashion Technology (No. 20223060321), the 2022 Postgraduate Education Quality Improvement Special Project and General Teaching Reform Project of Beijing Institute of Fashion Technology (No. 120301990132), the Natural Science Foundation of China (No. 62062058), the General Program of the Science and Technology Development Project of the Beijing Municipal Education Commission (No. KM202210012002), and the Graduate Education Quality Improvement Project of Beijing Institute of Fashion Technology (No. NHFZ20220206).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xu, S., Shi, Y., Feng, T., Yuan, H. (2023). ShoeMaster: A Benchmark for Sketch2Image Translation of Shoes. In: Gainaru, A., Zhang, C., Luo, C. (eds) Benchmarking, Measuring, and Optimizing. Bench 2022. Lecture Notes in Computer Science, vol 13852. Springer, Cham. https://doi.org/10.1007/978-3-031-31180-2_4
DOI: https://doi.org/10.1007/978-3-031-31180-2_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-31179-6
Online ISBN: 978-3-031-31180-2
eBook Packages: Computer Science, Computer Science (R0)