DOI: 10.1145/3552464.3555684

Deep Level Annotation for Painter Attribution on Greek Vases utilizing Object Detection

Published: 10 October 2022

Abstract

Painter attribution is based on a variety of factors, oftentimes deeply buried in details such as the brushstrokes of the ears or the eyes, which a painter might render in a specific way. To get to these details, the images have to be examined carefully and intensively. Our work focuses on this phenomenon of painter attribution, investigating those details using supervised machine learning methods for image recognition that rely on a set representation. In this paper, however, we focus on one step of our work specifically: the annotation process. With such a focus on details, a dense and detailed, yet transparent annotation of the images is necessary. On the one hand, this is essential for our research; on the other hand, it is very time-consuming and requires a lot of human resources. We therefore developed an ontology for the annotation of the images and a semi-automated workflow with an object detection component based on YOLOv3 and closely tied to our ontology. In this way, we were able to automate our processes as efficiently as possible while maintaining the complexity of our annotations.
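To illustrate how such a semi-automated pre-annotation step can look in practice, the sketch below runs a YOLOv3 model through OpenCV's DNN module and emits candidate bounding boxes for an annotator to confirm or correct. This is a minimal sketch under stated assumptions, not the authors' actual pipeline: the file names (yolov3-vases.cfg, yolov3-vases.weights, classes.txt), the confidence and NMS thresholds, and the JSON output format are placeholders chosen for illustration.

# Minimal sketch: YOLOv3 pre-annotation via OpenCV's DNN module.
# File names, thresholds, and output format are assumptions, not the paper's setup.
import json
import cv2

CONF_THRESHOLD = 0.5   # assumed minimum detection confidence
NMS_THRESHOLD = 0.4    # assumed non-maximum suppression overlap threshold

def preannotate(image_path, cfg="yolov3-vases.cfg",
                weights="yolov3-vases.weights", names="classes.txt"):
    """Return candidate boxes for human review as a list of dicts."""
    with open(names) as f:
        classes = [line.strip() for line in f]
    net = cv2.dnn.readNetFromDarknet(cfg, weights)

    image = cv2.imread(image_path)
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for det in output:               # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            class_id = int(scores.argmax())
            confidence = float(scores[class_id])
            if confidence < CONF_THRESHOLD:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
    keep = keep.flatten() if len(keep) > 0 else []
    return [{"label": classes[class_ids[i]], "bbox": boxes[i],
             "score": confidences[i], "verified": False}
            for i in keep]

if __name__ == "__main__":
    candidates = preannotate("vase.jpg")
    # Dump proposals as JSON; converting them to the import format of an
    # annotation tool such as CVAT or VIA is left out of this sketch.
    print(json.dumps(candidates, indent=2))

The key design point of such a workflow is that the detector only proposes regions; the ontology-controlled labels and the final decision remain with the human annotator, which keeps the annotations both dense and transparent.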


Cited By

  • (2024) Hand Gesture Recognition in Buddhist Art Images: Evaluation of a Keypoint-based Approach. In Proceedings of the 6th workshop on the analySis, Understanding and proMotion of heritAge Contents, 34-40. https://doi.org/10.1145/3689094.3689464. Online publication date: 28 October 2024.

Information

Published In

SUMAC '22: Proceedings of the 4th ACM International workshop on Structuring and Understanding of Multimedia heritAge Contents
October 2022
55 pages
ISBN:9781450394949
DOI:10.1145/3552464
General Chairs: Valerie Gouet-Brunet, Ronak Kosti, Li Weng


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. datasets
  2. deep learning
  3. neural networks
  4. object detection

Qualifiers

  • Research-article


Conference

MM '22

Acceptance Rates

SUMAC '22 Paper Acceptance Rate: 5 of 6 submissions, 83%
Overall Acceptance Rate: 5 of 6 submissions, 83%


Article Metrics

  • Downloads (Last 12 months): 45
  • Downloads (Last 6 weeks): 0

Reflects downloads up to 28 Dec 2024

