
An Online Tool for Semi-Automatically Annotating Music Scores for Optical Music Recognition

Published: 27 June 2024

Abstract

This paper describes OMRAT, an online tool for semi-automatically annotating music scores for Optical Music Recognition (OMR) systems. OMRAT applies deep neural networks, machine learning, and music notation ontologies at successive stages to detect musical objects, establish relationships between them, and convert the result into the machine-readable MEI format, respectively. A human editor then verifies the output of the recognition stage, correcting errors and removing incorrect labels as needed. The tool can produce training and testing datasets for OMR systems, and its output may also serve notation editors or audio synthesizers.
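The three-stage pipeline the abstract outlines (object detection, relationship assembly, MEI export) can be sketched roughly as follows. This is a minimal illustrative sketch, not OMRAT's actual API: every class, function, and attribute name here is a hypothetical stand-in, and the detector stage is stubbed with fixed boxes where a neural network would run.

```python
# Hypothetical sketch of a three-stage semi-automatic OMR pipeline:
# (1) detect musical objects, (2) infer relationships between them,
# (3) serialize to an MEI-flavoured XML fragment.
from dataclasses import dataclass, field
import xml.etree.ElementTree as ET

@dataclass
class MusicObject:
    label: str                  # e.g. "notehead", "stem"
    bbox: tuple                 # (x, y, w, h) in page pixels
    relations: list = field(default_factory=list)

def detect_objects(page_image):
    """Stage 1: stand-in for a neural object detector."""
    return [MusicObject("notehead", (100, 50, 10, 8)),
            MusicObject("stem", (108, 20, 2, 38))]

def link_objects(objects, max_gap=5):
    """Stage 2: attach each stem to a horizontally adjacent notehead."""
    heads = [o for o in objects if o.label == "notehead"]
    stems = [o for o in objects if o.label == "stem"]
    for h in heads:
        for s in stems:
            # link if the stem starts within max_gap px of the head's right edge
            if abs((h.bbox[0] + h.bbox[2]) - s.bbox[0]) <= max_gap:
                h.relations.append(s)
    return objects

def to_mei(objects):
    """Stage 3: emit a minimal MEI-like encoding (illustrative attributes)."""
    root = ET.Element("music")
    layer = ET.SubElement(root, "layer")
    for o in objects:
        if o.label == "notehead":
            note = ET.SubElement(layer, "note")
            note.set("stem.attached", str(bool(o.relations)))
    return ET.tostring(root, encoding="unicode")

objs = link_objects(detect_objects(page_image=None))
print(to_mei(objs))
```

In the semi-automatic workflow the abstract describes, a human editor would review and correct the stage-1 and stage-2 output before the final export runs.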



    Published In

    DLfM '24: Proceedings of the 11th International Conference on Digital Libraries for Musicology
    June 2024
    83 pages
    ISBN:9798400717208
    DOI:10.1145/3660570
    • Editor:
    • David M. Weigl

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. MEI
    2. OMR
    3. Optical Music Recognition
    4. deep neural networks
    5. music encoding
    6. musical documents annotation
    7. ontology

    Qualifiers

    • Short-paper
    • Research
    • Refereed limited

    Funding Sources

    • Smart Growth Operational Programme 2014–2020, in the area "Development of modern research infrastructure of the science sector"

    Conference

    DLfM 2024

    Acceptance Rates

    Overall Acceptance Rate 27 of 48 submissions, 56%

