Tiziana Possemato

    The proposed model can be seen as a sort of 'maturity model'. The list starts with an entry-level item, describing a more traditional scenario, and ends with a high-level item set in a future technological scenario (but one already under construction). For each item the library should assign a score, used to evaluate the degree to which vendors have adopted and supported the BIBFRAME specifications. This means that each library, depending on its nature, its mission, its type of user, and so on, can choose which specific components (or component!) to mark as 'important' and assign them a score. The document thus proposes a set of 'building blocks' that can assume different values according to each library's specific needs.
    Libraries’ shift to the semantic web has been underway for a number of years. Mellon-funded projects such as Linked Data for Production (LD4P) [1] and the BIBFRAME European Workshop 2018 in Florence [2] show the commitment of national, public, and academic libraries, as well as vendors, to this transition. Libraries worldwide, however, are enmeshed in hundreds of millions of metadata records communicated through flat files (the MARC formats) [3]. The shift to linked data will require the conversion of these flat files to a semantically expressive model such as the Resource Description Framework (RDF) [4]. The conversion of such large amounts of semantically inexpressive data to semantically rich data will require automated enhancements in the conversion process. Data hidden within the flat files, such as role (author, illustrator, composer, etc.), can greatly aid the reconciliation of entities within those files. Authify is one of the first tools available to libraries to both c...
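The enrichment described above, surfacing role information hidden in MARC relator subfields to aid entity reconciliation, can be sketched in a few lines. This is an illustrative sketch only, not the Authify implementation; the simplified record structure and the small relator-code subset are assumptions for the example.

```python
# Hypothetical sketch: extract contributor roles from simplified MARC
# name fields, using relator codes ($4) or relator terms ($e).
# A small, illustrative subset of the Library of Congress relator codes.
RELATOR_CODES = {"aut": "author", "ill": "illustrator", "cmp": "composer"}

def extract_roles(record):
    """Yield (heading, role) pairs from simplified 1XX/7XX name fields."""
    for field in record:
        if field["tag"] not in ("100", "700"):
            continue
        name = field["subfields"].get("a", "").rstrip(",. ")
        code = field["subfields"].get("4")
        term = field["subfields"].get("e", "").rstrip(",. ")
        # Prefer the coded role; fall back to the textual term.
        role = RELATOR_CODES.get(code, term or "contributor")
        yield name, role

# A record represented as a list of dicts (an assumed, simplified shape).
record = [
    {"tag": "100", "subfields": {"a": "Verdi, Giuseppe,", "4": "cmp"}},
    {"tag": "700", "subfields": {"a": "Boito, Arrigo,", "e": "librettist."}},
]
print(list(extract_roles(record)))
```

Pairing a name with its role in this way gives a reconciliation step one more piece of evidence when deciding whether two headings refer to the same entity.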
    The PCC Task Group will report on its work from the inception of the TG to date. This will include a technical discussion of the entification of MARC library data and comments on the status of existing infrastructure within the libraries to support this work. Motivation: the PCC Task Group on URIs in MARC [1], established by the Steering Committee in September 2015, was charged to: (1) identify and investigate issues surrounding the use of identifiers in MARC records; (2) develop guidelines and policies; (3) formulate work plans to implement identifiers in $0 and other fields and/or subfields in ILSs and PCC-affiliated utilities; (4) in consultation with the MARC Advisory Committee, technologists, and other stakeholders, take appropriate measures to ensure library data will transition smoothly from the MARC environment to a web-based and linked open data platform or service. To that end, the Task Group conducted pilot tests throughout 2016, and compiled findings and comments that serve...
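The core of charge (3), attaching an identifier for a controlled heading in subfield $0, can be illustrated with a minimal sketch. The lookup table, the example.org URI, and the simplified field structure are all hypothetical; the actual pilots worked with real MARC records and authority files.

```python
# Hypothetical sketch: add a $0 URI subfield to a simplified MARC field
# when the heading is found in a (made-up) authority lookup table.
AUTHORITY_URIS = {
    "Verdi, Giuseppe, 1813-1901": "http://example.org/authorities/n0001",
}

def add_uri_subfield(field):
    """Return a copy of the field with $0 added when the heading is known."""
    heading = field["subfields"].get("a", "")
    uri = AUTHORITY_URIS.get(heading)
    if uri and "0" not in field["subfields"]:
        # Build a new dict rather than mutating the input field.
        field = {**field, "subfields": {**field["subfields"], "0": uri}}
    return field

field = {"tag": "100", "subfields": {"a": "Verdi, Giuseppe, 1813-1901"}}
enriched = add_uri_subfield(field)
print(enriched["subfields"]["0"])
```

Once a $0 URI is present, a conversion pipeline no longer has to match heading strings: the identifier itself carries the link to the entity.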
    The digital library of the Museo Galileo was established in 2004 to offer a consultation service for rare and important works to the community of historians of science. The analysis of the specific needs of the Museo Galileo subsequently changed this objective, underlining the need for a much more ambitious project. This new phase was aimed at creating an information system suited to collecting and enhancing different types of digital resources (images, texts, videos, sounds, animations, etc.). The MINERV@ project fits into this context, producing a dataset according to Linked Open Data principles entitled “Banca dati Museo Galileo: strumenti, libri, fotografie, documenti”. ALIADA is the open-source data pre-processing framework used in the MINERV@ project, capable of automating the entire process of creating and publishing linked open data regardless of the source format of the data...
    This work deals with entities understood as real-world objects, and with how this concept is used in entity modeling, the process of identifying and modeling entities that plays such a large part in projects converting catalogues into linked open data. Object-oriented programming helps us understand what this concept expresses in the bibliographic universe, as it introduces the notion of modeling and managing an 'object' by defining its state and behavior. But to model an object it is first necessary to identify it, and this process must often be carried out over very large amounts of data that are not necessarily homogeneously structured: Entity Resolution is the set of machine processes that attempts to resolve the ambiguities arising from the heterogeneity of the descriptions referable to the same entity. The adoption of these practices in the bibliographic field ...
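The Entity Resolution step described above can be sketched as clustering descriptions under a normalized key. This is a deliberately naive illustration, not any production pipeline: real systems weigh much richer evidence, such as dates, titles, and identifiers.

```python
# Naive illustrative sketch: cluster variant name strings that may
# refer to the same real-world entity by a normalized blocking key.
import unicodedata
from collections import defaultdict

def normalize(name: str) -> str:
    """Strip accents, punctuation, and case to build a comparison key."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    return "".join(c for c in name.lower() if c.isalnum() or c.isspace()).strip()

def resolve(descriptions):
    """Group descriptions whose normalized forms coincide."""
    clusters = defaultdict(list)
    for d in descriptions:
        clusters[normalize(d)].append(d)
    return dict(clusters)

variants = ["Verdi, Giuseppe", "verdi giuseppe", "Verdi, Giuseppe."]
print(resolve(variants))
```

All three variants collapse into one cluster here; the hard cases in real catalogues are precisely those where normalization alone is not enough.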
    These are the slides from the LIBER 2020 Workshop, LOD Publication for Libraries, run by LIBER's Linked Open Data Working Group. The group is working from 2018 to 2020 to promote and harmonise the linked data publication of libraries. Special interest is placed on making library linked data semantically interoperable and on keeping abreast of other similar initiatives, both within the library sector and beyond. The workshop has two parts: first, a trio of presentations showcasing various aspects of library linked open data; second, the review of an advanced draft version of a set of best practices for releasing linked open library data. The Working Group conducted a survey on library LOD publishing in 2019 and developed best practices based on the results during the first half of 2020. Based on the discussion in the workshop, a finalized version of the best practices will be released in autumn. Abstracts: Semantic Intero...
    The paper defines linked data as a set of best practices used to publish machine-readable data on the web; the technology (or mode of realization) of linked data is associated with the concept of the semantic web. This is the area of the semantic web, or web of data, defined by Tim Berners-Lee as "a web of things in the world, described by data on the web". The paper highlights the continuities and differences between the semantic web and the traditional web, or web of documents. The analysis of linked data takes place within the world of libraries, archives, and museums, traditionally committed to high standards for structuring and sharing data. The data, in fact, assume the role of generating quality information for the network. The production of linked data requires compliance with rules and the use of specific technologies and languages, especially when linked data are published in open mode. The production cycle of linked data may be the track, or a guidel...
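The practices the paper describes ultimately rest on expressing data as RDF triples of subject, predicate, and object. A minimal sketch with plain Python tuples and hypothetical example.org URIs; actual publication would use an RDF library and dereferenceable identifiers.

```python
# Illustrative sketch: represent statements as (subject, predicate,
# object) triples and serialize them in an N-Triples-like form.
EX = "http://example.org/"  # hypothetical base URI

triples = [
    (EX + "work/boheme", EX + "prop/creator", EX + "person/puccini"),
    (EX + "person/puccini", EX + "prop/name", '"Puccini, Giacomo"'),
]

def to_ntriples(triples):
    """Write each triple on one line; quoted strings stay literals."""
    lines = []
    for s, p, o in triples:
        o_term = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {o_term} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

The point of the representation is that the same URI can appear as the object of one statement and the subject of another, which is what lets data published by different institutions link up.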
    The Share Catalogue project is part of a larger cooperation and service-sharing project among university libraries in the South of Italy, named SHARE (Scholarly Heritage and Access to Research). The purpose of the Share Catalogue project is to enrich the remarkable knowledge base represented by the various authority files and bibliographic catalogues coming from different collections with "new and evolving" knowledge from the Web. It will create an integrated information system offering end-users a single tool to access the various libraries' OPACs, while at the same time each library can continue to use its own ILS, applying different cataloguing rules and guidelines. The goals of the project fall into distinct but complementary lines: the conversion of the original records into linked open data, the creation of a new data architecture focused on entity identification according to the BIBFRAME model, and the enrichment of the data with relevant information coming from external projects (e.g. Au...
    The reflection provoked by RDA produced the awareness that the flat format of MARC 21 records is inadequate for expressing the relationships between bibliographic entities that the FRBR model and the RDA standard consider fundamental. RIMMF and BIBFRAME show software developers a way to think in RDA. In Italy, @Cult, a software house and bibliographic agency working for Casalini Libri, has taken on the task of following and facilitating the transition: OliSuite/WeCat provides an implementation of RDA that integrates vocabularies and ontologies already present on the Web by structuring the information as linked open data.
    In the era of the fragmentation of information, integration and cooperation between different projects are needed for the spreading of knowledge. The present work illustrates the processes used to identify entities in the linked data projects of the "SHARE" family, as well as the valuable contribution that Wikidata, the open knowledge base of the Wikimedia family, can give to such initiatives. In this context, the success indicators of the project, such as reuse, interoperability, and influence, are observed to evaluate this cooperation and to propose other possible forms of enrichment.
    The ITACH@ project (Innovative Technologies And Cultural Heritage Aggregation) intends to offer innovative tools for the development of the tourism industry and Italian culture. This paper analyzes a particular technological component called OpLiDaF (Open Linked Data Framework), a platform aimed at the creation, structuring, and visualization of data in RDF/XML. The paper also discusses the different formats, with special attention to procedures and techniques for processing data held in relational databases, according to the instructions provided by the W3C RDB2RDF Working Group, which is working on the development of standard languages for mapping relational data and relational database schemas into RDF and OWL. The paper aims to show the potential of a mapping language between relational database schemas and ontologies implemented in RDF(S) or OWL, used in the OpLiDaF platform: R2O (Relational to Ontology), which enables the production of datasets with explicit, extensible, and well-recognized semantics.
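The RDB2RDF idea can be sketched as follows: the table name drives the RDF class, the primary key drives the subject URI, and the remaining columns drive the properties. The table, columns, and base URI below are hypothetical; standards such as R2RML and the W3C Direct Mapping specify this declaratively rather than in code.

```python
# Illustrative sketch of a direct relational-to-RDF mapping: one row
# becomes one subject with a type triple plus one triple per column.
BASE = "http://example.org/"  # hypothetical base URI

def row_to_triples(table, pk_col, row):
    """Map one relational row to (subject, predicate, object) triples."""
    subject = f"{BASE}{table}/{row[pk_col]}"
    triples = [(subject, "rdf:type", f"{BASE}class/{table}")]
    for col, value in row.items():
        if col != pk_col:
            triples.append((subject, f"{BASE}{table}#{col}", str(value)))
    return triples

# A made-up row from a hypothetical 'book' table.
row = {"id": 42, "title": "Sidereus Nuncius", "year": 1610}
for t in row_to_triples("book", "id", row):
    print(t)
```

A declarative mapping language such as R2O or R2RML expresses the same correspondences (table to class, column to property) as configuration, so the conversion can be re-run or adapted without rewriting code.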
    RDA (Resource Description and Access) was initially released in 2010 and, being particularly appropriate for use by libraries, archives, and museums, replaces the Anglo-American Cataloguing Rules, Second Edition (AACR2). It provides a new structure for the organization of bibliographic data based on the Functional Requirements for Bibliographic Records (FRBR), with more emphasis on identifiers and relationships than on descriptions. In November 2016 the RDA Steering Committee announced steps toward the progressive adoption of the IFLA Library Reference Model (LRM). RDA also supports the Linked Data environment through the representation of RDA entities, elements, relationship designators, and vocabulary encoding schemes in the Resource Description Framework (RDF, the syntax of the semantic web) in the RDA Registry. The paper is concerned with the application of the RDA standard within the field of Linked Data and how it may be used to improve the quality of the data produced to reach t...