The scientific model development process is often documented in an ad-hoc, unstructured manner, leading to difficulty in attributing provenance to data products. This can cause issues when the data owner or another interested stakeholder seeks to interpret the data at a later date. In this paper we discuss the design, development and evaluation of a Semantically-enhanced Electronic Lab-Notebook that facilitates the capture of provenance for the model development process within the atmospheric chemistry community. We then consider the value of semantically enhanced provenance within wider community processes, Semantically-enhanced Model-Experiment Evaluation Processes (SeMEEPs), that leverage data generated by experiments and computational models to conduct evaluations.
We present an automatic video processing system for tracking analysis in Morris water escape testing videos. The system automatically extracts from such videos information (metadata) describing the spatio-temporal trajectories of animals in the maze and the timings of behavioral events such as stopping or crossing of a target area. The specific semantic metadata are produced
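The event annotation described above can be illustrated with a small sketch: given sampled trajectory points, detect entries into a circular target zone and low-speed "stop" events. This is a hypothetical illustration of the idea, not the paper's system; the event names, zone geometry and speed threshold are all assumptions.

```python
import math

def annotate_trajectory(points, target_center, target_radius, stop_speed=5.0):
    """Annotate a trajectory with simple behavioural events.

    points: list of (t, x, y) samples; the target zone is a circle.
    Returns (t, event) tuples for 'enter_target' and 'stop' events.
    Event names and the stop-speed threshold are illustrative assumptions.
    """
    events = []
    inside_prev = False
    for i, (t, x, y) in enumerate(points):
        # Zone-crossing event: transition from outside to inside the target.
        inside = math.hypot(x - target_center[0], y - target_center[1]) <= target_radius
        if inside and not inside_prev:
            events.append((t, "enter_target"))
        inside_prev = inside
        # Stop event: instantaneous speed drops below the threshold.
        if i > 0:
            t0, x0, y0 = points[i - 1]
            dt = t - t0
            speed = math.hypot(x - x0, y - y0) / dt if dt > 0 else 0.0
            if speed < stop_speed:
                events.append((t, "stop"))
    return events
```

A real tracker would first obtain the (t, x, y) samples from video via background subtraction or a detector; this sketch starts from that point.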
This paper presents the architecture of the RAGE repository, a unique and dedicated infrastructure that provides access to a wide variety of advanced technology components for applied game development. The RAGE project, the principal Horizon 2020 research and innovation project in applied gaming, will develop software components (RAGE software assets) that are reusable across a variety of game engines, game platforms and programming languages. The RAGE repository provides storage space for these assets and associated artefacts, and is designed as an asset life-cycle management system for defining, publishing, updating, searching and packaging assets for distribution. The repository will be embedded in a social platform for asset developers and other stakeholders. A dedicated Asset Repository Manager provides the main functionality of the repository and its integration with other systems; additional tools supporting the Asset Manager are also presented and discussed. When the RAGE repository is fully operational, applied game developers will be able to enhance the quality of their games through the application of selected advanced game software assets. By making the RAGE repository system and software assets available, the RAGE project aims to stimulate the development and uptake of the applied games industry in Europe.
The realization of the Semantic Web depends on the availability of a critical mass of metadata for web content, associated with the respective formal knowledge about the world. We claim that the Semantic Web, at its current stage of development, is in critical need of metadata generation and usage schemata that are specific, well-defined and easy to understand. This
Interactive digital TV is becoming a reality throughout the globe. The most important part of the picture is new services for the user: improved audio-video quality, but more significantly entirely new content and a new interaction experience. To this end, we explore the potential of introducing semantics into the distribution, processing and usage of media content.
We present a method for creating natural language interfaces to databases (NLIDB) that translate natural language queries into SQL. The method is domain independent, i.e., it avoids the tedious process of configuring the NLIDB for a given domain. We automatically generate the domain dictionary for query translation using the semantic metadata of the database. Our semantic representation of a query is a graph incorporating information from the database metadata. The query is translated taking into account the parts of speech of ...
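The core idea of deriving a domain dictionary from database metadata can be sketched as follows. This is a deliberately minimal toy, not the paper's method: it assumes a schema given as a table-to-columns mapping and matches single words only, with no parsing, joins or conditions.

```python
def build_dictionary(schema):
    """Derive a domain dictionary from database metadata (table and column names)."""
    lexicon = {}
    for table, columns in schema.items():
        lexicon[table.lower()] = ("table", table)
        for col in columns:
            lexicon[col.lower()] = ("column", table, col)
    return lexicon

def translate(query, schema):
    """Highly simplified NL->SQL: find one table and the columns mentioned.

    Returns None when no table or no columns are recognized.
    """
    lexicon = build_dictionary(schema)
    words = [w.strip("?,.").lower() for w in query.split()]
    table, cols = None, []
    for w in words:
        entry = lexicon.get(w)
        if entry and entry[0] == "table":
            table = entry[1]
        elif entry and entry[0] == "column":
            cols.append(entry[2])
    if table is None or not cols:
        return None
    return f"SELECT {', '.join(cols)} FROM {table}"
```

For example, with `schema = {"employees": ["name", "salary"]}`, the query "show the name and salary of employees" maps to `SELECT name, salary FROM employees`. A full system would also use part-of-speech information and synonyms, as the abstract indicates.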
This paper presents the PRIAMOS research project, which aims to develop reusable core technology for the real-time extraction and management of semantics from combined multimedia sources. The ultimate goal is the use of this technology in adaptive decision-making systems capable of handling large volumes of multimedia data within realistic, varying environments. Research topics such as the definition of application-specific ontologies describing physical and virtual objects, the construction of a knowledge base, and intelligent queries that infer information from semantic metadata will be addressed in the scope of this project. Two applications will be enhanced with this core technology: "Security and Surveillance for Citizen Security" and "Smart Room for Meetings and Conferences".
In the SmartPush project, professional editors add semantic metadata to the information flow when the content is created. This metadata is used to filter the information flow and provide end users with a personalized news service. The personalization and delivery process is modeled as software agents, to whom the user delegates the task of sifting through incoming information. The key components of the SmartPush architecture have been implemented, and the focus of the project is shifting towards a pilot implementation and testing the ideas in practice.
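The metadata-based filtering step can be illustrated with a minimal sketch: score each item's editor-assigned keywords against a weighted user interest profile and keep the items above a threshold. The item structure, scoring function and threshold are assumptions for illustration, not the SmartPush design.

```python
def filter_news(items, profile, threshold=0.5):
    """Filter news items by matching their semantic metadata to a user profile.

    items: dicts with a "title" and a "metadata" list of keywords.
    profile: keyword -> interest weight (hypothetical representation).
    Returns (score, title) pairs for accepted items, best first.
    """
    selected = []
    for item in items:
        keywords = item.get("metadata", [])
        if not keywords:
            continue  # no metadata, nothing to match against
        # Average profile weight over the item's keywords.
        score = sum(profile.get(k, 0.0) for k in keywords) / len(keywords)
        if score >= threshold:
            selected.append((score, item["title"]))
    return sorted(selected, reverse=True)
```

In the agent-based architecture the abstract describes, a personal agent would run a filter like this continuously over the incoming stream on the user's behalf.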
In order to serve the needs of their current and future users, digital libraries must provide access to the relevant data. Since recent developments still lag behind user needs, describing data using metadata has proven to be crucial for building digital libraries and for providing effective access to the information. This paper describes the use of
A method to automatically annotate video items with semantic metadata is presented. The method has been developed in the context of the Papyrus project to annotate documentary-like broadcast videos with a set of relevant keywords using automatic speech recognition (ASR) ...
In this article we present the activities of the Ontology Working Group (OWG) under the Metabolomics Standards Initiative (MSI) umbrella. Our endeavour aims to synergise the work of several communities, where independent activities are underway to develop terminologies and databases for metabolomics investigations. We have joined forces to rise to the challenges associated with interpreting and integrating experimental process and data across disparate sources (software and databases, private and public). Our focus is ...
Semantic metadata, which provides semantic information about data, plays an important role in document management, fusion and information search. Automatic metadata generation has become a major challenge in this era of data explosion. There are two fundamental problems in using automatic techniques: the acquisition of the target semantic metadata and the construction of a training corpus. The first problem involves expert knowledge and the second requires a great deal of manual work, making both critical to building a successful system. In this paper, we address the two problems using Wikipedia: we extract the target metadata by analyzing the tables of contents of Wikipedia's entries, and build the training corpus by analyzing each entry's structure and assigning the true semantic metadata. The experimental results demonstrate that both problems can be resolved.
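The table-of-contents analysis can be sketched as follows: given the section headings of several Wikipedia entries in one category, keep the headings that recur across entries as candidate target metadata fields. This is a hypothetical simplification of the idea; the input format and frequency threshold are assumptions.

```python
from collections import Counter

def target_metadata(tocs, min_freq=2):
    """Derive target semantic metadata fields from entry tables of contents.

    tocs: one list of section headings per Wikipedia entry.
    Headings recurring in at least min_freq entries are kept as target
    metadata fields (threshold is an illustrative assumption).
    """
    counts = Counter()
    for toc in tocs:
        # Count each heading once per entry, case-insensitively.
        counts.update(set(h.lower() for h in toc))
    return sorted(h for h, c in counts.items() if c >= min_freq)
```

For instance, across entries about countries, headings like "History" and "Economy" recur and would become target metadata fields, while one-off headings are dropped.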
What would it take to provide a congenial and comfortable environment for finding and reading books in a digital library? To locate information we need algorithms that extract semantic metadata in forms such as keyphrases, with accuracy and consistency comparable to human indexers. To support this we need comprehensive, detailed thesauri, automatically created, that embody contemporary language and usage. To emulate and enjoy the serendipitous adventures found in real libraries and bookstores we need browsing environments that provide readers with multiple clues in parallel: keyphrases, text excerpts, and supplementary knowledge structures—as well as the documents themselves. For readers to cherish and enjoy individual works we need to transcend the bland reading environment provided by the web by recreating the subjective impact and pleasurable experience of interacting with real books. This paper describes research that aims to achieve these goals.
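The keyphrase-extraction goal above can be illustrated with a deliberately crude sketch: match short n-grams from the text against a controlled vocabulary (standing in for the thesauri the abstract describes) and rank them by frequency. This is an illustrative toy, not the indexing algorithm the paper proposes.

```python
import re
from collections import Counter

def extract_keyphrases(text, thesaurus, top_n=3):
    """Rank thesaurus-listed phrases found in the text by frequency.

    thesaurus: a set of lowercase controlled-vocabulary phrases
    (a stand-in for an automatically built thesaurus).
    """
    words = re.findall(r"[a-z]+", text.lower())
    candidates = Counter()
    # Consider unigrams and bigrams as candidate phrases.
    for n in (1, 2):
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            if phrase in thesaurus:
                candidates[phrase] += 1
    return [p for p, _ in candidates.most_common(top_n)]
```

Reaching human-indexer accuracy, as the abstract argues, requires far richer thesauri and disambiguation than frequency matching, but the interface — text in, ranked keyphrases out — is the same.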
In this paper we discuss the use of knowledge for the automatic extraction of semantic metadata from multimedia content. For the representation of knowledge we extended and enriched current general-purpose ontologies to include low-level visual features. More specifically, we implemented a tool that links MPEG-7 visual descriptors to high-level, domain-specific concepts. To exploit this knowledge infrastructure we developed an experimentation platform that allows us to analyze multimedia content and automatically create the associated semantic metadata, as well as to test, validate and refine the ontologies built. We pursued a tight and functional integration of the knowledge base and the analysis modules, putting them in a loop of constant interaction rather than one being merely a pre- or post-processing step of the other.