Search Results (173)

Search Parameters:
Keywords = metadata management

18 pages, 4110 KiB  
Article
Is It Possible to Establish an Economic Trend Correlating Territorial Assessment Indicators and Earth Observation? A Critical Analysis of the Pandemic Impact in an Italian Region
by Maria Prezioso
Sustainability 2024, 16(19), 8695; https://doi.org/10.3390/su16198695 - 9 Oct 2024
Abstract
The paper is set within the methodological framework of the Territorial Impact Assessment (TIA) process, which is an instrument designed to facilitate sustainable and cohesive policy-making choices at the European level. The article is developed within the context of a European H2020-RICE cooperative project, which utilises the STeMA (Sustainable Territorial Economic/Environmental Management Approach) TIA methodology to investigate the potential relationship between statistical economic indicators, specifically Gross Domestic Product, and related parameters (metadata), and Earth Observation (EO) data. The objective is to provide evidence of socioeconomic trends during the Coronavirus Disease 2019 (COVID-19) pandemic in the Lazio Region (Italy), with a particular focus on the metropolitan area of Rome, the capital city. In line with the pertinent European context and the scientific literature on the subject, the paper examines the potential for combining classical and Earth observation indicators to assess macroeconomic dimensions of development, specifically in terms of gross domestic product (GDP). The results of the analysis indicate the presence of certain correlations between grey data and EO information. The STeMA-TIA approach allows for the measurement and correlation of both qualitative and quantitative statistical indicators with typological functional areas (in accordance with European Commission-EC and Committee of Ministers responsible for Spatial/Regional Planning—CEMAT guidance) at the NUTS (Nomenclature des unités territoriales statistiques) 2 and 3 levels. This facilitates the territorialisation of information, enabling the indirect comparison of data with satellite data and economic trends. A time series of data was gathered and organised to facilitate comparison between different periods, beginning with 2019 and extending to the present day. In order to measure and monitor the evolution of the selected territorial economies (the Lazio Region), a synthetic index (or composite indicator) was developed for the economic and epidemic dimensions. This index combines single values of indicators according to a specific STeMA methodology. It is important to note that there are some critical observations to be made about the impact on GDP, due to the discrepancy between the indicators in the two fields of observation. Full article
(This article belongs to the Special Issue Development Economics and Sustainable Economic Growth)
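As a rough illustration of the synthetic index described in this abstract, the sketch below aggregates normalized indicator values into a single score per area. Min-max normalization, equal weights, and the sample figures are assumptions for illustration; the actual STeMA weighting and aggregation rules are defined by the methodology itself.

```python
# Illustrative composite indicator: min-max normalization plus a
# weighted linear aggregation. Weights and sample values are invented;
# the real STeMA-TIA rules differ.
import numpy as np

def composite_index(indicators: np.ndarray, weights=None) -> np.ndarray:
    """indicators: shape (n_areas, n_indicators), raw values."""
    mins, maxs = indicators.min(axis=0), indicators.max(axis=0)
    normalized = (indicators - mins) / (maxs - mins)   # scale to [0, 1]
    if weights is None:                                # default: equal weights
        weights = np.full(indicators.shape[1], 1 / indicators.shape[1])
    return normalized @ weights

# Example: three NUTS-3 areas, two dimensions (economic, epidemic)
values = np.array([[30_000, 120], [24_000, 310], [27_500, 95]])
print(composite_index(values))
```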

17 pages, 5119 KiB  
Article
Application of a Real-Time Field-Programmable Gate Array-Based Image-Processing System for Crop Monitoring in Precision Agriculture
by Sabiha Shahid Antora, Mohammad Ashik Alahe, Young K. Chang, Tri Nguyen-Quang and Brandon Heung
AgriEngineering 2024, 6(3), 3345-3361; https://doi.org/10.3390/agriengineering6030191 - 14 Sep 2024
Viewed by 552
Abstract
Precision agriculture (PA) technologies combined with remote sensors, GPS, and GIS are transforming the agricultural industry while promoting sustainable farming practices with the ability to optimize resource utilization and minimize environmental impact. However, their implementation faces challenges such as high computational costs, complexity, low image resolution, and limited GPS accuracy. These issues hinder the timely delivery of prescription maps and impede farmers’ ability to make effective, on-the-spot decisions regarding farm management, especially in stress-sensitive crops. Therefore, this study proposes field programmable gate array (FPGA)-based hardware solutions and real-time kinematic GPS (RTK-GPS) to develop a real-time crop-monitoring system that can address the limitations of current PA technologies. Our proposed system uses high-accuracy RTK and real-time FPGA-based image-processing (RFIP) devices for data collection, geotagging real-time field data via Python and a camera. The acquired images are processed to extract metadata and then visualized as a heat map on Google Maps, indicating green area intensity based on romaine lettuce leafage. The RFIP system showed a strong correlation (R2 = 0.9566) with a reference system and performed well in field tests, providing a Lin’s concordance correlation coefficient (CCC) of 0.8292. This study demonstrates the potential of the developed system to address current PA limitations by providing real-time, accurate data for immediate decision making. In the future, this proposed system will be integrated with autonomous farm equipment to further enhance sustainable farming practices, including real-time crop health monitoring, yield assessment, and crop disease detection. Full article
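To make the heat-map step concrete, here is a minimal Python sketch that pairs a GPS fix with the green-pixel fraction of one frame. The HSV thresholds, file name, and coordinates are placeholder assumptions; the paper's pipeline computes this on FPGA hardware rather than in software.

```python
# Sketch only: green-area intensity of one frame plus a GPS fix.
# Thresholds, file name, and coordinates are illustrative assumptions.
import cv2
import numpy as np

def green_fraction(bgr_image: np.ndarray) -> float:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Hue ~35-85 roughly brackets green foliage; tune per crop and lighting.
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    return float(np.count_nonzero(mask)) / mask.size

frame = cv2.imread("plot_0001.jpg")        # a frame from the camera
lat, lon = 45.0903, -64.3601               # a fix from the RTK-GPS unit
sample = {"lat": lat, "lon": lon, "green": green_fraction(frame)}
print(sample)                              # one heat-map data point
```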

22 pages, 10596 KiB  
Article
Development of a Seafloor Litter Database and Application of Image Preprocessing Techniques for UAV-Based Detection of Seafloor Objects
by Ivan Biliškov and Vladan Papić
Electronics 2024, 13(17), 3524; https://doi.org/10.3390/electronics13173524 - 5 Sep 2024
Viewed by 563
Abstract
Marine litter poses a significant global threat to marine ecosystems, primarily driven by poor waste management, inadequate infrastructure, and irresponsible human activities. This research investigates the application of image preprocessing techniques and deep learning algorithms for the detection of seafloor objects, specifically marine debris, using unmanned aerial vehicles (UAVs). The primary objective is to develop non-invasive methods for detecting marine litter to mitigate environmental impacts and support the health of marine ecosystems. Data was collected remotely via UAVs, resulting in a novel database of over 5000 images and 12,000 objects categorized into 31 classes, with metadata such as GPS location, wind speed, and solar parameters. Various image preprocessing methods were employed to enhance underwater object detection, with the Removal of Water Scattering (RoWS) method demonstrating superior performance. The proposed deep neural network architecture significantly improved detection precision compared to existing models. The findings indicate that appropriate databases and preprocessing methods substantially enhance the accuracy and precision of underwater object detection algorithms. Full article
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
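For orientation, here is a generic gray-world white-balance step of the kind commonly applied before underwater detection. This is not the paper's RoWS method, only a simple baseline from the same preprocessing family.

```python
# Gray-world white balance: a common underwater color-correction
# baseline. NOT the RoWS method evaluated in the paper.
import numpy as np

def gray_world(rgb: np.ndarray) -> np.ndarray:
    """rgb: float image in [0, 1], shape (H, W, 3)."""
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means    # pull channel means to gray
    return np.clip(rgb * gain, 0.0, 1.0)
```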

30 pages, 3456 KiB  
Article
Towards Next-Generation Urban Decision Support Systems through AI-Powered Construction of Scientific Ontology Using Large Language Models—A Case in Optimizing Intermodal Freight Transportation
by Jose Tupayachi, Haowen Xu, Olufemi A. Omitaomu, Mustafa Can Camur, Aliza Sharmin and Xueping Li
Smart Cities 2024, 7(5), 2392-2421; https://doi.org/10.3390/smartcities7050094 - 31 Aug 2024
Viewed by 889
Abstract
The incorporation of Artificial Intelligence (AI) models into various optimization systems is on the rise. However, addressing complex urban and environmental management challenges often demands deep expertise in domain science and informatics. This expertise is essential for deriving data and simulation-driven insights that support informed decision-making. In this context, we investigate the potential of leveraging pre-trained Large Language Models (LLMs) to create knowledge representations for supporting operations research. By adopting the ChatGPT-4 API as the reasoning core, we outline an applied workflow that encompasses natural language processing, Methontology-based prompt tuning, and Generative Pre-trained Transformer (GPT) models to automate the construction of scenario-based ontologies using existing research articles and technical manuals of urban datasets and simulations. From these ontologies, knowledge graphs can be derived using widely adopted formats and protocols, guiding various tasks towards data-informed decision support. The performance of our methodology is evaluated through a comparative analysis that contrasts our AI-generated ontology with the widely recognized pizza ontology, commonly used in tutorials for popular ontology software. We conclude with a real-world case study on optimizing the complex system of multi-modal freight transportation. Our approach advances urban decision support systems by enhancing data and metadata modeling, improving data integration and simulation coupling, and guiding the development of decision support strategies and essential software components. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
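A minimal sketch of the prompting step, assuming the OpenAI Python client; the model name, input file, and prompt wording are placeholders, not the paper's Methontology-tuned prompts.

```python
# Hypothetical sketch: ask an LLM to draft ontology classes from a
# document excerpt. Model, file name, and prompt are assumptions.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

snippet = open("freight_manual_excerpt.txt").read()
prompt = (
    "From the text below, list candidate ontology classes, their "
    "properties, and subclass-of relations as JSON.\n\n" + snippet
)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)   # draft classes and relations
```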

18 pages, 3360 KiB  
Article
Automated Quality Control Solution for Radiographic Imaging of Lung Diseases
by Christoph Kleefeld, Jorge Patricio Castillo Lopez, Paulo R. Costa, Isabelle Fitton, Ahmed Mohamed, Csilla Pesznyak, Ricardo Ruggeri, Ioannis Tsalafoutas, Ioannis Tsougos, Jeannie Hsiu Ding Wong, Urban Zdesar, Olivera Ciraj-Bjelac and Virginia Tsapaki
J. Clin. Med. 2024, 13(16), 4967; https://doi.org/10.3390/jcm13164967 - 22 Aug 2024
Viewed by 862
Abstract
Background/Objectives: Radiography is an essential and low-cost diagnostic method in pulmonary medicine that is used for the early detection and monitoring of lung diseases. An adequate and consistent image quality (IQ) is crucial to ensure accurate diagnosis and effective patient management. This pilot study evaluates the feasibility and effectiveness of the International Atomic Energy Agency (IAEA)’s remote and automated quality control (QC) methodology, which has been tested in multiple imaging centers. Methods: The data, collected between April and December 2022, included 47 longitudinal data sets from 22 digital radiographic units. Participants submitted metadata on the radiography setup, exposure parameters, and imaging modes. The database comprised 968 exposures, each representing multiple image quality parameters and metadata of image acquisition parameters. Python scripts were developed to collate, analyze, and visualize image quality data. Results: The pilot survey identified several critical issues affecting the future implementation of the IAEA method, as follows: (1) difficulty in accessing raw images due to manufacturer restrictions, (2) variability in IQ parameters even among identical X-ray systems and image acquisitions, (3) inconsistencies in phantom construction affecting IQ values, (4) vendor-dependent DICOM tag reporting, and (5) large variability in SNR values compared to other IQ metrics, making SNR less reliable for image quality assessment. Conclusions: Cross-comparisons among radiography systems must be treated with caution because of the dependence on phantom construction and acquisition mode variations. Awareness of these factors will help generate reliable and standardized quality control programs, which are crucial for accurate and fair evaluations, especially in high-frequency chest imaging. Full article
(This article belongs to the Section Pulmonology)
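Point (5) is easy to picture with a toy ROI-based SNR estimate; ROI placement and the exact SNR definition vary between QC protocols, which is one source of the variability the survey reports. The function below is an illustration, not the IAEA scripts.

```python
# Toy ROI-based SNR estimate for a phantom image; definitions and ROI
# placement differ between QC protocols, hence the variability.
import numpy as np

def roi_snr(image: np.ndarray, y: int, x: int, size: int = 32) -> float:
    roi = image[y : y + size, x : x + size].astype(float)
    return roi.mean() / roi.std()    # mean signal over pixel noise

phantom = np.random.default_rng(0).normal(100.0, 5.0, (512, 512))
print(roi_snr(phantom, y=240, x=240))   # ~20 for this synthetic image
```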

62 pages, 1897 KiB  
Review
Construction of Knowledge Graphs: Current State and Challenges
by Marvin Hofer, Daniel Obraczka, Alieh Saeedi, Hanna Köpcke and Erhard Rahm
Information 2024, 15(8), 509; https://doi.org/10.3390/info15080509 - 22 Aug 2024
Viewed by 1352
Abstract
With Knowledge Graphs (KGs) at the center of numerous applications such as recommender systems and question-answering, the need for generalized pipelines to construct and continuously update such KGs is increasing. While the individual steps that are necessary to create KGs from unstructured sources (e.g., text) and structured data sources (e.g., databases) are mostly well researched for their one-shot execution, their adoption for incremental KG updates and the interplay of the individual steps have hardly been investigated in a systematic manner so far. In this work, we first discuss the main graph models for KGs and introduce the major requirements for future KG construction pipelines. Next, we provide an overview of the necessary steps to build high-quality KGs, including cross-cutting topics such as metadata management, ontology development, and quality assurance. We then evaluate the state of the art of KG construction with respect to the introduced requirements for specific popular KGs, as well as some recent tools and strategies for KG construction. Finally, we identify areas in need of further research and improvement. Full article
(This article belongs to the Special Issue Knowledge Graph Technology and its Applications II)
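To make the construction steps concrete, here is a toy knowledge-graph fragment with rdflib, including a provenance triple as one example of the metadata management the review discusses; the entity names and namespace URI are illustrative only.

```python
# Toy KG fragment in rdflib; names and namespace are illustrative.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/kg/")
g = Graph()
g.add((EX.Ada_Lovelace, RDF.type, EX.Person))
g.add((EX.Ada_Lovelace, EX.birthYear, Literal(1815)))
# Cross-cutting concern: metadata about where a statement came from.
g.add((EX.Ada_Lovelace, EX.source, Literal("manually curated")))
print(g.serialize(format="turtle"))
```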

20 pages, 19393 KiB  
Article
Integrating Multimodal Generative AI and Blockchain for Enhancing Generative Design in the Early Phase of Architectural Design Process
by Adam Fitriawijaya and Taysheng Jeng
Buildings 2024, 14(8), 2533; https://doi.org/10.3390/buildings14082533 - 16 Aug 2024
Viewed by 896
Abstract
Multimodal generative AI and generative design empower architects to create better-performing, sustainable, and efficient design solutions and explore diverse design possibilities. Blockchain technology ensures secure data management and traceability. This study aims to design and evaluate a framework that integrates blockchain into generative AI-driven design drawing processes in architectural design to enhance authenticity and traceability. We employed an example scenario that integrates generative AI and blockchain into architectural design, leveraging a multimodal generative AI tool to enhance design creativity by combining textual and visual inputs. The resulting images were stored on blockchain systems, where metadata were attached to each image before it was converted into NFT format, ensuring secure data ownership and management. This research exemplifies the pragmatic fusion of generative AI and blockchain technology applied in architectural design for more transparent, secure, and effective results in the early stages of the architectural design process. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
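The metadata-attachment step might look like the sketch below: hash the generated image and wrap it in an ERC-721-style record. Field names and the file name are assumptions; minting and decentralized storage are out of scope here.

```python
# Sketch of attaching metadata to a generated design image before
# NFT conversion. Field names and file name are assumptions.
import hashlib
import json

with open("concept_render.png", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

metadata = {
    "name": "Early-phase concept render",
    "description": "Generated from combined text and sketch inputs",
    "image_sha256": digest,   # binds the record to this exact file
    "designer": "Studio A",
}
print(json.dumps(metadata, indent=2))
```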

19 pages, 520 KiB  
Article
Antimicrobial Resistance Surveillance: Data Harmonisation and Data Selection within Secondary Data Use
by Sinja Bleischwitz, Tristan Salomon Winkelmann, Yvonne Pfeifer, Martin Alexander Fischer, Niels Pfennigwerth, Jens André Hammerl, Ulrike Binsker, Jörg B. Hans, Sören Gatermann, Annemarie Käsbohrer, Guido Werner and Lothar Kreienbrock
Antibiotics 2024, 13(7), 656; https://doi.org/10.3390/antibiotics13070656 - 16 Jul 2024
Viewed by 773
Abstract
Resistance to last-resort antibiotics is a global threat to public health. Therefore, surveillance and monitoring systems for antimicrobial resistance should be established on a national and international scale. For the development of a One Health surveillance system, we collected exemplary data on carbapenem- and colistin-resistant bacterial isolates from human, animal, food, and environmental sources. We pooled secondary data from routine screenings, hospital outbreak investigations, and studies on antimicrobial resistance. For a joint One Health evaluation, this study incorporates epidemiological metadata with phenotypic resistance information and molecular data at the isolate level. To harmonise the heterogeneous original information for the intended use, we developed a generic strategy. By defining and categorising variables, followed by plausibility checks, we created a catalogue for prospective data collections and applied it to our dataset, enabling us to perform preliminary descriptive statistical analyses. This study shows the complexity of data management using heterogeneous secondary data pools and gives an insight into the early stages of the development of an AMR surveillance programme using secondary data. Full article
(This article belongs to the Special Issue A One Health Approach to Antimicrobial Resistance)
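The catalogue-plus-plausibility-check idea can be rehearsed in a few lines of pandas; the variable names, allowed categories, and MIC range below are invented for illustration and are not the study's actual catalogue.

```python
# Toy harmonisation check: a variable catalogue with allowed values and
# plausibility rules. Variables and ranges are invented examples.
import pandas as pd

CATALOGUE = {
    "source": {"human", "animal", "food", "environment"},
    "mic_colistin": lambda v: 0 < v <= 1024,   # plausible MIC range, mg/L
}

df = pd.DataFrame({
    "source": ["human", "animal", "soil"],
    "mic_colistin": [4.0, 0.5, -1.0],
})
bad_source = ~df["source"].isin(CATALOGUE["source"])
bad_mic = ~df["mic_colistin"].apply(CATALOGUE["mic_colistin"])
print(df[bad_source | bad_mic])   # rows failing the plausibility checks
```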

20 pages, 5055 KiB  
Article
Automatic Extraction and Cluster Analysis of Natural Disaster Metadata Based on the Unified Metadata Framework
by Zongmin Wang, Xujie Shi, Haibo Yang, Bo Yu and Yingchun Cai
ISPRS Int. J. Geo-Inf. 2024, 13(6), 201; https://doi.org/10.3390/ijgi13060201 - 14 Jun 2024
Viewed by 749
Abstract
The development of information technology has led to massive, multidimensional, and heterogeneously sourced disaster data. However, there is currently no universal metadata standard for managing natural disaster data. Common pre-trained models for information extraction require extensive training data and show limited effectiveness where annotated resources are scarce. This study establishes a unified natural disaster metadata standard, utilizes self-trained universal information extraction (UIE) models and Python libraries to extract metadata stored in both structured and unstructured forms, and analyzes the results using the Word2vec-Kmeans cluster algorithm. The results show that (1) the self-trained UIE model, with a learning rate of 3 × 10⁻⁴ and a batch_size of 32, significantly improves extraction results for various natural disasters by over 50%. Our optimized UIE model outperforms many other extraction methods in terms of precision, recall, and F1 scores. (2) The quality assessments of consistency, completeness, and accuracy for ten tables all exceed 0.80, with variances between the three dimensions being 0.04, 0.03, and 0.05. The overall evaluation of the data items of the tables also exceeds 0.80, consistent with the results at the table level. The metadata model framework constructed in this study demonstrates high-quality stability. (3) Taking the flood dataset as an example, clustering reveals five main themes with high similarity within clusters, and the differences between clusters are deemed significant relative to the differences within clusters at a significance level of 0.01. Overall, this experiment supports the effective sharing of disaster data resources and enhances natural disaster emergency response efficiency. Full article
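The clustering stage maps onto a short gensim/scikit-learn pipeline; the toy corpus and the number of clusters below are placeholders (the paper finds five themes on the flood dataset).

```python
# Word2vec-Kmeans in miniature: embed records as mean word vectors,
# then cluster. Corpus and k are placeholder assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

docs = [["flood", "rainfall", "warning"],
        ["earthquake", "magnitude", "aftershock"],
        ["flood", "levee", "evacuation"]]
w2v = Word2Vec(docs, vector_size=50, min_count=1, seed=1)
X = np.array([w2v.wv[doc].mean(axis=0) for doc in docs])  # record vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # cluster id per metadata record
```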

18 pages, 1182 KiB  
Article
Towards a New Business Model for Streaming Platforms Using Blockchain Technology
by Rendrikson Soares and André Araújo
Future Internet 2024, 16(6), 207; https://doi.org/10.3390/fi16060207 - 13 Jun 2024
Viewed by 894
Abstract
Streaming platforms have revolutionized the digital entertainment industry, but challenges and research opportunities remain to be addressed. One current concern is the lack of transparency in the business model of video streaming platforms, which makes it difficult for content creators to access viewing metrics and receive payments without the intermediary of third parties. Additionally, there is no way to trace payment transactions. This article presents a computational architecture based on blockchain technology to enable transparency in audience management and payments in video streaming platforms. Smart contracts will define the business rules of the streaming services, while middleware will integrate the metadata of the streaming platforms with the proposed computational solution. The proposed solution has been validated through data transactions on different blockchain networks and interviews with content creators from video streaming platforms. The results confirm the viability of the proposed solution in enhancing transparency and auditability in the realm of audience control services and payments on video streaming platforms. Full article
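The business rule such a smart contract might encode can be rehearsed in plain Python before moving on-chain; the split rule, figures, and field names below are purely illustrative.

```python
# Illustrative revenue split by verified view counts; the same inputs,
# rule, and outputs would be auditable on-chain. Numbers are invented.
views = {"creator_alice": 120_000, "creator_bob": 80_000}
pool = 1_000.00                          # period revenue to distribute
total_views = sum(views.values())
payouts = {c: round(pool * v / total_views, 2) for c, v in views.items()}
print(payouts)   # {'creator_alice': 600.0, 'creator_bob': 400.0}
```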

8 pages, 230 KiB  
Entry
Responsible Research Assessment and Research Information Management Systems
by Joachim Schöpfel and Otmane Azeroual
Encyclopedia 2024, 4(2), 915-922; https://doi.org/10.3390/encyclopedia4020059 - 30 May 2024
Viewed by 1673
Definition
In the context of open science, universities, research-performing and funding organizations and authorities worldwide are moving towards more responsible research assessment (RRA). In 2022, the European Coalition for Advancing Research Assessment (CoARA) published an agreement with ten commitments, including the recognition of the “diversity of contributions to, and careers in, research”, the “focus on qualitative evaluation for which peer review is central, supported by responsible use of quantitative indicators”, and the “abandon (of) inappropriate uses in research assessment of journal- and publication-based metrics”. Research assessment (RA) is essential for research of the highest quality. The transformation of assessment indicators and procedures directly affects the underlying research information management infrastructures (also called current research information systems) which collect and store metadata on research activities and outputs. This entry investigates the impact of RRA on these systems, on their development and implementation, their data model and governance, including digital ethics. Full article
(This article belongs to the Section Social Sciences)
20 pages, 2359 KiB  
Article
A Multi-Stage Approach for Cardiovascular Risk Assessment from Retinal Images Using an Amalgamation of Deep Learning and Computer Vision Techniques
by Deepthi K. Prasad, Madhura Prakash Manjunath, Meghna S. Kulkarni, Spoorthi Kullambettu, Venkatakrishnan Srinivasan, Madhulika Chakravarthi and Anusha Ramesh
Diagnostics 2024, 14(9), 928; https://doi.org/10.3390/diagnostics14090928 - 29 Apr 2024
Viewed by 1779
Abstract
Cardiovascular diseases (CVDs) are a leading cause of mortality worldwide. Early detection and effective risk assessment are crucial for implementing preventive measures and improving patient outcomes for CVDs. This work presents a novel approach to CVD risk assessment using fundus images, leveraging the inherent connection between retinal microvascular changes and systemic vascular health. This study aims to develop a predictive model for the early detection of CVDs by evaluating retinal vascular parameters. This methodology integrates both handcrafted features derived through mathematical computation and retinal vascular patterns extracted by artificial intelligence (AI) models. By combining these approaches, we seek to enhance the accuracy and reliability of CVD risk prediction in individuals. The methodology integrates state-of-the-art computer vision algorithms and AI techniques in a multi-stage architecture to extract relevant features from retinal fundus images. These features encompass a range of vascular parameters, including vessel caliber, tortuosity, and branching patterns. Additionally, a deep learning (DL)-based binary classification model is incorporated to enhance predictive accuracy. A dataset comprising fundus images and comprehensive metadata from the clinical trials conducted is utilized for training and validation. The proposed approach demonstrates promising results in the early prediction of CVD risk factors. The interpretability of the approach is enhanced through visualization techniques that highlight the regions of interest within the fundus images that are contributing to the risk predictions. Furthermore, the validation conducted in the clinical trials and the performance analysis of the proposed approach shows the potential to provide early and accurate predictions. The proposed system not only aids in risk stratification but also serves as a valuable tool for identifying vascular abnormalities that may precede overt cardiovascular events. The approach has achieved an accuracy of 85% and the findings of this study underscore the feasibility and efficacy of leveraging fundus images for cardiovascular risk assessment. As a non-invasive and cost-effective modality, fundus image analysis presents a scalable solution for population-wide screening programs. This research contributes to the evolving landscape of precision medicine by providing an innovative tool for proactive cardiovascular health management. Future work will focus on refining the solution’s robustness, exploring additional risk factors, and validating its performance in additional and diverse clinical settings. Full article
(This article belongs to the Special Issue Classifications of Diseases Using Machine Learning Algorithms)
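One handcrafted feature from the family the abstract mentions is vessel tortuosity, commonly taken as arc length over chord length of a vessel centerline; the sample points below are synthetic.

```python
# Vessel tortuosity as arc length / chord length of a centerline.
# The centerline points here are synthetic, for illustration only.
import numpy as np

def tortuosity(points: np.ndarray) -> float:
    """points: (N, 2) ordered centerline coordinates."""
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord    # 1.0 = perfectly straight; larger = more tortuous

centerline = np.array([[0, 0], [1, 0.4], [2, -0.3], [3, 0.2], [4, 0]])
print(tortuosity(centerline))
```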

21 pages, 2048 KiB  
Article
GvdsSQL: Heterogeneous Database Unified Access Technology for Wide-Area Environments
by Jing Shang, Limin Xiao, Zhihui Wu, Jinqian Yang, Zhiwen Xiao, Jinquan Wang, Yifei Zhang, Xuguang Chen, Jibin Wang and Huiyang Li
Electronics 2024, 13(8), 1521; https://doi.org/10.3390/electronics13081521 - 17 Apr 2024
Viewed by 748
Abstract
In a wide area environment, leveraging a unified interface for the management of diverse databases is appealing. Nonetheless, variations in access and operation across heterogeneous databases pose challenges in abstracting a unified access model while preserving specific database operations. Simultaneously, intricate deployment and network conditions in wide-area environments create obstacles for forwarding database requests and achieving high-performance access. To address these challenges, this paper introduces a technology for unified access to heterogeneous databases in wide-area environments, termed Global Virtual Data Space SQL (GvdsSQL). Initially, this paper implements a unified data access mechanism for heterogeneous databases through metadata extraction, abstracts the unified access model, and accomplishes identification and forwarding of fundamental database operations. Secondly, the paper introduces a mechanism for expanding database operations through code generation. This mechanism achieves compatibility for special database operations by injecting rules to generate code. Lastly, this paper implements a multilevel caching mechanism for query results in wide-area databases utilizing semantic analysis. Through intelligent analysis of operation statements, it achieves precise management of cache items, enhancing wide-area access performance. The performance is improved by approximately 35% and 240% compared to similar methods. Full article
(This article belongs to the Section Artificial Intelligence)
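In its simplest form, the caching idea reduces to keying results on a normalized form of the query text; the sketch below shows one such level. GvdsSQL's semantic analysis and multilevel cache management go well beyond this normalization.

```python
# Single-level toy of a query-result cache keyed on normalized SQL.
# Real semantic analysis (as in GvdsSQL) goes much further than this.
import re

cache: dict[str, list] = {}

def normalize(sql: str) -> str:
    return re.sub(r"\s+", " ", sql.strip().lower()).rstrip(";")

def cached_query(sql: str, run) -> list:
    key = normalize(sql)
    if key not in cache:          # miss: pay the wide-area round trip
        cache[key] = run(sql)
    return cache[key]             # hit: served locally

rows = cached_query("SELECT  *\nFROM t;", lambda q: [("row1",)])
rows_again = cached_query("select * from t", lambda q: [("row1",)])  # hit
```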

17 pages, 980 KiB  
Article
Implementing Federated Governance in Data Mesh Architecture
by Anton Dolhopolov, Arnaud Castelltort and Anne Laurent
Future Internet 2024, 16(4), 115; https://doi.org/10.3390/fi16040115 - 29 Mar 2024
Viewed by 1595
Abstract
Analytical data platforms have been used for decades to improve organizational performance. They have evolved from data warehouses, used primarily for structured data processing, through data lakes, oriented towards raw data storage and post-hoc data analyses, to data lakehouses, which combine raw storage with business intelligence pre-processing to improve the platform’s efficacy. In recent years, however, a new architecture called Data Mesh has emerged. The main promise of this architecture is to remove the barriers between operational and analytical teams in order to boost the overall value extraction from big data. A number of attempts have been made to formalize and implement it in existing projects. Although defined as a socio-technical paradigm, data mesh still lacks the technology support to enable its widespread adoption. To overcome this limitation, we propose a new view of the platform requirements alongside a formal governance definition that we believe can help in the successful adoption of the data mesh. It is based on fundamental aspects such as decentralized data domains and federated computational governance. In addition, we also present a blockchain-based implementation of a mesh platform as a practical validation of our theoretical proposal. Overall, this article demonstrates a novel research direction for information system decentralization technologies. Full article
(This article belongs to the Special Issue Security in the Internet of Things (IoT))
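Federated computational governance can be pictured as globally agreed policies that every domain's data product must pass before publication; the policy names and product fields below are invented for illustration.

```python
# Toy federated-governance gate: global policies checked against a
# domain's data product. Policies and fields are invented examples.
GLOBAL_POLICIES = {
    "has_owner": lambda p: bool(p.get("owner")),
    "has_schema": lambda p: bool(p.get("schema")),
    "pii_tagged": lambda p: "pii_reviewed" in p.get("tags", []),
}

def violations(product: dict) -> list[str]:
    """Names of violated policies; empty list means publishable."""
    return [name for name, rule in GLOBAL_POLICIES.items()
            if not rule(product)]

product = {"owner": "sales-domain", "schema": "orders-v2", "tags": []}
print(violations(product))   # -> ['pii_tagged']
```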

17 pages, 16005 KiB  
Article
A Novel and Extensible Remote Sensing Collaboration Platform: Architecture Design and Prototype Implementation
by Wenqi Gao, Ninghua Chen, Jianyu Chen, Bowen Gao, Yaochen Xu, Xuhua Weng and Xinhao Jiang
ISPRS Int. J. Geo-Inf. 2024, 13(3), 83; https://doi.org/10.3390/ijgi13030083 - 8 Mar 2024
Viewed by 1623
Abstract
Geospatial data, especially remote sensing (RS) data, are of significant importance for public services and production activities. Expertise is critical in processing raw data, generating geospatial information, and acquiring domain knowledge for remote sensing applications. However, existing geospatial service platforms are oriented more towards professional users, both in the implementation process and in the final application. Building appropriate geographic applications for non-professionals remains a challenge. In this study, a geospatial data service architecture is designed that links desktop geographic information system (GIS) software and cloud-based platforms to construct an efficient user collaboration platform. Based on the scalability of the platform, four web apps with different themes are developed. Data in the fields of ecology, oceanography, and geology are uploaded to the platform by the users. In this pilot phase, the gap between non-specialized users and experts is successfully bridged, demonstrating the platform’s powerful interactivity and visualization. The paper finally evaluates the capability of building spatial data infrastructures (SDI) based on GeoNode and discusses the current limitations. The support for three-dimensional data, the improvement of metadata creation and management, and the fostering of an open geo-community are the next steps. Full article
