
Gustavo De Assis Costa

IFG, Computer Science, Faculty Member
  • I'm currently a Professor at the Federal Institute of Education, Science, and Technology of Goiás, Jataí campus, Brazil. I have a Ph.D. in Electronic Engineering and Computer Science...
  • João Gama
The concrete mixture design and mix proportioning procedure, along with its influence on the compressive strength of concrete, is a well-known problem in civil engineering that requires the execution of numerous tests. With the emergence of modern machine learning techniques, automating this process has become feasible. However, a significant volume of data is necessary to take advantage of existing models and algorithms. The recent literature presents different datasets, each with its own particularities, for training such models. In this paper, we integrated several of these existing datasets to improve training and, consequently, the models' results. Using this new dataset, we tested various models on the prediction task. The resulting dataset comprises 2358 records with seven input variables related to the mixture design, while the output represents the compressive strength of the concrete. The dataset was subjected to several pre-processing techniques, after which machine learning models, such as regressions, trees, and ensembles, were used to estimate the compressive strength. Several of these methods proved satisfactory for the prediction problem, with the best models achieving a coefficient of determination (R²) above 0.80. Furthermore, a website with the trained model was created, allowing professionals in the field to apply the technique to their everyday problems.
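To make the pipeline concrete, here is a minimal sketch of the kind of workflow the abstract describes, assuming an integrated CSV with seven mix-design inputs and a strength column; the file name, column names, and hyperparameters are assumptions for illustration, and the ensemble shown is just one of the model families evaluated in the paper.

```python
# A minimal sketch of the strength-prediction pipeline described above.
# File name and column names are assumptions, not the paper's actual dataset.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical integrated dataset: seven mix-design inputs plus the target.
df = pd.read_csv("concrete_integrated.csv")        # assumed file name
X = df.drop(columns=["compressive_strength"])      # assumed target column
y = df["compressive_strength"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# One of the ensemble families mentioned in the abstract; defaults are illustrative.
model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)

print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.3f}")
```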
In the last few years, many works have addressed Predictive Maintenance (PdM) with Machine Learning (ML) and Deep Learning (DL) solutions, especially the latter. The monitoring and logging of industrial equipment events, such as temporal behavior and fault events (anomaly detection in time series), can be obtained from records generated by sensors installed in different parts of an industrial plant. However, such progress is still incipient: many challenges remain, and the performance of an application depends on the appropriate choice of method. This article presents a survey of existing ML and DL techniques for handling PdM in the railway industry. The survey discusses the main approaches for this specific application within a taxonomy defined by the type of task, the employed methods, the evaluation metrics, the specific equipment or process, and the datasets. Lastly, we conclude and outline suggestions for future research.
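As a flavor of the simplest end of that spectrum, the sketch below is a rolling z-score detector for a single sensor series; it is an illustrative baseline only, not a method drawn from the survey, and the synthetic data, window size, and threshold are assumptions.

```python
# Illustrative baseline: flag sensor readings that deviate more than `thresh`
# standard deviations from a rolling mean. Not a method from the survey.
import numpy as np
import pandas as pd

def rolling_zscore_anomalies(series: pd.Series, window: int = 50, thresh: float = 3.0):
    """Return a boolean mask marking points far from the local rolling mean."""
    mean = series.rolling(window, min_periods=window).mean()
    std = series.rolling(window, min_periods=window).std()
    z = (series - mean) / std
    return z.abs() > thresh  # NaNs in the warm-up window compare as False

# Synthetic temperature log with one injected fault event.
rng = np.random.default_rng(0)
temp = pd.Series(20 + rng.normal(0, 0.5, 1000))
temp.iloc[700] += 10  # simulated spike
print(temp[rolling_zscore_anomalies(temp)])
```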
Expressive query capabilities can be achieved, among many other features, mainly through the traversal of links between different data sources. In this sense, the ability to aggregate data can be a differentiator for Linked Data search engines when crawling the Web of Data. However, there are many challenges, especially given the different types, structures, and vocabularies used on the Web. Beyond that, it is difficult to guarantee data quality, because data are usually incomplete, inconsistent, and contain outliers. To overcome some of these problems, many works have applied the task of Entity Resolution (ER) using different techniques and algorithms. In this paper we present an overview of the experience gained in constructing an approach to integrate datasets with the aim of improving search results over the LOD cloud. In addition to a general description of the approach and its main features, we present some preliminary results and a brief overview of ER and ...
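As a minimal illustration of pair-wise ER, the toy matcher below compares entity labels with standard-library string similarity; the URIs, labels, and threshold are assumptions for illustration and do not reflect the paper's actual approach.

```python
# A toy pair-wise matcher over entity labels, standard library only.
# Threshold and normalization are illustrative assumptions.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match(entities_a: dict, entities_b: dict, threshold: float = 0.85):
    """Yield candidate same-entity pairs whose labels are similar enough."""
    for uri_a, label_a in entities_a.items():
        for uri_b, label_b in entities_b.items():
            if label_similarity(label_a, label_b) >= threshold:
                yield uri_a, uri_b

ds1 = {"ex1:p1": "Tim Berners-Lee"}
ds2 = {"ex2:42": "Berners-Lee, Tim"}
print(list(match(ds1, ds2, threshold=0.5)))  # word order lowers the ratio
```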
The integration of different datasets in the Linked Data Cloud is a key aspect of the success of the Web of Data. To tackle this problem, most existing solutions rely on the task of entity resolution. However, many challenges remain, especially given the different types, structures, and vocabularies used on the Web. Another common problem is that data are usually incomplete, inconsistent, and contain outliers. To overcome these limitations, some works have applied machine learning algorithms, since they are typically robust to both noise and data inconsistencies and can efficiently exploit non-deterministic dependencies in the data. In this paper we propose an approach based on a relational learning algorithm that addresses the problem through a statistical approximation method. Modeling the problem as a relational machine learning task allows us to exploit contextual information that might be too distant in the relational graph. The joint application of relation...
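The intuition of mixing attribute evidence with relational context can be sketched as a weighted blend of label similarity and neighborhood overlap in the graph; this is not the paper's relational learning algorithm, and the entities, neighbor sets, and weight below are assumptions.

```python
# Sketch of the intuition only: blend attribute similarity with relational
# context (overlap of graph neighborhoods). The paper uses statistical
# relational learning; names, neighbor sets, and the weight are assumptions.
from difflib import SequenceMatcher

def attribute_sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def neighborhood_sim(na: set, nb: set) -> float:
    """Jaccard overlap of the entities' neighbors in the RDF graph."""
    return len(na & nb) / len(na | nb) if (na | nb) else 0.0

def relational_score(label_a, label_b, na, nb, alpha=0.6):
    """Weighted blend: alpha on attribute evidence, the rest on context."""
    return alpha * attribute_sim(label_a, label_b) + (1 - alpha) * neighborhood_sim(na, nb)

score = relational_score(
    "J. Gama", "Joao Gama",
    na={"ex:UPorto", "ex:DataStreams"},
    nb={"ex:UPorto", "ex:DataStreams", "ex:ML"},
)
print(f"{score:.2f}")
```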
Despite all the advances, one of the main challenges for the consolidation of the Web of Data is data integration, a key aspect of semantic web data management. Most solutions make use of entity resolution, the task of identifying and linking different manifestations of the same real-world object in one or more datasets. However, data are usually incomplete, inconsistent, and contain outliers; to overcome these limitations, it is necessary to exploit the existing patterns in the data as much as possible. One way to go beyond the commonly used technique of pair-wise matching is to explore the relationship structure between entities. Moreover, with billions of RDF triples being published on the Web, scale has become a problem, posing new challenges. Only recently have some works started to consider strategies that can deal with entity resolution on large-scale datasets. In this paper we describe a Map-Reduce strategy for a relational learning a...
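The shape of such a strategy can be imitated on a single machine: map each record to a blocking key, group by key (the shuffle), and compare only within each block in the reduce step. The records and key function below are assumptions, and the sketch is a didactic stand-in for the distributed job rather than a reproduction of it.

```python
# Single-machine imitation of a map/reduce flow for blocking-based entity
# resolution. Records and the blocking-key function are assumptions.
from collections import defaultdict
from itertools import combinations

records = [
    ("ex1:a", "Gustavo Costa"),
    ("ex2:b", "Gustavo A. Costa"),
    ("ex1:c", "Joao Gama"),
]

def map_phase(record):
    uri, name = record
    yield name.split()[-1].lower(), record  # blocking key: last name token

def reduce_phase(key, block):
    # Emit candidate pairs only inside a block, avoiding the full cross-product.
    for (u1, n1), (u2, n2) in combinations(block, 2):
        yield u1, u2

shuffle = defaultdict(list)          # stands in for the distributed shuffle
for record in records:
    for key, value in map_phase(record):
        shuffle[key].append(value)

for key, block in shuffle.items():
    for pair in reduce_phase(key, block):
        print(key, pair)
```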
This chapter presents an overview of standards and solutions for healthcare semantic models and the impact of remote monitoring devices and sensors in this domain. We present Electronic Health Record (EHR) systems, the main technologies used to address the interoperability issue, and the role of semantic models in this context. After that, we discuss connected objects in the health domain and the importance of semantic web technologies for achieving interoperability between devices. Interoperability has been a research challenge over the last few years, and its importance keeps growing with the increasing number of new devices and the variety of existing EHR systems. Semantic web tools have been widely adopted to address this problem, although challenges remain despite the advances already achieved. There are only a few proposals in the literature regarding interoperability between connected objects and EHR systems; a discussion of existing works is presented later in the chapter. Throughout the text, a use case highlights the importance of this approach, and we aim to give the reader some ideas about opportunities for further research.
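A small rdflib sketch of the bridging idea follows: a device observation is described with a sensor vocabulary and linked to a patient record. The namespaces are modelled loosely on the W3C SOSA ontology plus an invented EHR-style namespace; they are illustrative, not a standard mapping from the chapter.

```python
# Sketch of linking a connected-object observation to an EHR record with RDF.
# The EX namespace and the belongsToRecord property are invented for
# illustration; SOSA is the W3C sensor vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/ehr/")          # hypothetical EHR namespace

g = Graph()
obs = EX["obs1"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.observedProperty, EX.heartRate))
g.add((obs, SOSA.hasSimpleResult, Literal(72, datatype=XSD.integer)))
g.add((obs, EX.belongsToRecord, EX["patient42"]))  # link into the EHR side

print(g.serialize(format="turtle"))
```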