Journal of Engineering Technology and Applied Sciences
The objective of this paper is to review the state of the art of statistical relational learning (SRL) models developed to deal with machine learning and data mining in relational domains in the presence of missing, partially observed, and/or noisy data. It starts by giving a general overview of conventional graphical models, first-order logic, and inductive logic programming approaches as needed for background. The historical development of each key SRL model is critically reviewed. The study also focuses on the practical application of SRL techniques to a broad variety of areas and their limitations.
Collaborative Filtering Using Data Mining and Analysis
People increasingly make choices based on word of mouth, media, public opinion, surveys, etc. One of the most prominent techniques in recommender systems is collaborative filtering (CF), which exploits the known preferences of several users to develop recommendations for other users. CF suffers from limitations such as the new-item problem, the new-user problem, and data sparsity, which can be mitigated by employing statistical relational learning (SRL). This review chapter presents a comprehensive scientific survey, from basic and traditional techniques to state-of-the-art SRL algorithms applied to collaborative filtering problems. The authors provide a comprehensive review of SRL for CF tasks and demonstrate strong evidence that SRL can be successfully applied in the recommender systems domain. Finally, the chapter concludes with a summary of the key issues that SRL tackles in the collaborative filtering area and suggests further open issues in order t...
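To make the collaborative filtering setting concrete, here is a minimal user-based CF sketch: predict a user's rating for an unseen item as a similarity-weighted average of other users' ratings. The user names, items, ratings, and the choice of cosine similarity are illustrative assumptions, not taken from the chapter.

```python
# Minimal user-based collaborative filtering sketch (hypothetical data).
from math import sqrt

# Ratings: user -> {item: rating}; unrated items are simply absent (sparsity).
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 5, "d": 4},
    "carol": {"a": 1, "b": 5, "d": 2},
}

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(u[i] ** 2 for i in common)) * sqrt(sum(v[i] ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item):
    """Similarity-weighted average of the other users' ratings for `item`."""
    pairs = [(cosine(ratings[user], r), r[item])
             for name, r in ratings.items()
             if name != user and item in r]
    norm = sum(abs(s) for s, _ in pairs)
    return sum(s * x for s, x in pairs) / norm if norm else None

print(round(predict("alice", "d"), 2))  # alice has not rated item "d"
```

The new-user and new-item problems mentioned above show up here directly: with no overlapping ratings, `cosine` returns 0 and `predict` returns `None`, which is the gap relational approaches such as SRL aim to fill.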
This chapter proposes the application of machine learning techniques, based on first-order logic as a representation language, to the real-world application domain of document processing. First, the tasks and problems involved in document processing are presented, along with the prototypical system DOMINUS and its architecture, whose components are aimed at facing these issues. Then, a closer look is given to the learning component of the system and the two sub-systems in charge of performing supervised and unsupervised learning in support of the system's performance. Finally, some experiments are reported that assess the quality of the learning performance. This is intended to show researchers and practitioners of the field that first-order logic learning can be a viable solution for tackling the domain complexity and for solving problems such as the incremental evolution of the document repository.
European Conference on Artificial Intelligence, 2008
Many real-world applications of AI require both probability and first-order logic to deal with uncertainty and structural complexity. Logical AI has focused mainly on handling complexity, and statistical AI on handling uncertainty. Markov Logic Networks (MLNs) are a powerful representation that combines Markov Networks (MNs) and first-order logic by attaching weights to first-order formulas and viewing these as templates...
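The core MLN idea described above can be sketched in a few lines: a possible world's unnormalized probability is exp of the sum of the weights of the ground formulas it satisfies. The rule smokes(x) => cancer(x), its weight, and the two-person domain below are standard textbook-style illustrations, not taken from the paper.

```python
# Tiny Markov Logic illustration: P(world) = exp(w * n(world)) / Z,
# where n(world) counts satisfied groundings of the weighted rule.
# Rule, weight, and domain are hypothetical examples.
from itertools import product
from math import exp

people = ["anna", "bob"]
atoms = [f"smokes({p})" for p in people] + [f"cancer({p})" for p in people]

def n_satisfied(world):
    """Count groundings of smokes(x) => cancer(x) that hold in `world`."""
    return sum(1 for p in people
               if not world[f"smokes({p})"] or world[f"cancer({p})"])

w = 1.5  # rule weight; a higher weight makes the rule a stronger (soft) constraint

# Enumerate all possible worlds over the ground atoms.
worlds = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))]
scores = [exp(w * n_satisfied(wld)) for wld in worlds]

# Conditional query by brute-force enumeration: P(cancer(anna) | smokes(anna)).
num = sum(s for wld, s in zip(worlds, scores)
          if wld["smokes(anna)"] and wld["cancer(anna)"])
den = sum(s for wld, s in zip(worlds, scores) if wld["smokes(anna)"])
print(round(num / den, 3))  # -> 0.818, i.e. exp(1.5) / (exp(1.5) + 1)
```

Brute-force enumeration is exponential in the number of ground atoms, which is why real MLN systems rely on approximate inference rather than the loop above.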
Inducing concept descriptions from examples has been thoroughly tackled by symbolic machine learning methods. However, on-line learning methods that acquire concepts from examples distributed over time require great computational effort. This is due not only to the intrinsic complexity of the concept learning task, but also to the full-memory approach that most learning systems adopt. Indeed, during learning, most of these systems consider all their past examples, leading to expensive procedures for consistency verification. In this paper, we present an implementation of a partial-memory approach through an advanced data storage framework and show through experiments that great savings in learning times can be achieved. We also propose and experiment with different ways to select the past examples, paving the way for further research in on-line partial-memory learning agents.
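The full-memory versus partial-memory contrast above can be sketched as follows: a bounded store that retains only informative past examples instead of all of them. The capacity bound, the nearest-neighbour predictor, and the "keep only misclassified examples" policy are one plausible illustration, not the selection strategies studied in the paper.

```python
# Sketch of a partial-memory on-line learner: rather than keeping every
# past example (full memory), retain a bounded subset. Here an example is
# stored only if the current memory misclassifies it (hypothetical policy).
from collections import deque

class PartialMemoryLearner:
    def __init__(self, capacity=100):
        # deque with maxlen evicts the oldest example automatically.
        self.memory = deque(maxlen=capacity)

    def predict(self, x):
        """1-nearest-neighbour vote over the retained examples."""
        if not self.memory:
            return None
        nearest = min(self.memory, key=lambda ex: abs(ex[0] - x))
        return nearest[1]

    def learn(self, x, label):
        """Retain an example only when the current memory gets it wrong."""
        if self.predict(x) != label:
            self.memory.append((x, label))

learner = PartialMemoryLearner(capacity=10)
stream = [(1, "neg"), (9, "pos"), (2, "neg"), (8, "pos"), (1.5, "neg")]
for x, y in stream:
    learner.learn(x, y)

# Only 2 of the 5 streamed examples are retained, yet prediction still works.
print(len(learner.memory), learner.predict(8.5))  # -> 2 pos
```

The saving the paper reports comes from exactly this effect: consistency checks run against the small retained set rather than the entire example stream.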
Papers by Marenglen Biba