Decision tree induction is a useful approach for extracting classification knowledge from a set of instances. A considerable part of these instances is obtained from the formal analysis and modeling of human activities, which have a fuzzy nature. It is often the case that real-world tasks handled easily by humans are too difficult for machines; fuzzy logic allows us to describe this problem, and the fuzzy decision tree is a very popular method for fuzzy classification. We introduce the notion of cumulative information estimations based on an information-theoretic approach and use these estimations to synthesize different criteria for decision tree induction. These criteria allow us to produce a new type of tree. In this paper we introduce the Stable Ordered Fuzzy Decision Tree (FDT), which is oriented toward parallel and stable processing of input attributes with differing costs. Using this FDT allows us to realize a sub-optimal classification. Such classification ...
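The abstract does not reproduce the paper's cumulative information estimations, but the information-theoretic ingredient that fuzzy induction criteria build on can be sketched: class frequencies at a node are weighted by membership degrees rather than crisp counts. A minimal Python sketch (the function name `fuzzy_entropy` and the weighting scheme are illustrative assumptions, not the authors' exact criterion):

```python
import math

def fuzzy_entropy(memberships, labels):
    """Entropy of a fuzzy set of examples: class frequencies are
    weighted by membership degrees instead of crisp counts."""
    totals = {}
    for mu, y in zip(memberships, labels):
        totals[y] = totals.get(y, 0.0) + mu
    total = sum(totals.values())
    if total == 0:
        return 0.0
    return -sum((w / total) * math.log2(w / total)
                for w in totals.values() if w > 0)

# A pure node (one class, full membership) has zero entropy;
# an evenly split node has maximal entropy.
print(fuzzy_entropy([1.0, 1.0, 1.0], ["a", "a", "a"]))
print(fuzzy_entropy([1.0, 1.0], ["a", "b"]))
```

With crisp 0/1 memberships this reduces to the ordinary entropy used by classical decision tree induction, which is why fuzzy criteria of this family generalize the classical ones.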
Integrating heterogeneous data sources is a major challenge for most companies, organizations, and universities. Thanks to integration, every organization can deliver the most important and relevant information to every user (employee, manager, student, professor, ...) without great effort. Our paper describes what can be integrated into a portal and how easily it can be done. The main problem is to choose the most effective and functional technique for solving it. The paper documents the possibilities of integrating data sources when building large information systems that draw data from diverse and heterogeneous sources, and demonstrates selected integration approaches so that they can be presented in portlet solutions.
Classification rules are an important tool for discovering knowledge from databases. Integrating fuzzy logic algorithms into databases allows us to reduce the uncertainty connected with the stored data and to increase the accuracy of the discovered knowledge. In this paper, we analyze some possible variants of deriving classification rules from a given fuzzy decision tree based on cumulative information. We compare their classification accuracy with the accuracy reached by statistical methods and by other fuzzy classification rules.
EUROCON 2005 - The International Conference on "Computer as a Tool", 2005
The reliability of the multistate system is investigated in this paper. A new class of reliability indices, the dynamic reliability indices, is proposed. These indices estimate the influence of a system component's state on the reliability of the multistate system. Multiple-valued logic mathematical tools are used for the calculation of the dynamic reliability indices.
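The abstract does not define the indices themselves; the following Python sketch illustrates the kind of quantity such indices capture — how a change in one component's state shifts the probability that a multistate system stays at or above a given performance level. The two-component series system, the state distributions, and the `min` structure function are illustrative assumptions, not the paper's model:

```python
from itertools import product

# States of each component: 0 (failed) .. 2 (perfect).
state_probs = [
    {0: 0.1, 1: 0.3, 2: 0.6},   # component 1
    {0: 0.2, 1: 0.3, 2: 0.5},   # component 2
]

def structure(states):
    """Series multistate system: the system is only as good as
    its weakest component."""
    return min(states)

def availability(level, fix=None):
    """P(system state >= level); `fix=(i, s)` conditions on
    component i being in state s (a point-mass distribution)."""
    dists = list(state_probs)
    if fix is not None:
        i, s = fix
        dists[i] = {s: 1.0}
    prob = 0.0
    for combo in product(*[d.items() for d in dists]):
        states = [s for s, _ in combo]
        p = 1.0
        for _, q in combo:
            p *= q
        if structure(states) >= level:
            prob += p
    return prob

# Influence of component 1 degrading from state 2 to state 1
# on the probability of full system performance:
influence = availability(2, fix=(0, 2)) - availability(2, fix=(0, 1))
print(availability(2), influence)
```

Enumerating all state combinations is exponential in the number of components; the multiple-valued logic tools the paper refers to exist precisely to compute such influence measures without brute-force enumeration.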
2019 IEEE 15th International Scientific Conference on Informatics
Relational databases form the core part of data management in current information systems. The amount of data is still growing and the structure is increasingly complicated and evolving. If the data are bounded by a time dimension, the problem is even deeper. This paper deals with relational database system architecture and proposes our own techniques for optimizing data location and access to tuples, so that relevant data can be obtained effectively and in good time. The proposed solution is based on limiting the need for full table scans. Thanks to that, performance is significantly improved, since in our solution an index approach can always be used, as each data row is delimited by the primary key definition.
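As a minimal illustration (not the authors' system) of why primary-key access avoids the full table scan the abstract refers to, SQLite's query planner can be inspected from Python; the table name and data here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(1000)])

# Lookup by primary key: the planner uses the integer primary key
# (the rowid) to jump straight to the row instead of scanning the table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT value FROM readings WHERE id = 42"
).fetchall()
print(plan)  # the plan reports a SEARCH on the primary key, not a SCAN
```

A predicate on a non-indexed column (e.g. `WHERE value = 21.0`) would instead produce a `SCAN` plan, which is exactly the cost the paper's row-delimiting technique is designed to avoid.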
Cost-effective maintenance with preventive replacement of the oldest components
2020 18th International Conference on Emerging eLearning Technologies and Applications (ICETA), 2020
Data are used in a large number of different fields, and large databases with huge amounts of data come to the fore. They are a very important part of many information systems, from commercial systems, through technical and technological systems, the web, and mobile applications, to the management of scientific data in various fields. Fast access to data is therefore becoming increasingly important, and great emphasis is placed on improving it. Initially, data access time may increase only slightly with smaller tasks, but with a larger number of larger tasks the access time grows significantly and needs to be reduced. From the point of view of efficiency, it is neither appropriate nor necessary to access all the data; it is therefore useful to divide the data into smaller parts, creating partitions that facilitate the execution of certain operations and bring efficiency in terms of both time and performance. This paper discusses partitioning, its various techniques and methods, and the benefits it brings. It compares access times to tables with and without partitions, for various numbers of accessed table parts.
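The core idea of partitioning described above — a query touches only the parts it needs — can be sketched in plain Python. The year-based range key and the data are illustrative assumptions; real systems such as Oracle or PostgreSQL declare partitions in the table definition and prune them automatically:

```python
from collections import defaultdict

# Rows keyed by year; in a partitioned table each year's rows
# would live in a separate physical segment.
rows = [(year, f"event-{i}") for i, year in
        enumerate([2018, 2019, 2019, 2020, 2020, 2020])]

partitions = defaultdict(list)          # partition key: year
for year, payload in rows:
    partitions[year].append(payload)

def query(year):
    """Partition pruning: only the matching bucket is scanned,
    not the whole row set."""
    scanned = partitions.get(year, [])
    return len(scanned), scanned

count, _ = query(2020)
print(count)  # 3 rows scanned instead of all 6
```

The access-time comparison the paper performs measures exactly this effect at database scale: the fewer partitions a query must visit, the closer its cost is to the size of the relevant data rather than the size of the whole table.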
The European Grid Initiative (EGI) represents an effort to realize a sustainable grid infrastructure in Europe and beyond. Based on the requirements of the user communities and by combining the strength and views of the National Grid Initiatives (NGI), EGI is expected to deliver the next step towards a permanent and common grid infrastructure. The effort is currently driven by the EGI Design Study, an FP7-funded project defining the structure and functionality of the future EGI and providing support to the NGIs in their evolution. The goal is the setup of an organizational model, with the EGI Organization (EGI.org) as the glue between the national efforts, which provides seamless access to grid resources for all application domains.
In this paper we introduce the concept of a unified data access framework. The main aim of our work is to enable unified data access at the international level for educational, commercial, and security purposes. The idea of unified access is important today mainly for national and international security, international labor policy, market restrictions, and disease prevention.
The article concerns the distribution of database fragments based on a mathematical model and a criterion function that incorporates the influence of transaction processing and the parallelism of the database system. It also addresses variants (according to the definition of the constraints) with regard to the replication of database fragments. This solution makes it possible to revise the distribution based on the actual values contained in the database.
Papers by Karol Matiaško