Proceedings of the Winter International Symposium on Information and Communication Technologies, Jan 5, 2004
... The two dynamic agent types that we include in our architecture are Mobile agents (MA) and Watchdogs (WA). ... Managing Reliability of Communication in Ad-Hoc Network Tier: We suggest a Markov model and an optimization approach to manage trustable ...
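The abstract mentions a Markov model for managing the reliability of ad-hoc network communication. A minimal sketch of that kind of model is a two-state (up/down) discrete-time Markov chain for a single link; the transition probabilities and the closed-form availability below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical two-state Markov model of an ad-hoc link: each time step the
# link fails with probability p (up -> down) and recovers with probability
# q (down -> up). The steady-state availability follows from balance:
# pi_up * p = pi_down * q, with pi_up + pi_down = 1.

def steady_state_availability(p: float, q: float) -> float:
    """Long-run fraction of time the link is up: q / (p + q)."""
    return q / (p + q)

# Illustrative values: failure prob. 0.02 per step, recovery prob. 0.3.
print(steady_state_availability(0.02, 0.3))  # about 0.9375
```

An optimization layer, as the abstract suggests, could then pick routes that maximize the product of per-link availabilities.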
2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), 2018
Big Data (BD) management and analysis has become a centerpiece of serious research because of its potential to offer many benefits to a broad spectrum of science, engineering, business, and service communities. The full realization of these benefits depends upon understanding the challenges faced by BD and creating a data-centric computing paradigm to overcome them. This paper explores a few issues along this direction.
2021 IEEE Global Engineering Education Conference (EDUCON), 2021
Data Visualization addresses the use of graphics to obtain or transmit knowledge in an easier and faster way; this is its main, and in many cases only, purpose. Since their invention, data graphics have evolved and many techniques have been developed. In recent decades, with the definition and evolution of Data Science, Data Visualization has come to be used profusely: on one side, the Data Science Body of Knowledge (DS-BoK) defines five knowledge area groups that should be taught when learning Data Science, and in all of them Data Visualization plays a main role, for reasons that differ across knowledge areas; on the other side, all Data Science development environments, open source or proprietary, include tools for performing data visualizations. This paper presents the results of research carried out with the main objective of improving the teaching of data visualization in two ways: proposing a new system to classify the large number of different graphical techniques for presenting data that can be found in the literature; and analyzing, using different attributes, nearly all of the most important tools, open source and proprietary, that are available to develop data graphics, mainly from a data visualization teaching point of view.
2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE), 2018
In order to provide a solid and stable platform for Big Data (BD) management and application development, a generic model of BD needs to be agreed upon. There is no firm consensus on what defines BD, although a broad spectrum of competing, sometimes conflicting, ideas in terms of "V"s has been put forth. In this paper, we provide an exhaustive survey of the V-models of BD characteristics proposed by several academic and industry researchers, and motivate a formal approach to selecting independent Vs that can model BD sufficiently well in any application sector.
The main objective of this research is a rigorous investigation of an architectural approach for developing and evolving reactive autonomic (self-managing) systems, and for continuously monitoring their quality. In this paper, we draw upon our research experience and that of other autonomic computing researchers to discuss the main aspects of the Autonomic Systems Timed Reactive Model (AS-TRM) architecture and demonstrate its reactive, distributed, and autonomic computing nature. To our knowledge, ours is the first attempt to model reactive behavior in autonomic systems.
Category theory is considered to be a suitable means for verifying consistency of process communications between design and implementation of concurrent systems. In this paper, certain features of a proposed categorical framework for the verification are studied by using a Client/Server example. In particular, Communicating Sequential Processes (CSP), Erasmus, abstraction, and category theory are used to verify the consistency of process communications between design and different implementations of the example.
Although the success of big data technologies depends highly on the quality of the underlying data, no standard measurement model has yet been established for quantitatively assessing the quality of big data. This research aims at investigating thoroughly the quality of big data and laying rigorous foundations for its theoretically valid measurement. We recently proposed a quality measurement hierarchy for a methodically selected 10 V's of big data, based on existing ISO/IEC standards and NIST (National Institute of Standards and Technology) definitions and taxonomies. In this paper, pursuant to our latest research, we derive a measurement information model for the most widely used 3 V's of big data: Volume, Velocity and Variety. The proposed 3 V's measures, organized into a hierarchy of 3 indicators, 2 derived measures and 4 base measures, are validated theoretically based on the representational theory of measurement. Our future research will enhance the theoretical findings presented in...
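The abstract describes a measurement information model layered into base measures, derived measures, and indicators, in the style of ISO/IEC measurement standards. The sketch below illustrates that layering only; the concrete measure names, formulas, and decision thresholds are illustrative assumptions, not the paper's actual 4 base / 2 derived / 3 indicator definitions.

```python
# Hypothetical base -> derived -> indicator hierarchy for Volume, Velocity
# and Variety. All names and thresholds are assumed for illustration.

# Base measures: raw observations of the data set (assumed).
base = {
    "record_count": 1_000_000,       # Volume
    "byte_size": 8_000_000_000,      # Volume
    "arrival_window_s": 60,          # Velocity
    "distinct_formats": 3,           # Variety
}

# Derived measures: functions combining base measures.
derived = {
    "avg_record_bytes": base["byte_size"] / base["record_count"],
    "ingestion_rate": base["record_count"] / base["arrival_window_s"],
}

# Indicators: measures interpreted against decision criteria (assumed).
indicators = {
    "volume_ok": base["byte_size"] < 10_000_000_000,
    "velocity_ok": derived["ingestion_rate"] > 10_000,
    "variety_ok": base["distinct_formats"] <= 5,
}
print(indicators)  # all three indicators are True for these values
```

The point of the layering is that only base measures touch raw data; everything above them is a pure function, which is what makes the hierarchy amenable to theoretical validation.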
The evolution of mobile technology has greatly affected our lives by helping us perform our daily tasks more efficiently and effectively. It includes recent advances in voice-enabled technology empowered by artificial intelligence, machine learning and natural language processing. On the other hand, this evolution has created a problem for low- and post-literate populations in terms of understanding and using mobile devices with complex functionality. We intend to increase technology acceptance and the quality-in-use of mobile user interfaces (MUI) for low- and post-literate users by applying a text-free approach combined with voice as a service. A new technology acceptance evaluation model, UTAUT-QiU, is proposed in this paper with the aim of assessing both user acceptance of text-free MUI and MUI quality-in-use.
The Lean approach strongly promotes value stream between production steps in order to improve software development processes. The main focus of a Lean approach is to identify and eliminate process waste, called "muda": non-value-added activities must be eliminated to constantly reduce the overall cycle time. The literature proposes solutions for mitigating waste in Lean manufacturing and Lean software development. However, an approach covering the measurement process for identifying and eliminating measurement-related muda is missing. In order to address this, we drew a parallel between software development waste types and software measurement activities within the metric ecosystem. We focused on using the concept of waste as a lens for identifying non-value-producing measurement process elements. To achieve this, we constructed a waste identification approach through which we identified eight software measurement wastes, and proposed guidelines for measurement waste id...
Big Data is quickly becoming a chief part of the decision-making process in both industry and academia. As more and more institutions begin relying on Big Data to make strategic decisions, the quality of the underlying data comes into question. The quality of Big Data is not always transparent, and large-scale systems may even lack visibility into it, which adversely affects the credibility of Big Data systems. Continuous monitoring and measurement of data quality is therefore paramount in assessing whether the information can serve its purpose in a particular context (such as Big Data analytics). This research addresses the need for Big Data quality measurement modeling and automation by proposing a novel conceptual quality measurement framework for Big Data (MEGA) with the purpose of assessing the underlying quality characteristics of Big Data (also known as the V's of Big Data) at each step of Big Data pipelines. The theoretical quality measurement models for four of the Big Data V's (Volume, Variety, Velocity, Veracity) are currently automated; the remaining six V's (Vincularity, Validity, Value, Volatility, Valence and Vitality) will be tackled in our future work. The approach is illustrated on a case study.
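The abstract describes measuring quality characteristics at each step of a Big Data pipeline. A minimal sketch of that idea is shown below; the stage name, the two checks, the record schema, and the thresholds are all illustrative assumptions, not MEGA's actual measures.

```python
# Hypothetical per-stage quality measurement for a Big Data pipeline.
# Each check evaluates one assumed quality characteristic on the batch
# entering a stage; the report maps stage -> check -> pass/fail.
from typing import Callable, List

def volume_check(batch: List[dict]) -> bool:
    # Volume: enough records to be representative (assumed threshold).
    return len(batch) >= 3

def variety_check(batch: List[dict]) -> bool:
    # Variety: every record exposes the expected fields (assumed schema).
    return all({"id", "value"} <= set(record) for record in batch)

def measure_stage(stage: str, batch: List[dict],
                  checks: List[Callable[[List[dict]], bool]]) -> dict:
    """Run every quality check on the batch entering one pipeline stage."""
    return {stage: {check.__name__: check(batch) for check in checks}}

batch = [{"id": 1, "value": 10}, {"id": 2, "value": 20}, {"id": 3, "value": 30}]
report = measure_stage("ingestion", batch, [volume_check, variety_check])
print(report)  # both checks pass for this batch
```

Repeating `measure_stage` after each transformation step yields the kind of continuous, per-step quality monitoring the abstract calls for.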