The growth of the Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. The International Journal of Distributed and Parallel Systems (IJDPS) is a bimonthly open-access, peer-reviewed journal that aims to publish high-quality scientific papers arising from original research and development by the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology in an interactive and friendly, yet strongly professional, atmosphere.
The Science Information Network (SINET) is a Japanese academic backbone network serving more than 800 universities and research institutions. SINET traffic is characteristically enormous and highly variable.
In simultaneous multithreaded systems, several pipeline resources are shared among multiple threads concurrently; among these shared resources are the register file and the write buffer. The physical register file is a critical shared resource in these systems due to the limited number of rename registers available for renaming.
The 3rd International Conference on Computer Science, Engineering and Artificial Intelligence (CSEAI 2025) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Science, Computer Engineering and AI. The conference seeks significant contributions to all major fields of Computer Science, Engineering and AI, in both theoretical and practical aspects. Its aim is to provide a platform for researchers and practitioners from academia and industry to meet and share cutting-edge developments in the field.
The 3rd International Conference on Computing and Information Technology (CITE 2025) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Science, Computer Engineering and Information Technology.
In this paper, the author discusses the risks of using various wireless communications and how to use them safely. The paper also covers the future of the wireless industry, wireless communication security, and the protection methods and techniques that could help organizations establish secure wireless connections with their employees, as well as other essential factors to consider when manufacturing, selling, or using wireless networks and wireless communication systems.
The 17th International Conference on Networks & Communications (NeCoM 2025) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Computer Networks & Data Communications. The conference seeks significant contributions to computer networks and communications, for wired and wireless networks, in both theoretical and practical aspects. Original papers are invited on computer networks, network protocols and wireless networks, data communication technologies, and network security. The aim of the conference is to provide a platform for researchers and practitioners from academia and industry to meet and share cutting-edge developments in the field.
Wireless networks have become increasingly important in today's digital age. The combination of VLC and RF technologies, such as local Wi-Fi and Li-Fi networks, has emerged as a significant development in this field: it covers each network's weak points and reinforces its strong points, resulting in higher data rates and broader coverage. However, load balancing is a crucial issue for network efficiency, especially when multiple access points from both networks are present, since which access point each node selects at any given time can significantly affect network performance. This research proposes a game-theoretic method for selecting an appropriate access point, which uses multiple stages of computation and policy changes to reach a Nash equilibrium in the game and, consequently, in the network. The results show that this method improves the quality of service in the local network by over 6% compared to previous methods such as the fuzzy method, and by over 20% compared to the highest-signal-power selection policy.
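The equilibrium-seeking loop the abstract describes can be illustrated with simple best-response dynamics. This is a toy sketch, not the paper's method: the utility function (link gain divided by one plus the access point's load) and every name below are assumptions.

```python
def best_response_equilibrium(gains, max_rounds=100):
    """gains[n][a]: link quality of node n to access point a (hypothetical).
    Each node repeatedly switches to the AP that maximizes its own
    utility; when no node wants to switch, the choices form a pure
    Nash equilibrium."""
    n_nodes, n_aps = len(gains), len(gains[0])
    choice = [0] * n_nodes                       # start: everyone on AP 0
    for _ in range(max_rounds):
        load = [choice.count(a) for a in range(n_aps)]
        changed = False
        for n in range(n_nodes):
            load[choice[n]] -= 1                 # node n leaves its AP
            best = max(range(n_aps),
                       key=lambda a: gains[n][a] / (1 + load[a]))
            changed |= best != choice[n]
            choice[n] = best
            load[best] += 1                      # and joins the best one
        if not changed:                          # no profitable deviation
            break
    return choice

# Two nodes prefer opposite APs, one is indifferent; the dynamics settle.
aps = best_response_equilibrium([[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]])
```

With utilities like this the loop terminates quickly, but best-response dynamics are not guaranteed to converge for every game, which is presumably why the paper's multi-stage policy-change procedure matters.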
Load-balancing techniques have become a critical function in cloud storage systems, which consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate and performance of any load-balancing algorithm deteriorate as the number of nodes in the system, the diameter of the network, and the communication overhead increase. This paper therefore presents an approach that aims to scale the system out, not up: the system can be expanded by adding more nodes, without increasing the power of each node, while the overall performance of the system still improves. Our proposal also improves performance not only by considering the parameters that affect the algorithm's performance but also by simplifying the structure of the network that executes it. The proposal was evaluated through mathematical analysis and computer simulations, and compared with the centralized approach and the original diffusion technique; the results show that our solution outperforms both in terms of throughput and response time. Finally, we prove that the proposal converges to an equilibrium in which the relative loads of all in-domain nodes are equal, since each node receives an amount of load proportional to its capacity. We therefore conclude that the approach is fair and simple, and that no node is privileged.
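The claimed equilibrium, in which every node ends up holding load proportional to its capacity, can be illustrated with a minimal capacity-weighted diffusion sketch. The edge weighting and constants below are assumptions for illustration, not the paper's algorithm.

```python
def diffuse(load, cap, edges, alpha=0.5, rounds=200):
    """Capacity-weighted diffusion on an undirected graph. Each edge
    (i, j) shifts load toward equalizing the relative loads load/cap;
    the harmonic weight w makes one edge application shrink the ratio
    gap by exactly (1 - alpha). Total load is conserved because the
    same amount leaves i and arrives at j."""
    load = list(load)
    for _ in range(rounds):
        for i, j in edges:
            w = cap[i] * cap[j] / (cap[i] + cap[j])
            d = alpha * w * (load[i] / cap[i] - load[j] / cap[j])
            load[i] -= d
            load[j] += d
    return load

# All 12 units start on the weakest node of a 3-node path; at the fixed
# point each node's load is proportional to its capacity.
final = diffuse([12.0, 0.0, 0.0], [1.0, 2.0, 3.0], [(0, 1), (1, 2)])
# final approaches [2, 4, 6]
```

At the fixed point every ratio load[i]/cap[i] is equal, so the loads are exactly capacity-proportional, matching the fairness property the abstract proves.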
Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation, and active tomography consumes explicit probing traffic, which limits scalability. To address the first issue, we propose a novel Delay Correlation Estimation methodology, named DCE, that needs neither synchronization nor special cooperation. For the second issue, we develop a passive realization mechanism that uses only regular data flows, with no explicit bandwidth consumption. Extensive simulations in OMNeT++ evaluate its accuracy and show that DCE measurements closely match the true values. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic loads and packet sizes.
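Why a correlation-based estimator can dispense with clock synchronization is visible in a small sketch: a constant receiver clock offset shifts every delay sample by the same amount and cancels when the mean is subtracted. The delay series below are synthetic, and this is a generic Pearson correlation, not the DCE estimator itself.

```python
from math import sqrt

def delay_correlation(dx, dy):
    """Pearson correlation of two one-way delay series. A constant
    clock offset adds the same value to every sample, so it vanishes
    in the mean-subtraction and leaves the correlation unchanged."""
    n = len(dx)
    mx, my = sum(dx) / n, sum(dy) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(dx, dy)) / n
    vx = sum((a - mx) ** 2 for a in dx) / n
    vy = sum((b - my) ** 2 for b in dy) / n
    return cov / sqrt(vx * vy)

shared = [1.0, 3.0, 2.0, 5.0, 4.0]     # delay contributed by a shared link
d1 = [s + 0.5 for s in shared]          # path 1: shared delay + private part
d2 = [s + 10.0 for s in shared]         # path 2: badly offset clock, same shape
```

Despite the 10-unit offset between the two receivers' clocks, the correlation of d1 and d2 is 1, because both series are affine functions of the shared-link delay.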
Routing is crucial in Internet communication and is based on routing protocols. A routing protocol defines the rules that routers use to share information between a source and a destination. Routing protocols do not themselves move data from source to destination; instead, they update the routing tables that hold that information. Many routing protocols are available today, falling into two categories that serve the same goal: static and dynamic. Dynamic routing is carried out automatically: routers exchange topology-based updates, and routing tables are updated when the topology changes. In this study, we examine and analyze RIP, EIGRP, and OSPF along with the associated research, and provide a practical analysis report by designing and implementing numerous LAN topology scenarios in the GNS-3 emulator (Graphical Network Simulator-3). Because large commercial networks are designed with a variety of routing protocols, network routers must implement route redistribution so that the network remains connected. This research develops three phases on the same network topology and assesses the performance of route redistribution across three protocol pairs: RIP with EIGRP in the first phase, EIGRP with OSPF in the second, and RIP with OSPF in the third. We also analyze the compatibility of the two versions of the Routing Information Protocol on the designed topology to assess how the two versions interact; since, as we know, Version 1 and Version 2 do not interoperate directly, this points to a way out when the same problem arises with EIGRP, OSPF, or BGP.
Finally, we design and set up the LAN architecture in GNS-3 to evaluate how RIP supports only subnetted networks while EIGRP supports major networks.
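When the same prefix is learned from several redistributed protocols, a router installs the route from the source with the lowest administrative distance. The Cisco defaults (EIGRP internal 90, OSPF 110, RIP 120) are used in the toy route-selection model below; the function and data are illustrative, not part of the study.

```python
# Default administrative distances (Cisco convention); these are
# per-vendor preferences, not part of the routing protocols themselves.
AD = {"EIGRP": 90, "OSPF": 110, "RIP": 120}

def best_routes(learned):
    """learned: (prefix, protocol, next_hop) entries from all protocols.
    For each prefix, keep the entry whose source protocol has the
    lowest administrative distance."""
    table = {}
    for prefix, proto, hop in learned:
        if prefix not in table or AD[proto] < AD[table[prefix][0]]:
            table[prefix] = (proto, hop)
    return table

rib = best_routes([
    ("10.0.0.0/24", "RIP",   "192.0.2.1"),
    ("10.0.0.0/24", "OSPF",  "192.0.2.2"),   # wins: 110 < 120
    ("10.0.1.0/24", "EIGRP", "192.0.2.3"),
])
```

This preference step is what makes redistribution order-sensitive in practice: the protocol pairing in each experimental phase determines which learned routes actually reach the forwarding table.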
Trust in online environments is based on beliefs in the trustworthiness of a trustee, which comprises three distinct dimensions: integrity, ability, and benevolence. Zimbabwe has slowly adopted the Internet of Things (IoT) for smart agriculture as a way of improving food security in the country, but most farmers remain hesitant, citing trust issues, since crops, animals, and farm equipment would be monitored online by connecting several devices and accessing their data. Farmers find it difficult to trust that the technology can perform as expected in a specific situation or complete a required task, i.e. that it will work consistently and reliably in monitoring the environment, nutrients, temperatures, and equipment status, and that the integrity of the collected data will hold, as the data will be used for decision making. There is thus a growing need to determine how trust in the technology influences the adoption of IoT for smart agriculture in Zimbabwe. A mixed methodology was used to gather data from 50 randomly sampled A2 model farmers in Zimbabwe. The findings reveal that the McKnight et al. trust-in-technology model can be used to influence the adoption of IoT through trust that the technology will be reliable and will operate as expected. Additional constructs, such as security and distrust of technology, can serve as references for future research.
In a simultaneous multithreaded system, a core's pipeline resources are sometimes partitioned and otherwise shared among numerous active threads. One shared resource is the write buffer, which acts as an intermediary between a store instruction's retirement from the pipeline and the store value being written to cache: it takes a completed store from the load/store queue and eventually writes the value to the level-one data cache. With a write-allocate cache policy, a buffered store must remain in the write buffer until its cache block is present in the level-one data cache; this latency varies from a single clock cycle (a level-one cache hit) to several hundred clock cycles (a cache miss). This paper shows that cache misses routinely dominate the write buffer's resources and deny cache hits from being written to memory, degrading the performance of simultaneous multithreaded systems. The paper proposes a technique that reduces this denial of resources to cache hits by limiting the number of cache misses that may concurrently reside in the write buffer, and shows that system performance can be improved with this technique.
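The proposed cap on in-flight misses can be sketched abstractly as an admission policy for the write buffer. This toy ignores store ordering and timing, both of which a real write buffer must respect, and every name and parameter below is an assumption rather than the paper's mechanism.

```python
def admit_stores(stores, buffer_size, miss_cap):
    """stores: a queue of retired stores, each tagged "hit" or "miss".
    Admit stores into a write buffer of `buffer_size` entries, but let
    at most `miss_cap` long-latency misses occupy the buffer at once,
    so slow misses cannot crowd out fast hits.
    (Illustrative only: a real buffer cannot reorder stores freely.)"""
    admitted, misses = [], 0
    for kind in stores:
        if len(admitted) == buffer_size:
            break                       # buffer full: the rest must stall
        if kind == "miss":
            if misses == miss_cap:
                continue                # miss quota reached: hold this miss
            misses += 1
        admitted.append(kind)
    return admitted
```

With four pending misses ahead of four hits and a 4-entry buffer, an uncapped buffer fills entirely with misses and every hit stalls; capping misses at two leaves room for two hits to drain quickly, which is the effect the paper exploits.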
In the recent past, big data opportunities have gained much momentum as a means to enhance knowledge management in organizations. However, because of its properties of high volume, variety, and velocity, big data can no longer be effectively stored and analyzed with traditional data management techniques to generate value for knowledge development. New technologies and architectures are therefore required to store and analyze this data through advanced analytics, and in turn generate vital real-time knowledge for effective organizational decision making. More specifically, a single infrastructure is needed that provides the common functionality of knowledge management and is flexible enough to handle different types of big data and analysis tasks. Cloud computing infrastructures capable of storing and processing large volumes of data can be used for efficient big data processing because they minimize the initial cost of the large-scale computing infrastructure demanded by big data analytics. This paper explores the impact of big data analytics on knowledge management and proposes a cloud-based conceptual framework that can analyze big data in real time to facilitate enhanced decision making for competitive advantage. The framework thus paves the way for organizations to explore the relationship between big data analytics and knowledge management, which are mostly treated as two distinct fields.
MapReduce is a distributed computing model for processing massive data in the cloud; it simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism: it evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the CloudSim simulation platform, the paper verifies the stability of the task scheduling algorithm and the scheduling model.
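A trustworthiness model of this general flavor can be sketched as an exponential moving average over observed task outcomes, with a greedy scheduler that favors the most trusted nodes. The formulas, constants, and names below are illustrative assumptions, not the paper's model.

```python
def update_trust(trust, node, success, alpha=0.3):
    """Exponential moving average over task outcomes: a success pulls a
    node's score toward 1.0, a failure toward 0.0 (illustrative model)."""
    outcome = 1.0 if success else 0.0
    return {**trust, node: (1 - alpha) * trust[node] + alpha * outcome}

def schedule(tasks, trust):
    """Greedy assignment: each task goes to the currently most trusted
    node, whose score is then slightly discounted to spread load."""
    assignment, t = {}, dict(trust)
    for task in tasks:
        node = max(t, key=t.get)
        assignment[task] = node
        t[node] *= 0.9                  # discourage piling onto one node
    return assignment

plan = schedule(["t1", "t2", "t3"], {"n1": 0.9, "n2": 0.5, "n3": 0.8})
```

Here the unreliable node n2 receives no work, which is precisely the behavior that avoids the wasted re-executions the abstract describes; after each task completes, update_trust would feed the outcome back into the scores.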
The Science Information Network (SINET) is a Japanese academic backbone network for more than 800... more The Science Information Network (SINET) is a Japanese academic backbone network for more than 800 universities and research institutions. The characteristic of SINET traffic is that it is enormous and highly variable. I
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
In simultaneous multithreaded systems, there are several pipeline resources that are shared among... more In simultaneous multithreaded systems, there are several pipeline resources that are shared amongst multiple threads concurrently. Some of these mutual resources to mention are the register-file and the write buffer. The Physical Register file is a critical shared resource in these types of systems due to the limited number of rename registers available for renaming.
3rd International Conference on Computer Science, Engineering and Artificial Intelligence (CSEAI ... more 3rd International Conference on Computer Science, Engineering and Artificial Intelligence (CSEAI 2025) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science, Computer Engineering and AI. The Conference looks for significant contributions to all major fields of the Computer Science, Engineering and AI in theoretical, practical aspects. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
3rd International Conference on Computing and Information Technology (CITE 2025) will provide an ... more 3rd International Conference on Computing and Information Technology (CITE 2025) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science, Computer Engineering and Information Technology.
In this paper, the author discusses the concerns of using various wireless communications and how... more In this paper, the author discusses the concerns of using various wireless communications and how to use them safely. The author also discusses the future of the wireless industry, wireless communication security, protection methods, and techniques that could help organizations establish a secure wireless connection with their employees. The author also discusses other essential factors to learn and note when manufacturing, selling, or using wireless networks and wireless communication systems.
17th International Conference on Networks & Communications (NeCoM 2025) will provide an excellent... more 17th International Conference on Networks & Communications (NeCoM 2025) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Networks & Data Communications. The conference looks for significant contributions to the Computer Networks & Communications for wired and wireless networks in theoretical and practical aspects. Original papers are invited on computer Networks, Network Protocols and Wireless Networks, Data communication Technologies, and Network Security. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
The growth of Internet and other web technologies requires the development of new algorithms and... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bimonthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
Wireless networks have become increasingly important in today's digital age. The combination of V... more Wireless networks have become increasingly important in today's digital age. The combination of VLC and RF technologies, such as local Wi-Fi and Li-Fi networks, has emerged as a significant technology in this field. This combination enhances the coverage of weak points and strengthens the strong points of local wireless networks, resulting in higher data rates and increased coverage. However, load balancing is a crucial issue that can enhance network efficiency, especially when there are multiple access points from both networks. The selection of the appropriate access point at any given time by different nodes can significantly impact network performance. This research proposes a game theory-based method for selecting an appropriate access point, which uses multiple stages of computation and policy changes to achieve a Nash equilibrium in the game and subsequently in the network. The results show that this method can improve the quality of service in the local network by over 6% compared to previous methods such as the fuzzy method and by over 20% compared to the higher signal power selection policy.
Load-balancing techniques have become a critical function in cloud storage systems that consist o... more Load-balancing techniques have become a critical function in cloud storage systems that consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm as well as its performance deteriorated as the number of nodes in the system, the diameter of the network and the communication overhead increased. Therefore, this paper presents an approach aims at scaling the system out not up - in other words, allowing the system to be expanded by adding more nodes without the need to increase the power of each node while at the same time increasing the overall performance of the system. Also, our proposal aims at improving the performance by not only considering the parameters that will affect the algorithm performance but also simplifying the structure of the network that will execute the algorithm. Our proposal was evaluated through mathematical analysis as well as computer simulations, and it was compared with the centralized approach and the original diffusion technique. Results show that our solution outperforms them in terms of throughput and response time. Finally, we proved that our proposal converges to the state of equilibrium where the loads in all in-domain nodes are the same since each node receives an amount of load proportional to its capacity. Therefore, we conclude that this approach would have an advantage of being fair, simple and no node is privileged
Tomography is important for network design and routing optimization. Prior approaches require eit... more Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit probing resulting in limited scalability. To address the first issue we propose a novel Delay Correlation Estimation methodology named DCE with no need of synchronization and special cooperation. For the second issue we develop a passive realization mechanism merely using regular data flow without explicit bandwidth consumption. Extensive simulations in OMNeT++ are made to evaluate its accuracy where we show that DCE measurement is highly identical with the true value. Also from test result we find that mechanism of passive realization is able to achieve both regular data transmission and purpose of tomography with excellent robustness versus different background traffic and package size.
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
Routing is crucial in internet “communication” and is based on routing protocols. The routing pro... more Routing is crucial in internet “communication” and is based on routing protocols. The routing protocol outlines the rules that routers use to share information between a source and a destination. In contrast, they do not move data from a source to the destination, but instead update the routing table containing data or, as we say, messages or information. Many routing protocols are available today, but they all serve the same goal-static and dynamic routing protocols. Dynamic routing is carried out automatically. Topology-based updates are made to routers, and routing tables are updated when topology changes. As a part of this research study, we will look at and analyze the protocols along with other associated research of RIP, EIGRP, and OSPF. In this study, we provide a practical analysis report by designing and implementing numerous LAN topology scenarios using the emulator (Graphical Network Simulator-3). Because of the proliferation of enormous commercial networks; their design uses a variety of routing protocols. so that a large network can remain connected; Network routers are required to implement route redistribution. This research develops the three phases on the same designed network topology and assesses the presentation of route redistribution across three routing technologies. RIP, EIGRP is the first phase, EIGRP, OSPF is the second, and RIP, OSPF is the third. This research also analyses the compatibility of the two separate versions of routing information protocol on the designed network topology in order to assess how two versions may interact with one other. This offers us the notion that there is a way out of it when the same problem emerges associated to EIGRP, OSPF, or BGP if protocols, as we know, Version-1 and Version-2 do not interact to one other. 
In this research, we also design the network lAN architecture and setup by utilising GNS-3 in order to evaluate how rip supports merely subnetted networks and eigrp supports major networks.
Trust in online environments is based on beliefs in the trustworthiness of a trustee, which is composed of three distinct dimensions: integrity, ability, and benevolence. Zimbabwe has been slow to adopt the Internet of Things (IoT) for smart agriculture as a way of improving food security in the country; most farmers remain hesitant, citing trust issues, since monitoring of crops, animals, and farm equipment would be done online by connecting several devices and accessing their data. Farmers find it difficult to trust that the technology has the ability to perform as expected in a specific situation or to complete a required task, i.e. that it will work consistently and reliably in monitoring the environment, nutrients, temperatures, and equipment status, and that the integrity of the collected data will hold, since the data will be used for decision making. There is a growing need to determine how trust in the technology influences the adoption of IoT for smart agriculture in Zimbabwe. A mixed methodology was used to gather data from 50 A2-model farmers randomly sampled in Zimbabwe. The findings revealed that the McKnight et al. trust-in-technology model can be used to influence the adoption of IoT through trust that the technology will be reliable and will operate as expected. Additional constructs such as security and distrust of technology can serve as reference points for future research.
The growth of the Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. The International Journal of Distributed and Parallel Systems (IJDPS) is a bimonthly open-access, peer-reviewed journal that aims to publish high-quality scientific papers arising from original research and development by the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology in an interactive and friendly, yet strongly professional, atmosphere.
In a simultaneous multithreaded system, a core’s pipeline resources are sometimes partitioned and otherwise shared amongst numerous active threads. One such shared resource is the write buffer, which acts as an intermediary between a store instruction’s retirement from the pipeline and the store value being written to cache. The write buffer takes a completed store instruction from the load/store queue and eventually writes the value to the level-one data cache. Once a store is buffered with a write-allocate cache policy, the store must remain in the write buffer until its cache block is in the level-one data cache. This latency may vary from as little as a single clock cycle (in the case of a level-one cache hit) to several hundred clock cycles (in the case of a cache miss). This paper shows that cache misses routinely dominate the write buffer’s resources and deny cache hits from being written to memory, thereby degrading the performance of simultaneous multithreaded systems. This paper proposes a technique to reduce denial of resources to cache hits by limiting the number of cache misses that may concurrently reside in the write buffer, and shows that system performance can be improved by using this technique.
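To make the starvation effect concrete, here is a deliberately simplified toy model, our own construction rather than the paper's simulator or parameters: two SMT threads share one write buffer, one thread retiring only missing stores and the other only hitting stores, and resident misses can be capped:

```python
# Toy write-buffer model (hypothetical latencies and sizes): capping the
# number of resident cache misses keeps entries free for short-latency hits.

def cycles_to_drain_hits(n_misses, n_hits, buffer_size, miss_cap,
                         miss_lat=100, hit_lat=1):
    """Thread A retires only missing stores (miss_lat cycles each), thread B
    only hitting stores (hit_lat). At most one store per thread is admitted
    per cycle, and at most miss_cap misses may be resident at once. Returns
    the cycle at which all of B's hits have drained to the cache."""
    buf = []                      # resident entries: [kind, cycles_left]
    a_left, b_left = n_misses, n_hits
    cycle = 0
    while b_left or any(k == "hit" for k, _ in buf):
        cycle += 1
        misses_in = sum(1 for k, _ in buf if k == "miss")
        if a_left and len(buf) < buffer_size and misses_in < miss_cap:
            buf.append(["miss", miss_lat]); a_left -= 1
        if b_left and len(buf) < buffer_size:
            buf.append(["hit", hit_lat]); b_left -= 1
        for entry in buf:         # every resident store drains one cycle
            entry[1] -= 1
        buf = [e for e in buf if e[1] > 0]
    return cycle

capped = cycles_to_drain_hits(50, 20, buffer_size=4, miss_cap=3)
uncapped = cycles_to_drain_hits(50, 20, buffer_size=4, miss_cap=4)
print(capped, "vs", uncapped)   # capped drains hits far sooner
```

With no cap, the long-latency misses monopolize all four entries and the hitting thread starves; reserving even one entry lets its stores drain almost immediately, mirroring the paper's argument.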
In the recent past, big data opportunities have gained considerable momentum for enhancing knowledge management in organizations. However, because of its properties of high volume, variety, and velocity, big data can no longer be effectively stored and analyzed with traditional data management techniques to generate value for knowledge development. Hence, new technologies and architectures are required to store and analyze this big data through advanced data analytics and, in turn, generate vital real-time knowledge for effective decision making by organizations. More specifically, a single infrastructure is needed that provides the common functionality of knowledge management and is flexible enough to handle different types of big data and big data analysis tasks. Cloud computing infrastructures capable of storing and processing large volumes of data can be used for efficient big data processing, because they minimize the initial cost of the large-scale computing infrastructure demanded by big data analytics. This paper explores the impact of big data analytics on knowledge management and proposes a cloud-based conceptual framework that can analyze big data in real time to facilitate enhanced decision making for competitive advantage. The framework will thus pave the way for organizations to explore the relationship between big data analytics and knowledge management, which are mostly treated as two distinct fields.
MapReduce is a distributed computing model for processing massive data in cloud computing; it simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, however, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism: it evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the CloudSim simulation platform, the stability of the task scheduling algorithm and the scheduling model are verified.
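The core idea can be sketched in a few lines. This is our own minimal illustration, with invented names and a much simpler trust estimate than the paper's model: track each node's success history, derive a trust score, schedule onto the most trustworthy node, and re-execute on failure:

```python
# Minimal reliability-aware scheduling sketch (hypothetical construction):
# trust = smoothed success rate; failed tasks are recovered by re-execution.

import random

class Node:
    def __init__(self, name, reliability):
        self.name = name
        self.reliability = reliability  # hidden true success probability
        self.ok = self.fail = 0

    def trust(self):
        # success-rate estimate with a weak prior so new nodes start at 0.5
        return (self.ok + 1) / (self.ok + self.fail + 2)

    def run(self, rng):
        success = rng.random() < self.reliability
        if success: self.ok += 1
        else:       self.fail += 1
        return success

def schedule(tasks, nodes, rng, max_retries=3):
    """Place each task on the currently most-trusted node; on failure,
    re-execute (failure recovery) up to max_retries times."""
    completed = 0
    for _ in range(tasks):
        for _ in range(max_retries + 1):
            node = max(nodes, key=lambda n: n.trust())
            if node.run(rng):
                completed += 1
                break
    return completed

rng = random.Random(42)
nodes = [Node("n1", 0.95), Node("n2", 0.50)]
print(schedule(200, nodes, rng), "of 200 tasks completed")
```

As the trust scores diverge from observed outcomes, the scheduler steers tasks away from the unreliable node, which is the behavior the paper's strategy aims to achieve at cluster scale.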
The conference looks for significant contributions to all major fields of Computer Science, Engineering, and AI, in both theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
Load balancing is a key concern in complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm, as well as its performance, deteriorates as the number of nodes in the system, the diameter of the network, and the communication overhead increase. Therefore, this paper presents an approach that aims to scale the system out, not up: in other words, it allows the system to be expanded by adding more nodes, without the need to increase the power of each node, while at the same time increasing the overall performance of the system. Our proposal also improves performance by not only considering the parameters that affect the algorithm's performance but also simplifying the structure of the network that executes the algorithm. The proposal was evaluated through mathematical analysis as well as computer simulations, and it was compared with the centralized approach and the original diffusion technique. Results show that our solution outperforms both in terms of throughput and response time. Finally, we prove that our proposal converges to a state of equilibrium in which each in-domain node holds an amount of load proportional to its capacity. We therefore conclude that the approach has the advantage of being fair and simple, with no node privileged.
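The capacity-proportional equilibrium described above can be sketched with a standard diffusion iteration. This is our own compact illustration under simplifying assumptions (synchronous rounds, a fixed four-node ring, invented capacities), not the paper's algorithm:

```python
# Capacity-aware diffusion sketch: each round, a node pushes load toward any
# neighbour whose load-per-capacity is lower, until load/capacity equalizes.

def diffuse(load, capacity, neighbours, rounds=200, alpha=0.25):
    """load/capacity: one entry per node; neighbours[i] lists i's peers."""
    load = load[:]
    for _ in range(rounds):
        flows = [0.0] * len(load)
        for i, nbrs in enumerate(neighbours):
            for j in nbrs:
                # positive diff -> node i is relatively more loaded than j
                diff = load[i] / capacity[i] - load[j] / capacity[j]
                if diff > 0:
                    amount = (alpha * diff * capacity[i] * capacity[j]
                              / (capacity[i] + capacity[j]))
                    flows[i] -= amount
                    flows[j] += amount
        load = [l + f for l, f in zip(load, flows)]
    return load

# 4-node ring, unequal capacities, all 80 units of load starting on node 0
caps = [1.0, 2.0, 1.0, 4.0]
nbrs = [[1, 3], [0, 2], [1, 3], [2, 0]]
final = diffuse([80.0, 0.0, 0.0, 0.0], caps, nbrs)
print([round(x, 2) for x in final])  # ≈ load proportional to capacity
```

For a two-node exchange, this update shrinks the load-per-capacity gap by the factor (1 − alpha) each round, so the iteration contracts toward the fair equilibrium the abstract describes while conserving total load.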
Network tomography typically requires precise time synchronization or complex cooperation between nodes. Furthermore, active tomography consumes explicit probing traffic, resulting in limited scalability. To address the first issue, we propose a novel Delay Correlation Estimation methodology, named DCE, that needs neither synchronization nor special cooperation. For the second issue, we develop a passive realization mechanism that merely uses regular data flows, without explicit bandwidth consumption. Extensive simulations in OMNeT++ were conducted to evaluate its accuracy, and we show that the DCE measurement closely matches the true value. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic loads and packet sizes.
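To convey the intuition behind delay-correlation tomography, here is a small synthetic sketch. It is our own construction with invented delay distributions, not the paper's estimator: two receivers behind a shared link see a common delay component, so the covariance of their end-to-end delays estimates the delay variance of the shared segment, without any clock synchronization:

```python
# Synthetic delay-correlation sketch: covariance of two end-to-end delay
# series recovers the variance of their shared-path component.

import random

def covariance(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(7)
n = 50_000
shared = [rng.gauss(10.0, 2.0) for _ in range(n)]   # shared-link delay, var = 4
d1 = [s + rng.gauss(5.0, 1.0) for s in shared]      # receiver 1, independent tail
d2 = [s + rng.gauss(3.0, 1.5) for s in shared]      # receiver 2, independent tail

est = covariance(d1, d2)
print(round(est, 2))   # ≈ variance of the shared component (≈ 4)
```

Because only second-order statistics of the delay samples are used, the estimate is insensitive to constant clock offsets at the receivers, which hints at why a correlation-based scheme can dispense with synchronization.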