The growth of the Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. The International Journal of Distributed and Parallel Systems (IJDPS) is a bimonthly open-access, peer-reviewed journal that aims to publish high-quality scientific papers arising from original research and development by the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology in an interactive and friendly, yet strongly professional, atmosphere.
The 17th International Conference on Networks & Communications (NeCoM 2025) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of computer networks and data communications. The conference seeks significant contributions to computer networks and communications, for both wired and wireless networks, in their theoretical and practical aspects. Original papers are invited on computer networks, network protocols and wireless networks, data communication technologies, and network security. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
Wireless networks have become increasingly important in today's digital age. The combination of VLC and RF technologies, such as local Wi-Fi and Li-Fi networks, has emerged as a significant development in this field: it covers each technology's weak points and reinforces the strong points of local wireless networks, resulting in higher data rates and wider coverage. However, load balancing is a crucial issue for network efficiency, especially when multiple access points from both networks are present, since the access point each node selects at any given time can significantly impact network performance. This research proposes a game-theoretic method for selecting an appropriate access point, which uses multiple stages of computation and policy changes to reach a Nash equilibrium in the game and, consequently, in the network. The results show that this method improves quality of service in the local network by over 6% compared to previous methods such as the fuzzy method, and by over 20% compared to the highest-signal-power selection policy.
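The abstract does not reproduce the paper's utility functions, but its core idea, nodes repeatedly switching to the access point that currently offers them the best payoff until no node can improve (a Nash equilibrium), can be sketched as a best-response loop. The capacity-sharing payoff below is an illustrative assumption, not the paper's model:

```python
import random

def best_response_ap_selection(num_nodes, ap_rates, max_rounds=100):
    """Iterate best responses until no node wants to switch (Nash equilibrium).

    ap_rates[a] is the nominal capacity of access point a; a node's payoff
    is assumed to be its share of that capacity, rate / (users on the AP).
    """
    choice = [random.randrange(len(ap_rates)) for _ in range(num_nodes)]
    for _ in range(max_rounds):
        changed = False
        for n in range(num_nodes):
            load = [choice.count(a) for a in range(len(ap_rates))]

            def payoff(a):
                # If node n moved to AP a, it would share with load[a] users
                # (itself already counted when a is its current AP).
                users = load[a] + (0 if a == choice[n] else 1)
                return ap_rates[a] / users

            best = max(range(len(ap_rates)), key=payoff)
            if payoff(best) > payoff(choice[n]) + 1e-9:  # strict improvement only
                choice[n] = best
                changed = True
        if not changed:
            return choice  # no unilateral improvement possible
    return choice
```

Because only strict improvements trigger a switch, the loop terminates at an equilibrium assignment; with two equal-capacity access points and four nodes, for example, it settles on a 2/2 split.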
Load-balancing techniques have become a critical function in cloud storage systems, which consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm, as well as its performance, deteriorates as the number of nodes, the diameter of the network, and the communication overhead increase. Therefore, this paper presents an approach that aims to scale the system out, not up: the system can be expanded by adding more nodes, without increasing the power of each node, while still increasing overall performance. Our proposal also improves performance not only by considering the parameters that affect the algorithm's performance, but also by simplifying the structure of the network that executes it. The proposal was evaluated through mathematical analysis as well as computer simulations, and compared with the centralized approach and the original diffusion technique; results show that it outperforms both in throughput and response time. Finally, we prove that our proposal converges to an equilibrium in which the loads of all in-domain nodes are balanced, since each node receives an amount of load proportional to its capacity. We therefore conclude that the approach is fair and simple, and that no node is privileged.
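The paper's own diffusion variant is not reproduced in the abstract; as a hedged illustration of the equilibrium it describes, a capacity-weighted diffusion sweep, in which neighbouring nodes exchange load in proportion to the difference in their normalised load (load/capacity), converges to a state where every node holds load proportional to its capacity. The ring topology, weights, and constants below are illustrative assumptions:

```python
def diffusion_step(loads, capacities, adjacency, alpha=0.5):
    """One Jacobi-style diffusion round: for each edge, move load from the
    node with higher utilisation (load/capacity) to the one with lower,
    so utilisation equalises while total load is conserved."""
    new = loads[:]
    for i in range(len(loads)):
        for j in adjacency[i]:
            if j > i:  # handle each undirected edge once
                diff = loads[i] / capacities[i] - loads[j] / capacities[j]
                # Edge weight: harmonic combination of the two capacities.
                w = capacities[i] * capacities[j] / (capacities[i] + capacities[j])
                flow = alpha * diff * w
                new[i] -= flow
                new[j] += flow
    return new

# 4-node ring, unequal capacities, all load initially on node 0.
caps = [1.0, 2.0, 1.0, 2.0]
loads = [60.0, 0.0, 0.0, 0.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(200):
    loads = diffusion_step(loads, caps, ring)
# Equilibrium: load_i proportional to capacity_i -> [10, 20, 10, 20]
```

Total load is conserved at every step, and the fixed point is exactly the capacity-proportional distribution the abstract's fairness claim refers to.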
Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation, and active tomography consumes explicit probing traffic, which limits scalability. To address the first issue, we propose a novel Delay Correlation Estimation methodology, named DCE, which needs neither synchronization nor special cooperation. For the second issue, we develop a passive realization mechanism that uses only regular data flows, with no explicit bandwidth consumption. Extensive simulations in OMNeT++ evaluate its accuracy and show that DCE measurements closely match the true values. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic loads and packet sizes.
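Why correlation-based tomography needs no clock synchronisation can be illustrated with a standard covariance identity: a constant receiver clock offset shifts a delay series' mean but not its covariance, so the shared-link delay variance remains recoverable. The toy two-receiver tree and its parameters below are assumptions for illustration, not the paper's setup:

```python
import random

def delay_covariance(d1, d2):
    """Sample covariance of two end-to-end delay series. Constant clock
    offsets at the receivers shift the means but cancel out here."""
    n = len(d1)
    m1 = sum(d1) / n
    m2 = sum(d2) / n
    return sum((a - m1) * (b - m2) for a, b in zip(d1, d2)) / (n - 1)

# Two receivers behind one shared link: each end-to-end delay is the
# shared-link delay plus an independent branch delay.
random.seed(1)
shared = [random.gauss(10, 2) for _ in range(5000)]      # Var = 4
d1 = [x + random.gauss(5, 1) for x in shared]
d2 = [x + random.gauss(7, 1) + 3.0 for x in shared]      # +3.0: clock offset
est = delay_covariance(d1, d2)  # estimates Var(shared) despite the offset
```

The covariance of the two series equals the variance of the shared-link delay, and the deliberate 3.0-unit receiver offset has no effect on the estimate.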
Routing is crucial in Internet communication and is based on routing protocols. A routing protocol defines the rules that routers use to share information between a source and a destination; routing protocols do not themselves move data from source to destination, but instead maintain the routing table that holds this information. Many routing protocols are available today, falling into two categories that serve the same goal: static and dynamic. Dynamic routing is carried out automatically: routers exchange topology-based updates, and routing tables are updated when the topology changes. In this study, we examine and analyze RIP, EIGRP, and OSPF alongside related research, and we provide a practical analysis report by designing and implementing numerous LAN topology scenarios in the GNS-3 emulator (Graphical Network Simulator-3). Because large commercial networks are designed with a variety of routing protocols, network routers must implement route redistribution so that the network remains connected. This research develops three phases on the same network topology and assesses the performance of route redistribution across three protocol pairs: RIP with EIGRP in the first phase, EIGRP with OSPF in the second, and RIP with OSPF in the third. We also analyze the compatibility of the two versions of the Routing Information Protocol on the designed topology, since RIPv1 and RIPv2 do not interoperate directly; this suggests an approach to apply when the same problem arises with EIGRP, OSPF, or BGP.
In this research, we also design and set up the LAN architecture in GNS-3 in order to evaluate how RIP supports only subnetted networks while EIGRP supports major networks.
Trust in online environments is based on beliefs in the trustworthiness of a trustee, which comprises three distinct dimensions: integrity, ability, and benevolence. Zimbabwe has slowly adopted the Internet of Things (IoT) for smart agriculture as a way of improving food security in the country, though most farmers are hesitant, citing trust issues, since monitoring of crops, animals, and farm equipment would be done online by connecting several devices and accessing data. Farmers find it difficult to trust that the technology has the ability to perform as expected in a specific situation or to complete a required task, i.e., that it will work consistently and reliably in monitoring the environment, nutrients, temperatures, and equipment status, and that the integrity of the collected data will hold, since the data will be used for decision making. There is a growing need to determine how trust in the technology influences the adoption of IoT for smart agriculture in Zimbabwe. A mixed methodology was used to gather data from 50 A2-model farmers randomly sampled in Zimbabwe. The findings revealed that the McKnight et al. trust-in-technology model can be used to influence the adoption of IoT through trust that the technology will be reliable and will operate as expected. Additional constructs, such as security and distrust of technology, can serve as references for future research.
In a simultaneous multithreaded (SMT) system, a core's pipeline resources are sometimes partitioned and otherwise shared among numerous active threads. One shared resource is the write buffer, which acts as an intermediary between a store instruction's retirement from the pipeline and the store value being written to cache. The write buffer takes a completed store instruction from the load/store queue and eventually writes the value to the level-one data cache. With a write-allocate cache policy, a buffered store must remain in the write buffer until its cache block is in the level-one data cache; this latency varies from as little as a single clock cycle (for a level-one cache hit) to several hundred clock cycles (for a cache miss). This paper shows that cache misses routinely dominate the write buffer's resources and prevent cache hits from being written to memory, thereby degrading the performance of SMT systems. We propose a technique that reduces this denial of resources to cache hits by limiting the number of cache misses that may concurrently reside in the write buffer, and we show that system performance improves as a result.
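The paper's mechanism is not specified in the abstract beyond capping concurrent misses; a minimal sketch of such an admission policy (the function name and the entry encoding are assumptions made for illustration) is:

```python
def can_enter_write_buffer(buffer, is_miss, capacity, miss_limit):
    """Admission check for a retiring store: it needs a free write-buffer
    slot, and a store that missed in the L1 data cache is additionally
    held back once `miss_limit` misses are already buffered, so that
    long-latency misses cannot monopolise the buffer and starve hits."""
    if len(buffer) >= capacity:
        return False  # buffer full: stall regardless of hit or miss
    if is_miss and buffer.count('miss') >= miss_limit:
        return False  # miss cap reached: only hits may still enter
    return True
```

With the cap in place, a burst of long-latency misses leaves slots available for hits, which drain in a cycle or two, instead of letting the misses occupy every entry.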
In the recent past, big data opportunities have gained much momentum for enhancing knowledge management in organizations. However, because of properties such as high volume, variety, and velocity, big data can no longer be effectively stored and analyzed with traditional data management techniques to generate value for knowledge development. Hence, new technologies and architectures are required to store and analyze this data through advanced analytics and, in turn, generate vital real-time knowledge for effective organizational decision making. More specifically, a single infrastructure is needed that provides the common functionality of knowledge management while remaining flexible enough to handle different types of big data and analysis tasks. Cloud computing infrastructures capable of storing and processing large volumes of data can be used for efficient big data processing, because they minimize the initial cost of the large-scale computing infrastructure that big data analytics demands. This paper explores the impact of big data analytics on knowledge management and proposes a cloud-based conceptual framework that can analyze big data in real time to facilitate enhanced decision making for competitive advantage. The framework thus paves the way for organizations to explore the relationship between big data analytics and knowledge management, which are mostly treated as two distinct entities.
MapReduce is a distributed computing model for processing massive data in cloud computing, and it simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, however, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism: it evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the CloudSim simulation platform, we verify the stability of the task scheduling algorithm and the scheduling model.
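The trustworthiness model itself is not given in the abstract; as an illustrative sketch (the scoring formula and the round-robin assignment rule are assumptions, not the paper's strategy), a scheduler can rank nodes by a smoothed historical success rate and hand tasks to the most reliable nodes first:

```python
def schedule_by_trust(tasks, nodes, history):
    """Greedy reliability-aware assignment: tasks are dealt out over the
    nodes ranked by trust, where trust is the Laplace-smoothed success
    rate taken from history = {node: (successes, failures)}."""
    def trust(node):
        s, f = history.get(node, (0, 0))
        return (s + 1) / (s + f + 2)  # smoothed so unseen nodes score 0.5

    ranked = sorted(nodes, key=trust, reverse=True)
    # Round-robin over the ranked list so the most trusted node gets
    # the first task, the next most trusted the second, and so on.
    return {task: ranked[i % len(ranked)] for i, task in enumerate(tasks)}
```

Re-executions then concentrate on the rare failures of high-trust nodes rather than the frequent failures of low-trust ones, which is the waste the paper targets.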
Programming requests with GreatFree is an efficient technique for implementing distributed polling in the cloud computing environment. GreatFree is a distributed programming environment through which diverse distributed systems can be established by programming rather than by configuring or scripting. GreatFree emphasizes the importance of programming, since it offers developers the opportunity to leverage their distributed-systems knowledge and programming skills, and programming is the unique way to construct creative, adaptive, and flexible systems that accommodate various distributed computing environments. With the support of GreatFree's code-level Distributed Infrastructure Patterns, Distributed Operation Patterns, and APIs, this otherwise difficult procedure is accomplished in a programmable, rapid, and highly patterned manner; that is, the programming behaviors are simplified to the repeatable operation of copy, paste, and replace. Since distributed polling is one of the fundamental techniques for constructing distributed systems, GreatFree provides developers with the relevant APIs and patterns to program requests and responses in this novel programming environment.
Big data has introduced the challenge of storing and processing large volumes of data (text, images, and videos). Centralized exploitation of massive data on a single node is now outdated, leading to the emergence of distributed storage, parallel processing, and hybrid distributed-storage/parallel-processing frameworks. The main objective of this paper is to evaluate the load-balancing and task-allocation strategy of our hybrid distributed storage and parallel processing framework, CLOAK-Reduce. To achieve this goal, we first took a theoretical approach to the architecture and operation of some DHT-based MapReduce systems. We then compared, by simulation, the data collected on their load-balancing and task-allocation strategies. Finally, the simulation results show that CLOAK-Reduce C5R5 replication provides better load-balancing efficiency for MapReduce job submission, with 10% churn or no churn.
Load-balancing techniques have become a critical function in cloud storage systems that consist o... more Load-balancing techniques have become a critical function in cloud storage systems that consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm as well as its performance deteriorated as the number of nodes in the system, the diameter of the network and the communication overhead increased. Therefore, this paper presents an approach aims at scaling the system out not up - in other words, allowing the system to be expanded by adding more nodes without the need to increase the power of each node while at the same time increasing the overall performance of the system. Also, our proposal aims at improving the performance by not only considering the parameters that will affect the algorithm performance but also simplifying the structure of the network that will execute the algorithm. Our proposal was evaluated through mathematical analysis as well as computer simulations, and it was compared with the centralized approach and the original diffusion technique. Results show that our solution outperforms them in terms of throughput and response time. Finally, we proved that our proposal converges to the state of equilibrium where the loads in all in-domain nodes are the same since each node receives an amount of load proportional to its capacity. Therefore, we conclude that this approach would have an advantage of being fair, simple and no node is privileged
The Internet of Things (IoT) is a new paradigm for the development of ubiquitous computing that enables billions of heterogeneous devices to connect with each other and with the cloud. IoT is used in many different applications, from smart homes to big data analytics. However, the IoT infrastructure is vulnerable to security threats such as data breaches, man-in-the-middle attacks, and malicious actors. Furthermore, energy consumption is an important factor in IoT networks, as many IoT devices are battery-powered and need to conserve energy. Machine learning techniques are increasingly used to enhance both the security and the energy efficiency of IoT networks: machine learning algorithms can detect malicious activities and protect against them, identify anomalies that may indicate potential security threats, and optimize energy consumption by predicting energy demands and adjusting the network accordingly.
Knowledge graphs are an innovative paradigm for encoding, accessing, combining, and interpreting data from heterogeneous and multimodal sources, going beyond a simple combination of technologies. In a short time, knowledge graphs have become an important component of modern search engines, intelligent assistants, and business intelligence. Geospatial data science, cognitive neuroscience, and machine learning come together in geospatial knowledge graphs (GeoKGs), which represent spatiality, attributes, and interactions. These knowledge graphs can serve many geospatial applications, including geographic information retrieval, geospatial interoperability, and knowledge discovery in geographic information systems. Nevertheless, geospatial knowledge graphs rarely reach their full potential in geospatial and downstream applications, since most conventional data warehouses and system elements in KGs fail to account for the specialty of geographical information. A geospatial graph's linear relationship between measured and true values cannot effectively represent the bias component when measuring the linearity of GeoKGs. A measurement technique is linear when the relationship between the measured and true values is a linear function of the data that can be verified analytically; this matters because it allows data to be extended linearly across points. When the measurement system is linear, a linear fit of the geographic knowledge graph characterizes the relationship; otherwise, the connection is represented by a polynomial approximation. We compare a linear polynomial approximation with a linear data fit to evaluate linearity.
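The linearity evaluation the abstract gestures at can be made concrete with an ordinary least-squares line and its residual: a near-zero residual means the measurement relation is effectively linear, while a large one calls for a polynomial approximation instead. This pure-Python sketch is illustrative, not the paper's procedure:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a*x + b through the points."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def linearity_residual(xs, ys):
    """Root-mean-square deviation of the data from its best straight
    line; near zero means the relation is effectively linear."""
    a, b = linear_fit(xs, ys)
    sq = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return (sq / len(xs)) ** 0.5
```

For data generated by y = 2x + 1 the residual vanishes, while for y = x² it stays large, flagging the need for a higher-degree model.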
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere
17th International Conference on Networks & Communications (NeCoM 2025) will provide an excellent... more 17th International Conference on Networks & Communications (NeCoM 2025) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Networks & Data Communications. The conference looks for significant contributions to the Computer Networks & Communications for wired and wireless networks in theoretical and practical aspects. Original papers are invited on computer Networks, Network Protocols and Wireless Networks, Data communication Technologies, and Network Security. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
The growth of Internet and other web technologies requires the development of new algorithms and... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bimonthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
Wireless networks have become increasingly important in today's digital age. The combination of V... more Wireless networks have become increasingly important in today's digital age. The combination of VLC and RF technologies, such as local Wi-Fi and Li-Fi networks, has emerged as a significant technology in this field. This combination enhances the coverage of weak points and strengthens the strong points of local wireless networks, resulting in higher data rates and increased coverage. However, load balancing is a crucial issue that can enhance network efficiency, especially when there are multiple access points from both networks. The selection of the appropriate access point at any given time by different nodes can significantly impact network performance. This research proposes a game theory-based method for selecting an appropriate access point, which uses multiple stages of computation and policy changes to achieve a Nash equilibrium in the game and subsequently in the network. The results show that this method can improve the quality of service in the local network by over 6% compared to previous methods such as the fuzzy method and by over 20% compared to the higher signal power selection policy.
Load-balancing techniques have become a critical function in cloud storage systems that consist o... more Load-balancing techniques have become a critical function in cloud storage systems that consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm as well as its performance deteriorated as the number of nodes in the system, the diameter of the network and the communication overhead increased. Therefore, this paper presents an approach aims at scaling the system out not up - in other words, allowing the system to be expanded by adding more nodes without the need to increase the power of each node while at the same time increasing the overall performance of the system. Also, our proposal aims at improving the performance by not only considering the parameters that will affect the algorithm performance but also simplifying the structure of the network that will execute the algorithm. Our proposal was evaluated through mathematical analysis as well as computer simulations, and it was compared with the centralized approach and the original diffusion technique. Results show that our solution outperforms them in terms of throughput and response time. Finally, we proved that our proposal converges to the state of equilibrium where the loads in all in-domain nodes are the same since each node receives an amount of load proportional to its capacity. Therefore, we conclude that this approach would have an advantage of being fair, simple and no node is privileged
Tomography is important for network design and routing optimization. Prior approaches require eit... more Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit probing resulting in limited scalability. To address the first issue we propose a novel Delay Correlation Estimation methodology named DCE with no need of synchronization and special cooperation. For the second issue we develop a passive realization mechanism merely using regular data flow without explicit bandwidth consumption. Extensive simulations in OMNeT++ are made to evaluate its accuracy where we show that DCE measurement is highly identical with the true value. Also from test result we find that mechanism of passive realization is able to achieve both regular data transmission and purpose of tomography with excellent robustness versus different background traffic and package size.
The growth of Internet and other web technologies requires the development of new algorithms and ... more The growth of Internet and other web technologies requires the development of new algorithms and architectures for parallel and distributed computing. International journal of Distributed and parallel systems is a bi monthly open access peer-reviewed journal aims to publish high quality scientific papers arising from original research and development from the international community in the areas of parallel and distributed systems. IJDPS serves as a platform for engineers and researchers to present new ideas and system technology, with an interactive and friendly, but strongly professional atmosphere.
Routing is crucial in Internet communication and is based on routing protocols. A routing protocol defines the rules that routers use to share information between a source and a destination. Routing protocols do not themselves move data from source to destination; instead, they update the routing tables that hold this information. Many routing protocols are available today, and they all serve the same goal; they fall into two classes, static and dynamic. Dynamic routing is carried out automatically: routers exchange topology updates, and routing tables are revised whenever the topology changes. In this study, we examine and analyze the RIP, EIGRP, and OSPF protocols along with the associated literature, and we provide a practical analysis report by designing and implementing numerous LAN topology scenarios in the emulator GNS3 (Graphical Network Simulator-3). Because large commercial networks have proliferated and their designs use a variety of routing protocols, network routers must implement route redistribution for such a network to remain connected. This research carries out three phases on the same network topology and assesses the performance of route redistribution across three pairings of routing technologies: RIP with EIGRP in the first phase, EIGRP with OSPF in the second, and RIP with OSPF in the third. We also analyze the compatibility of the two versions of the Routing Information Protocol (RIPv1 and RIPv2) on the designed topology to assess how the two versions interact. Since, as we know, Version 1 and Version 2 do not interoperate directly, this suggests a way out when the same problem arises with EIGRP, OSPF, or BGP. Finally, we design and configure the LAN architecture in GNS3 to evaluate how RIP supports only subnetted networks while EIGRP supports major networks.
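As a hedged illustration of the table-update behavior described above (a generic distance-vector sketch in the style of RIP, not the paper's own configuration; the prefixes, neighbor names, and hop-count metric are illustrative assumptions):

```python
# Minimal RIP-style distance-vector update (illustrative sketch).
# A router merges a neighbor's advertised routes into its own table,
# keeping a route only if reaching it via that neighbor is cheaper.

INFINITY = 16  # RIP treats 16 hops as unreachable

def merge_advertisement(table, neighbor, neighbor_table, link_cost=1):
    """Update `table` ({dest: (cost, next_hop)}) from a neighbor's table."""
    for dest, (adv_cost, _) in neighbor_table.items():
        new_cost = min(adv_cost + link_cost, INFINITY)
        cur_cost, _ = table.get(dest, (INFINITY, None))
        if new_cost < cur_cost:
            table[dest] = (new_cost, neighbor)
    return table

# Example: router A learns two routes from its neighbor B.
a = {"10.0.0.0/24": (0, None)}
b = {"10.0.1.0/24": (0, None), "10.0.2.0/24": (1, "C")}
merge_advertisement(a, "B", b)
```

Note that this captures only the table update; real RIP adds periodic advertisements, timeouts, and split horizon on top of this rule.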
Trust in online environments is based on beliefs in the trustworthiness of a trustee, which comprises three distinct dimensions: integrity, ability, and benevolence. Zimbabwe has slowly adopted the Internet of Things (IoT) for smart agriculture as a way of improving food security in the country, though most farmers remain hesitant, citing trust issues, since the monitoring of crops, animals, and farm equipment would be done online by connecting several devices and accessing data. Farmers find it difficult to trust that the technology has the ability to perform as expected in a specific situation or to complete a required task, i.e., that it will work consistently and reliably in monitoring the environment, nutrients, temperatures, and equipment status, and that the collected data will retain its integrity when used for decision making. There is a growing need to determine how trust in the technology influences the adoption of IoT for smart agriculture in Zimbabwe. A mixed methodology was used to gather data from 50 A2-model farmers randomly sampled in Zimbabwe. The findings revealed that the McKnight et al. trust-in-technology model can be used to influence the adoption of IoT through trust that the technology will be reliable and will operate as expected. Additional constructs, such as security and distrust of technology, can serve as a reference for future research.
In a simultaneous multithreaded (SMT) system, a core's pipeline resources are sometimes partitioned and otherwise shared among numerous active threads. One shared resource is the write buffer, which acts as an intermediary between a store instruction's retirement from the pipeline and the store value being written to cache. The write buffer takes a completed store instruction from the load/store queue and eventually writes the value to the level-one data cache. Under a write-allocate cache policy, a buffered store must remain in the write buffer until its cache block is present in the level-one data cache. This latency may vary from as little as a single clock cycle (for a level-one cache hit) to several hundred clock cycles (for a cache miss). This paper shows that cache misses routinely dominate the write buffer's resources and deny cache hits from being written to memory, thereby degrading the performance of simultaneous multithreaded systems. The paper proposes a technique that reduces this denial of resources to cache hits by limiting the number of cache misses that may concurrently reside in the write buffer, and shows that system performance can be improved by using this technique.
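The miss-limiting idea can be sketched as a toy admission policy (a hypothetical model for intuition, not the paper's simulator or parameters): stores that miss in the cache are admitted to the write buffer only while the count of resident misses stays under a cap, so entries remain available for hits.

```python
# Toy write-buffer admission policy: cap the number of cache misses
# that may occupy the buffer at once, so cache hits are not starved.

class WriteBuffer:
    def __init__(self, capacity, miss_cap):
        self.capacity = capacity      # total buffer entries
        self.miss_cap = miss_cap      # max entries held by misses
        self.entries = []             # list of ("hit" | "miss", addr)

    def try_insert(self, addr, is_hit):
        if len(self.entries) >= self.capacity:
            return False              # buffer full: stall the store
        misses = sum(1 for kind, _ in self.entries if kind == "miss")
        if not is_hit and misses >= self.miss_cap:
            return False              # miss cap reached: stall this miss
        self.entries.append(("hit" if is_hit else "miss", addr))
        return True

wb = WriteBuffer(capacity=4, miss_cap=2)
assert wb.try_insert(0x100, is_hit=False)
assert wb.try_insert(0x140, is_hit=False)
assert not wb.try_insert(0x180, is_hit=False)  # third miss is refused
assert wb.try_insert(0x1c0, is_hit=True)       # hits are still admitted
```

In a real design the cap would be tuned against miss latency and buffer depth; the point here is only that bounding resident misses preserves slots for short-latency hits.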
In the recent past, big data opportunities have gained much momentum for enhancing knowledge management in organizations. However, because of its properties of high volume, variety, and velocity, big data can no longer be effectively stored and analyzed with traditional data management techniques to generate value for knowledge development. Hence, new technologies and architectures are required to store and analyze this big data through advanced data analytics, and in turn to generate vital real-time knowledge for effective organizational decision making. More specifically, it is necessary to have a single infrastructure that provides the common functionality of knowledge management and is flexible enough to handle different types of big data and big data analysis tasks. Cloud computing infrastructures capable of storing and processing large volumes of data can be used for efficient big data processing, because they minimize the initial cost of the large-scale computing infrastructure demanded by big data analytics. This paper explores the impact of big data analytics on knowledge management and proposes a cloud-based conceptual framework that can analyze big data in real time to facilitate enhanced decision making for competitive advantage. This framework will pave the way for organizations to explore the relationship between big data analytics and knowledge management, which are mostly deemed two distinct entities.
MapReduce is a distributed computing model for processing massive data in cloud computing; it simplifies the writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing those tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism: it evaluates the trustworthiness of resource nodes in the cloud environment and builds a trustworthiness model. Using the simulation platform CloudSim, the stability of the task scheduling algorithm and the scheduling model are verified.
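One common way to realize such a trustworthiness model (a hedged sketch only; the paper's actual model and update rule are not given here) is to keep a per-node trust score as an exponential moving average of task outcomes and schedule the next task on the most trusted node:

```python
# Sketch of trust-based task scheduling (illustrative, not the paper's
# algorithm): each node's trust is an exponential moving average of its
# task outcomes; new tasks go to the currently most trusted node.

def update_trust(trust, success, alpha=0.3):
    """Blend the latest outcome (1.0 success, 0.0 failure) into trust."""
    return (1 - alpha) * trust + alpha * (1.0 if success else 0.0)

def pick_node(trust_scores):
    """Schedule the next task on the most trusted node."""
    return max(trust_scores, key=trust_scores.get)

nodes = {"n1": 0.9, "n2": 0.5, "n3": 0.7}
assert pick_node(nodes) == "n1"
nodes["n1"] = update_trust(nodes["n1"], success=False)  # n1 fails a task
```

After the failure, n1's score drops to 0.63 and subsequent tasks would be routed to n3; the smoothing factor alpha controls how quickly trust reacts to failures.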
Programming requests with GreatFree is an efficient programming technique to implement distributed polling in the cloud computing environment. GreatFree is a distributed programming environment through which diverse distributed systems can be established by programming rather than by configuring or scripting. GreatFree emphasizes the importance of programming, since it offers developers the opportunity to leverage their distributed knowledge and programming skills; moreover, programming is the only way to construct creative, adaptive and flexible systems that accommodate various distributed computing environments. With the support of GreatFree's code-level Distributed Infrastructure Patterns, Distributed Operation Patterns and APIs, this difficult procedure is accomplished in a programmable, rapid and highly patterned manner, i.e., the programming behaviors are simplified to the repeatable operation of Copy-Paste-Replace. Since distributed polling is one of the fundamental techniques for constructing distributed systems, GreatFree provides developers with the relevant APIs and patterns to program requests and responses in this novel programming environment.
Big Data has introduced the challenge of storing and processing large volumes of data (text, images, and videos). Centralised exploitation of massive data on a single node is outdated, leading to the emergence of distributed storage, parallel processing, and hybrid distributed storage and parallel processing frameworks. The main objective of this paper is to evaluate the load balancing and task allocation strategy of our hybrid distributed storage and parallel processing framework, CLOAK-Reduce. To achieve this goal, we first took a theoretical approach to the architecture and operation of some DHT-based MapReduce frameworks. Then, we compared by simulation the data collected on their load balancing and task allocation strategies. Finally, the simulation results show that CLOAK-Reduce C5R5 replication provides better load-balancing efficiency and MapReduce job submission, with 10% churn or no churn.
Load-balancing techniques have become a critical function in cloud storage systems, which consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm, as well as its performance, deteriorates as the number of nodes in the system, the diameter of the network, and the communication overhead increase. Therefore, this paper presents an approach that aims to scale the system out, not up; in other words, it allows the system to be expanded by adding more nodes, without increasing the power of each node, while at the same time increasing the overall performance of the system. Our proposal also aims to improve performance not only by considering the parameters that affect the algorithm's performance but also by simplifying the structure of the network that executes the algorithm. The proposal was evaluated through mathematical analysis as well as computer simulations, and it was compared with the centralized approach and the original diffusion technique. Results show that our solution outperforms both in terms of throughput and response time. Finally, we proved that our proposal converges to a state of equilibrium in which the loads of all in-domain nodes are balanced, since each node receives an amount of load proportional to its capacity. We therefore conclude that this approach has the advantage of being fair and simple, with no node privileged.
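The equilibrium property claimed above can be illustrated with a minimal capacity-proportional redistribution (a sketch of the target state only; the paper's actual diffusion algorithm reaches it iteratively and is not reproduced here):

```python
# Sketch of capacity-proportional load balancing: at equilibrium each
# node holds load proportional to its capacity, so the load-to-capacity
# ratio (relative utilization) is equal across heterogeneous nodes.

def balance(loads, capacities):
    """Redistribute total load so node i gets total * cap_i / sum(caps)."""
    total = sum(loads)
    cap_sum = sum(capacities)
    return [total * c / cap_sum for c in capacities]

loads = [90, 10, 20]          # imbalanced initial loads
caps = [1, 2, 3]              # heterogeneous node capacities
balanced = balance(loads, caps)
# Every node ends with the same load-to-capacity ratio.
ratios = [l / c for l, c in zip(balanced, caps)]
```

Here the total load of 120 is split 20/40/60, giving each node a utilization ratio of 20; this is the "fair, no node privileged" state the diffusion process converges to.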
The Internet of Things (IoT) is a new paradigm for the development of ubiquitous computing, enabling billions of heterogeneous devices to connect with each other and with the cloud. IoT is used in many different applications, from smart homes to big data analytics. However, the IoT infrastructure is vulnerable to security threats such as data breaches, man-in-the-middle attacks, and malicious actors. Furthermore, energy consumption is an important factor in IoT networks, as many IoT devices are battery-powered and need to conserve energy. Machine learning techniques are increasingly being used to enhance the security and energy efficiency of IoT networks. Machine learning algorithms can detect malicious activities in the network and protect against them; they can also identify anomalies in the network, which can reveal potential security threats. Furthermore, machine learning techniques can optimize energy consumption in the network by predicting energy demands and adjusting the network accordingly.
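Anomaly detection of the kind mentioned above can be as simple as flagging sensor readings that sit far from the historical mean (a generic z-score sketch, not a method from the text; the sensor values and threshold are illustrative assumptions):

```python
# Illustrative z-score anomaly detector for IoT sensor readings:
# readings far from the historical mean are flagged as potential
# security or fault events.

from statistics import mean, stdev

def anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` stddevs from mean.

    A threshold of 2.0 is an illustrative choice suited to small samples.
    """
    mu, sigma = mean(readings), stdev(readings)
    return [i for i, r in enumerate(readings)
            if sigma > 0 and abs(r - mu) / sigma > threshold]

# Temperature readings from a hypothetical sensor; one spike stands out.
temps = [21.0, 21.4, 20.9, 21.2, 21.1, 35.0, 21.3]
flagged = anomalies(temps)
```

Real IoT deployments would use streaming statistics or learned models instead of a fixed global mean, but the flag-what-deviates principle is the same.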
Knowledge graphs are an innovative paradigm for encoding, accessing, combining, and interpreting data from heterogeneous and multimodal sources, and are more than simply a combination of technologies. In a short time, knowledge graphs have been identified as an important component of modern search engines, intelligent assistants, and corporate intelligence. Geospatial data science, cognitive neuroscience, and machine learning come together in geospatial knowledge graphs (GeoKGs), which symbolically represent spatiality, attributes, and interactions. These knowledge graphs can be used for many geospatial applications, including geographic information retrieval, geospatial interoperability, and knowledge discovery in geographic information systems. Nevertheless, geospatial knowledge graphs rarely reach their full potential in geospatial and downstream applications, since most conventional data warehouses and system elements in KGs fail to account for the specialty of geographical information. A geospatial graph's linear relationship between measured and true values cannot by itself represent the bias component when measuring the linearity of GeoKGs. A measurement technique is linear when the relationship between the measured and true values is a linear function of the data, which can be verified analytically; linearity matters because it allows data to be linearly extrapolated across points. When the measurement system is linear, a linear fit of the geographic knowledge graph characterizes the relationship; otherwise the connection is represented by a polynomial approximation. A linear polynomial approximation is therefore compared with a linear data fit to evaluate linearity.
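The linearity check described above can be made concrete with ordinary least squares (generic numerics for illustration, not the paper's method; the data points are invented): fit a line to the measured-versus-true values and inspect the residuals. A full comparison would additionally fit a polynomial and compare the two residual sums.

```python
# Illustrative linearity check: fit y = a*x + b by ordinary least
# squares and measure how well the line explains the measurements.

def fit_line(xs, ys):
    """Ordinary least-squares coefficients (a, b) for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def sse(xs, ys, f):
    """Sum of squared residuals of model f over the data."""
    return sum((y - f(x)) ** 2 for x, y in zip(xs, ys))

xs = [0, 1, 2, 3, 4]             # true values
ys = [0.1, 2.0, 4.1, 5.9, 8.0]   # nearly linear measurements
a, b = fit_line(xs, ys)
linear_sse = sse(xs, ys, lambda x: a * x + b)
```

A small residual sum relative to the data's spread indicates the measurement relationship can be treated as linear; a large one would motivate the polynomial approximation.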
2nd International Conference on Computer Science, Information Technology & AI (CSITAI 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science, Information Technology and Artificial Intelligence. The Conference looks for significant contributions to all major fields of Computer Science, Information Technology and AI in theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
16th International Conference on Web services & Semantic Technology (WeST 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of web & semantic technology. The growth of the World-Wide Web today is simply phenomenal. It continues to grow rapidly, and new technologies and applications are being developed to support end users' modern life. Semantic Technologies are designed to extend the capabilities of information on the Web and enterprise databases to be networked in meaningful ways.
10th International Conference of Networks, Communications, Wireless and Mobile Computing (NCWC 2024) looks for significant contributions to computer networks, communications, and wireless and mobile computing for wired and wireless networks in theoretical and practical aspects. Original papers are invited on computer networks, network protocols and wireless networks, data communication technologies, network security and mobile computing. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establish new collaborations in these areas.
4th International Conference on Computing and Information Technology Trends (CCITT 2025) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of computing and information technology trends. The Conference looks for significant contributions to all major fields of Computer Science, Computer Engineering and Information Technology, in theoretical and practical aspects.
5th International Conference on Networks & IOT (NeTIOT 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of computer networks and IoT. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
11th International Conference on Computer Networks & Data Communications (CNDC 2024) will provide an excellent international forum for sharing knowledge and results in theory and applications of computer networks and data communications. The Conference looks for significant contributions to computer networks and communications for wired and wireless networks in theoretical and practical aspects. Original papers are invited on networks, wireless and mobile computing, network protocols, wireless networks and security.
8th International Conference on Networks & Communications (NETWORKS 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Networks and Communications. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
5th International Conference on IoT, Blockchain & Cloud Computing (IBCOM 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of IoT, Blockchain and Cloud Computing.
16th International Conference on Wireless, Mobile Network & Applications (WiMoA 2024) is dedicated to addressing the challenges in the areas of wireless and mobile networks and their applications. The Conference looks for significant contributions to wireless and mobile computing in theoretical and practical aspects. The wireless and mobile computing domain emerges from the integration of personal computing, networks, communication technologies, cellular technology and Internet technology. Modern applications are emerging in the areas of mobile ad hoc networks and sensor networks. This Conference is intended to cover contributions in both design and analysis in the context of mobile, wireless, ad-hoc and sensor networks. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced wireless and mobile computing concepts and establish new collaborations in these areas.
13th International Conference of Networks and Communications (NECO 2024) looks for significant contributions to computer networks and communications for wired and wireless networks in theoretical and practical aspects. Original papers are invited on computer networks, network protocols and wireless networks, data communication technologies, and network security. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establish new collaborations in these areas.
15th International Conference on Ubiquitous Computing (UBIC 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of ubiquitous computing. The current information age is witnessing a dramatic use of digital and electronic devices in the workplace and beyond. Ubiquitous computing presents rather arduous requirements of robustness, reliability and availability to the end user, and has received significant and sustained research interest in terms of designing and deploying large-scale and high-performance computational applications in real life. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
16th International Conference on Grid Computing (GridCom 2024): Service-oriented computing is a popular design methodology for large-scale business computing systems. Grid computing enables the sharing of distributed computing and data resources, such as processing, networking and storage capacity, to create a cohesive resource environment for executing distributed applications in service-oriented computing. Grid computing also represents a more business-oriented orchestration of fairly homogeneous and powerful distributed computing resources to optimize the execution of time-consuming processes. Grid computing has received significant and sustained research interest in terms of designing and deploying large-scale, high-performance computational applications in e-Science and business. The objective of the meeting is to serve both as the premier venue for presenting foremost research results in the area and as a forum for introducing and exploring new concepts.
16th International Conference on Wireless & Mobile Network (WiMo 2024) is dedicated to addressing the challenges in the areas of wireless and mobile networks. The Conference looks for significant contributions to wireless and mobile computing in theoretical and practical aspects. The wireless and mobile computing domain emerges from the integration of personal computing, networks, communication technologies, cellular technology and Internet technology. Modern applications are emerging in the areas of mobile ad hoc networks and sensor networks. This Conference is intended to cover contributions in both design and analysis in the context of mobile, wireless, ad-hoc and sensor networks. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced wireless and mobile computing concepts and establish new collaborations in these areas.
Papers by the International Journal of Distributed and Parallel Systems (IJDPS)
precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit probing, resulting in limited scalability. To address the first issue, we propose a novel Delay Correlation Estimation methodology, named DCE, which needs no synchronization or special cooperation. For the second issue, we develop a passive realization mechanism that merely uses regular data flow, without explicit bandwidth consumption. Extensive simulations in OMNeT++ are conducted to evaluate its accuracy, and we show that the DCE measurements closely match the true values. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic and packet sizes.
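The delay-correlation idea behind such tomography can be illustrated generically (this is not the DCE algorithm; the delay series and topology are invented): paths that traverse a shared congested link exhibit correlated end-to-end delays, which a Pearson correlation over passively observed per-packet delays can expose.

```python
# Generic illustration of delay correlation for network tomography:
# two paths sharing a congested link show correlated end-to-end delays.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length delay series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

shared = [10, 30, 20, 40, 25]                       # shared-link delay (ms)
path_a = [s + e for s, e in zip(shared, [1, 2, 1, 3, 2])]  # + private hops
path_b = [s + e for s, e in zip(shared, [2, 1, 3, 1, 2])]
r = pearson(path_a, path_b)   # high: the shared link dominates both delays
```

Disjoint paths would yield a correlation near zero, so comparing such coefficients hints at shared internal links without any active probing.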
writing of distributed parallel programs. Under the fault-tolerance mechanism of the MapReduce programming model, tasks may be allocated to nodes with low reliability, causing tasks to be re-executed and wasting time and resources. This paper proposes a reliability-aware task scheduling strategy with a failure recovery mechanism, evaluates the trustworthiness of resource nodes in the cloud environment, and builds a trustworthiness model. Using the CloudSim simulation platform, this paper verifies the stability of the task scheduling algorithm and the scheduling model.
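The abstract does not spell out the trustworthiness model, so the following Python sketch illustrates one common approach under stated assumptions: per-node trust is a smoothed success ratio, each task goes to the currently most trustworthy node, and trust is updated after each execution. The `Node` class and `schedule` function are invented for illustration, not the paper's API.

```python
# Minimal sketch of reliability-aware task scheduling (illustrative,
# not the paper's actual model): each node's trust is a smoothed
# success ratio, and tasks go to the most trustworthy node.

class Node:
    def __init__(self, name, trust=0.5):
        self.name = name
        self.trust = trust  # prior trust in [0, 1]

    def update(self, succeeded, alpha=0.2):
        # Exponential moving average of observed task outcomes.
        self.trust = (1 - alpha) * self.trust + alpha * (1.0 if succeeded else 0.0)

def schedule(task, nodes):
    # Allocate the task to the currently most trustworthy node.
    return max(nodes, key=lambda n: n.trust)

nodes = [Node("n1"), Node("n2"), Node("n3")]
nodes[1].update(True); nodes[1].update(True)   # n2 succeeded twice
nodes[2].update(False)                         # n3 failed once
best = schedule("job-42", nodes)
print(best.name)  # n2: highest smoothed success ratio
```

A failure recovery mechanism would then re-run a failed task through `schedule` again, with the failing node's lowered trust steering the retry elsewhere.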
polling in the cloud computing environment. GreatFree is a distributed programming environment through
which diverse distributed systems can be established through programming rather than configuring or
scripting. GreatFree emphasizes the importance of programming since it offers developers the opportunity to leverage their knowledge of distributed systems and their programming skills. Additionally, programming is the only way to construct creative, adaptive and flexible systems that accommodate various distributed computing environments. With the support of GreatFree's code-level Distributed Infrastructure Patterns, Distributed Operation Patterns and APIs, the difficult procedure is accomplished in a programmable, rapid and highly patterned manner, i.e., programming behavior is simplified to the repeatable operation of Copy-Paste-Replace. Since distributed polling is one of the fundamental techniques to
construct distributed systems, GreatFree provides developers with relevant APIs and patterns to program
requests/responses in the novel programming environment.
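GreatFree's concrete patterns and APIs are not shown in this abstract, so the sketch below illustrates the underlying request/response polling idea in generic Python (all names are invented; this is not GreatFree code): a client loop repeatedly sends a poll request and consumes the server's reply.

```python
# Generic sketch of distributed polling (illustrative; not GreatFree's
# actual API): a client periodically sends a request and handles the
# server's response. Here the "network" is a local socket pair.

import socket
import threading

def server(sock):
    # Reply to each poll request with the current state.
    state = 0
    while True:
        data = sock.recv(64)
        if data == b"STOP":
            break
        state += 1
        sock.sendall(str(state).encode())

a, b = socket.socketpair()
t = threading.Thread(target=server, args=(b,), daemon=True)
t.start()

responses = []
for _ in range(3):          # the polling loop: request, then response
    a.sendall(b"POLL")
    responses.append(int(a.recv(64)))
a.sendall(b"STOP")
t.join()
print(responses)  # [1, 2, 3]
```

Because each poll waits for its response before the next request is sent, the stream socket never interleaves replies.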
videos). Centralised exploitation of massive data on a single node is no longer viable, leading to the emergence of distributed storage, parallel processing, and hybrid distributed storage and parallel processing frameworks.
The main objective of this paper is to evaluate the load-balancing and task-allocation strategy of our hybrid distributed storage and parallel processing framework, CLOAK-Reduce. To achieve this goal, we first took a theoretical look at the architecture and operation of some DHT-based MapReduce frameworks. Then, we compared, by simulation, the data collected on their load-balancing and task-allocation strategies. Finally, the simulation results show that CLOAK-Reduce C5R5 replication provides better load-balancing efficiency for MapReduce job submission, with 10% churn or no churn.
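The abstract does not describe CLOAK-Reduce's internals. Purely as an illustration of DHT-style job placement with replication (assuming, for this sketch only, that the "R" in C5R5 denotes a replication factor), the following hashes a job onto a ring and assigns it to several successor nodes, so the job survives the churn of any single holder. All names are invented.

```python
# Illustrative sketch (not CLOAK-Reduce itself) of DHT-style job
# placement with replication: each job is hashed onto a ring and
# assigned to R successive nodes, so a churned node's work survives.

import hashlib

def ring_hash(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % 2**16

def place(job, nodes, replicas=3):
    # Sort nodes by ring position and take R successors of the job's hash.
    ring = sorted(nodes, key=ring_hash)
    positions = [ring_hash(n) for n in ring]
    h = ring_hash(job)
    start = next((i for i, p in enumerate(positions) if p >= h), 0)
    return [ring[(start + i) % len(ring)] for i in range(replicas)]

nodes = [f"node{i}" for i in range(8)]
owners = place("map-task-17", nodes, replicas=3)
print(len(owners), len(set(owners)))  # 3 3: three distinct replica holders
```

If one holder leaves under churn, the job's data is still reachable at the remaining successors, which is the property replication buys at the cost of extra storage.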
complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm, as well as its performance, deteriorates as the number of nodes in the system, the diameter of the network and the communication overhead increase. Therefore, this paper presents an approach that aims at scaling the system out, not up; in other words, it allows the system to be expanded by adding more nodes, without the need to increase the power of each node, while at the same time increasing the overall performance of the system. Our proposal also aims at improving performance not only by considering the parameters that affect the algorithm's performance but also by simplifying the structure of the network that executes the algorithm. Our proposal was evaluated through mathematical analysis as well as computer simulations, and it was compared with the centralized approach and the original diffusion technique. Results show that our solution outperforms them in terms of throughput and response time. Finally, we proved that our proposal converges to a state of equilibrium in which the loads on all in-domain nodes are balanced, since each node receives an amount of load proportional to its capacity. Therefore, we conclude that this approach has the advantage of being fair and simple, with no node privileged.
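As a concrete illustration of the claimed equilibrium (a generic capacity-aware diffusion sketch, not the paper's algorithm), neighbours on a ring can repeatedly exchange load driven by differences in load-per-capacity; at convergence every node holds load proportional to its capacity. All parameters here are invented.

```python
# Sketch of capacity-aware diffusion load balancing (illustrative):
# ring neighbours repeatedly exchange load until every node holds
# load proportional to its capacity (equal load-per-capacity).

def diffuse(load, cap, steps=200, alpha=0.2):
    n = len(load)
    for _ in range(steps):
        new = load[:]
        for i in range(n):
            for j in ((i - 1) % n, (i + 1) % n):
                # Flow is driven by the difference in normalised load.
                flow = alpha * (load[i] / cap[i] - load[j] / cap[j]) * min(cap[i], cap[j])
                new[i] -= flow
                new[j] += flow
        load = new
    return load

cap = [1.0, 2.0, 1.0, 4.0]
load = diffuse([80.0, 0.0, 0.0, 0.0], cap)
# Every node converges to load/capacity = 10.0 (total load 80 / total capacity 8).
print([round(x / c, 2) for x, c in zip(load, cap)])
```

Total load is conserved by construction (each flow is subtracted from one node and added to its neighbour), so the fixed point is determined by the capacities alone.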
precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit probing resources, resulting in limited scalability. To address the first issue, we propose a novel Delay Correlation Estimation methodology named DCE, with no need for synchronization or special cooperation. For the second issue, we develop a passive realization mechanism that merely uses regular data flow, without explicit bandwidth consumption. Extensive simulations in OMNeT++ are conducted to evaluate its accuracy, and we show that the DCE measurement closely matches the true value. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness against different background traffic and packet sizes.
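The DCE methodology itself is not reproduced here; the sketch below illustrates the underlying principle with invented numbers: flows that share an internal link inherit its queueing delay, so their end-to-end delay samples correlate strongly, while flows on disjoint paths do not.

```python
# Illustrative sketch of delay-correlation tomography (not the paper's
# DCE algorithm): receivers behind a shared link see correlated
# end-to-end delays, so a high correlation coefficient suggests a
# shared internal path segment.

import random
import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(1)
shared = [random.gauss(20, 5) for _ in range(500)]     # shared-link queueing delay
d1 = [s + random.gauss(5, 1) for s in shared]          # receiver 1: shared + private
d2 = [s + random.gauss(8, 1) for s in shared]          # receiver 2: shared + private
d3 = [random.gauss(25, 5) for _ in range(500)]         # receiver 3: independent path

print(round(pearson(d1, d2), 2))  # high: shared bottleneck
print(round(pearson(d1, d3), 2))  # near zero: disjoint paths
```

Crucially, only delay *differences* within each receiver's own sample stream matter for the correlation, which is why such an approach does not require clock synchronization between receivers.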
outlines the rules that routers use to share information between a source and a destination. Routing protocols do not themselves move data from the source to the destination; instead, they update the routing tables that hold this information. Many routing protocols are available today, and although they all serve the same goal, they fall into two classes: static and dynamic. Dynamic routing is carried out automatically: routers receive topology-based updates, and routing tables are updated whenever the topology changes. As part of this research study, we look at and analyze the protocols RIP, EIGRP, and OSPF along with other associated research. We provide a practical analysis report by designing and implementing numerous LAN topology scenarios using the GNS-3 emulator (Graphical Network Simulator-3). Because of the proliferation of enormous commercial networks whose designs use a variety of routing protocols, network routers are required to implement route redistribution so that a large network can remain connected. This research develops three phases on the same designed network topology and assesses the performance of route redistribution across three pairs of routing technologies: RIP and EIGRP in the first phase, EIGRP and OSPF in the second, and RIP and OSPF in the third. This research also analyses the compatibility of the two separate versions of the Routing Information Protocol on the designed network topology in order to assess how the two versions may interact with one another. Since, as we know, Version 1 and Version 2 do not interact with one another, this gives us a notion of the way out when the same problem emerges with EIGRP, OSPF, or BGP. In this research, we also design and set up the LAN architecture of the network using GNS-3 in order to evaluate how RIP supports only subnetted networks while EIGRP supports major networks.
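As a minimal illustration of how dynamic routing updates a table (a Bellman-Ford-style distance-vector step of the kind RIP performs; the prefixes and metrics are invented), a router adopts a neighbour's advertised route only when the advertised metric plus the link cost beats its current entry.

```python
# Minimal sketch of the distance-vector update at the heart of RIP-style
# dynamic routing (illustrative): a router adopts a neighbour's route when
# the advertised metric plus the link cost beats its current entry.

INFINITY = 16  # RIP treats metric 16 as unreachable

def dv_update(table, neighbour, neighbour_table, link_cost=1):
    # table maps destination prefix -> (metric, next_hop)
    changed = False
    for dest, (metric, _) in neighbour_table.items():
        new_metric = min(metric + link_cost, INFINITY)
        if dest not in table or new_metric < table[dest][0]:
            table[dest] = (new_metric, neighbour)
            changed = True
    return changed

r1 = {"10.0.0.0/8": (1, "direct")}
r2_advert = {"192.168.1.0/24": (1, "direct"), "10.0.0.0/8": (3, "R3")}
dv_update(r1, "R2", r2_advert)
print(r1)  # learns 192.168.1.0/24 via R2; keeps its better 10.0.0.0/8 route
```

Route redistribution between protocols adds a complication this sketch omits: metrics from one protocol (say, EIGRP's composite metric) must be mapped onto another's (RIP's hop count) before such a comparison is meaningful.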
three distinct dimensions: integrity, ability, and benevolence. Zimbabwe has slowly adopted the Internet of Things for smart agriculture as a way of improving food security in the country, though most farmers are hesitant, citing trust issues, since the monitoring of crops, animals and farm equipment would be done online by connecting several devices and accessing data. Farmers face difficulties in trusting that the said technology has the ability to perform as expected in a specific situation or to complete a required task, i.e. that the technology will work consistently and reliably in monitoring the environment, nutrients, temperatures and equipment status, and in trusting the integrity of the collected data that will be used for decision making. There is a growing need to determine how trust in the technology influences the adoption of IoT for smart agriculture in Zimbabwe. A mixed methodology was used to gather data from 50 A2 model farmers randomly sampled in Zimbabwe. The findings revealed that the McKnight et al. trust-in-technology model can be used to influence the adoption of IoT through trust that the technology will be reliable and will operate as expected. Additional constructs such as security and distrust of technology can be used as references for future research.
otherwise shared amongst numerous active threads. One mutual resource is the write buffer, which acts as
an intermediary between a store instruction’s retirement from the pipeline and the store value being written
to cache. The write buffer takes a completed store instruction from the load/store queue and eventually
writes the value to the level-one data cache. Once a store is buffered with a write-allocate cache policy, the
store must remain in the write buffer until its cache block is in level-one data cache. This latency may vary
from as little as a single clock cycle (in the case of a level-one cache hit) to several hundred clock cycles
(in the case of a cache miss). This paper shows that cache misses routinely dominate the write buffer’s resources and prevent cache hits from being written to memory, thereby degrading the performance of simultaneous multithreaded systems. This paper proposes a technique to reduce this denial of resources to
cache hits by limiting the number of cache misses that may concurrently reside in the write buffer and
shows that system performance can be improved by using this technique.
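The paper's exact mechanism is not detailed in this abstract; the following sketch shows one way such a miss cap could work (the structure is assumed, not taken from the paper): a store that missed the cache is admitted to the write buffer only while the number of buffered misses is below a fixed cap, leaving entries free for quickly retiring hits.

```python
# Illustrative sketch of a write-buffer miss cap (assumed details): a
# store that misses the cache may enter the write buffer only while the
# number of buffered misses is below a cap, so long-latency misses
# cannot occupy every entry and block quickly-retiring cache hits.

class WriteBuffer:
    def __init__(self, size, miss_cap):
        self.size = size          # total write-buffer entries
        self.miss_cap = miss_cap  # max concurrent cache misses allowed
        self.entries = []         # pending stores: "hit" or "miss"

    def try_insert(self, kind):
        if len(self.entries) >= self.size:
            return False          # buffer full
        if kind == "miss" and self.entries.count("miss") >= self.miss_cap:
            return False          # miss cap reached: stall the miss
        self.entries.append(kind)
        return True

wb = WriteBuffer(size=4, miss_cap=2)
results = [wb.try_insert(k) for k in ["miss", "miss", "miss", "hit", "hit"]]
print(results)  # [True, True, False, True, True]
```

Without the cap, the third miss would take the third entry and the two hits would compete for a single slot; with it, only the long-latency miss stalls.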
and applications of Computer Science, Information Technology and Artificial Intelligence. The Conference looks for significant contributions to all major fields of Computer Science, Information Technology and AI in theoretical and practical aspects. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
applications are being developed to support end users’ modern life. Semantic Technologies are designed to extend the capabilities of information on the Web and enterprise databases to be networked in meaningful ways.
protocols and wireless networks, Data communication Technologies, network security and mobile computing. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Computer Engineering, Information Technology and Trends in theoretical and practical aspects.
researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
significant contributions to Computer Networks and Communications for wired and wireless networks in theoretical and practical aspects. Original papers are invited on networks, wireless and mobile computing, protocols and wireless networks, and security.
context of mobile, wireless, ad-hoc and sensor networks. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced wireless & Mobile
computing concepts and establishing new collaborations in these areas.
Network Protocols and Wireless Networks, Data Communication Technologies, and Network Security. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new
collaborations in these areas.
and high performance computational applications in real life. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
sharing of distributed computing and data resources such as processing, networking and storage capacity to create a cohesive resource environment for executing distributed applications in service-oriented
computing. Grid computing also represents a more business-oriented orchestration of fairly homogeneous and powerful distributed computing resources, optimizing the execution of time-consuming processes.
Grid computing has received significant and sustained research interest in terms of designing and deploying large-scale, high-performance computational systems in e-Science and business. The objective of the meeting is to serve both as the premier venue for presenting foremost research results in the area and as a forum for introducing and exploring new concepts.
addressing the challenges in the areas of wireless & mobile networks. The Conference looks for significant contributions to Wireless and Mobile computing in theoretical and practical aspects. The Wireless and Mobile computing domain emerges from the integration of personal computing, networks, communication technologies, cellular technology, and Internet technology. Modern applications are emerging in the areas of mobile ad hoc networks and sensor networks. This Conference is intended to cover contributions in both design and analysis in the context of mobile, wireless, ad-hoc, and sensor networks. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced wireless and mobile computing concepts and to establish new collaborations in these areas.