This paper provides a complete framework for thread migration using the Java Platform Debugger Architecture (JPDA). The framework neither sacrifices portability nor inserts any artificial code. A system built on it requires no additional programming effort to carry out migration. It is a powerful autonomous system for heterogeneous environments that preserves portability, since the Java Virtual Machine (JVM) is not modified.
Abstract: The most important aspect of job scheduling is the ability to create a multi-tasking environment. A single user cannot keep either the CPU or the I/O devices busy at all times. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always ...
High energy consumption in cloud data centers is a matter of great concern. Dynamic consolidation of Virtual Machines (VMs) presents a significant opportunity to save energy. A VM consolidation approach uses live migration of VMs so that some under-loaded Physical Machines (PMs) can be switched off or put into a low-power mode. At the same time, achieving the desired level of Quality of Service (QoS) between cloud providers and their users is critical, so the main challenge is to reduce the energy consumption of data centers while satisfying QoS requirements. In this paper, we present a distributed system architecture that performs dynamic VM consolidation to reduce the energy consumption of cloud data centers while maintaining the desired QoS. Since the VM consolidation problem is strictly NP-hard, we use an online metaheuristic optimization algorithm called Ant Colony System (ACS). The proposed ACS-based VM Consolidation (ACS-VMC) approach finds a near-optimal solution with respect to a specified objective function. Experimental results on real workload traces show that ACS-VMC reduces energy consumption while maintaining the required performance levels in a cloud data center, outperforming existing VM consolidation approaches in terms of energy consumption, number of VM migrations, and performance-related QoS.
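The abstract does not give the details of the ACS-VMC algorithm, so the following is only a minimal sketch of how an Ant Colony System heuristic can map VMs onto PMs; the consolidation objective (minimize the number of active hosts), the parameter values, and all function names are illustrative assumptions, not the paper's actual method.

```python
import random

def acs_place(vm_cpu, pm_cap, n_ants=10, n_iter=50, rho=0.1, beta=2.0, q0=0.9, seed=0):
    """Toy Ant Colony System for VM-to-PM placement.

    Objective: pack all VMs onto as few active PMs as possible, a common
    proxy for energy use. vm_cpu: CPU demand per VM; pm_cap: capacity per PM.
    """
    rng = random.Random(seed)
    n_vm, n_pm = len(vm_cpu), len(pm_cap)
    tau = [[1.0] * n_pm for _ in range(n_vm)]          # pheromone trails
    best, best_hosts = None, n_pm + 1
    for _ in range(n_iter):
        for _ in range(n_ants):
            load, assign, ok = [0.0] * n_pm, [], True
            for v in rng.sample(range(n_vm), n_vm):    # random VM order per ant
                cands = [p for p in range(n_pm) if load[p] + vm_cpu[v] <= pm_cap[p]]
                if not cands:
                    ok = False
                    break
                # heuristic term favors already-loaded PMs (consolidation)
                scores = [tau[v][p] * (1.0 + load[p]) ** beta for p in cands]
                if rng.random() < q0:                  # ACS exploitation step
                    p = cands[max(range(len(cands)), key=scores.__getitem__)]
                else:                                  # biased random exploration
                    r, acc = rng.random() * sum(scores), 0.0
                    for c, s in zip(cands, scores):
                        acc += s
                        if acc >= r:
                            p = c
                            break
                load[p] += vm_cpu[v]
                assign.append((v, p))
            if ok:
                hosts = sum(1 for l in load if l > 0)
                if hosts < best_hosts:
                    best_hosts, best = hosts, dict(assign)
        if best is not None:                           # global pheromone update
            for v, p in best.items():
                tau[v][p] = (1 - rho) * tau[v][p] + rho / best_hosts
    return best, best_hosts
```

In a real consolidation system the objective function would also weigh migration count and QoS-violation risk, which this single-resource sketch omits.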
We present a Forth-style virtual machine architecture designed to support constraint-based programming. We add ?CONTINUE and CHOICE commands to allow checking constraints and making tentative choices. A choice that is later found to be incompatible with a constraint provokes backtracking, which is implemented by reversible execution of the virtual machine. Keywords: Forth, Virtual Machines, Constraint-Based Programming, Reversible Computation.
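The CHOICE/?CONTINUE mechanism can be sketched with a tiny stack-machine interpreter. Note one deliberate simplification: this sketch backtracks by restoring a saved snapshot of the data stack, whereas the paper's VM undoes work by reversible execution; the word set beyond CHOICE and ?CONTINUE is invented for the example.

```python
import copy

def run(program):
    """Tiny Forth-like interpreter with CHOICE / ?CONTINUE backtracking.

    CHOICE pushes each alternative in turn; ?CONTINUE pops a flag and, when
    it is false, backtracks to the most recent choice point that still has
    untried alternatives.
    """
    stack, pc = [], 0
    choices = []                      # (choice_pc, next_alt_index, saved_stack)
    while pc < len(program):
        op = program[pc]
        if isinstance(op, tuple) and op[0] == "CHOICE":
            choices.append((pc, 1, copy.deepcopy(stack)))
            stack.append(op[1][0])    # try the first alternative
        elif op == "?CONTINUE":
            if not stack.pop():
                while choices:        # rewind to an open choice point
                    cpc, idx, saved = choices.pop()
                    alts = program[cpc][1]
                    if idx < len(alts):
                        stack = copy.deepcopy(saved)
                        stack.append(alts[idx])
                        choices.append((cpc, idx + 1, saved))
                        pc = cpc
                        break
                else:
                    return None       # constraints unsatisfiable
        elif op == "DUP":
            stack.append(stack[-1])
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "=":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        else:
            stack.append(op)          # literal
        pc += 1
    return stack
```

For example, `[("CHOICE", [1, 2, 3]), "DUP", "DUP", "+", 6, "=", "?CONTINUE"]` searches for an x with x + x = 6, failing on 1 and 2 before succeeding with 3.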
Nowadays, people are connected to the cloud, i.e., to back-end servers, for tasks such as storing important data and running demanding applications. While performing such tasks, a number of hosts within the system may encounter faults that lead to their failure. Once a failure occurs, it affects the execution of tasks running on the failed components, such as hosts or virtual machines. This establishes the need for a cloud environment that keeps hosts and virtual machines running effectively in a cloud computing framework regardless of faults or failures occurring within its components.
— Server virtualization is a game-changing technology for IT, providing efficiencies and capabilities that are simply not possible when constrained within a physical world. Even though server virtualization has continued to mature and advance, some organizations are still not taking full advantage of it. An organization with many physical servers must allocate considerable time to the transition to virtualized servers. The purpose of this article is to analyze and recommend the best procedures for optimizing the speed of the conversion process using VMware Converter. We performed tests for Linux and Windows operating systems using different hardware configurations. Concrete results and conclusions are provided for encrypted and non-encrypted data transfers, as well as for the use of multiple data streams during the virtualization process.
Rootkits are one of the primary concerns of networked communication systems, as they threaten the security and privacy of Internet users. The sophistication of malicious software (malware) used to break computer security has increased exponentially in recent years. Early detection of rootkits is therefore a top priority to prevent the unrestrained operation of malware. Most existing techniques only allow late detection, after the malware has already been hidden by a rootkit. In this paper, we put forward a dynamic framework to detect kernel rootkits and guarantee the runtime security of guest Virtual Machines (VMs). The method is transparent to the guest VM, since it does not require any guest-specific system information. The core strategy is a virtual machine monitor based resource allocation component, which can limit resource usage while providing a given performance guarantee. The results show that the proposed component can allocate resources for rootkit detection on demand.
Nowadays, Cloud Computing plays a major role in every field. More large data centers are in service and many small cloud data centers are expanding all over the world. Cloud Computing is a buzzword in the domain of HPC and offers on-demand access to resources over the Internet. The VMs (Virtual Machines) in cloud data centers may have different specifications and unstable resource usage, which causes imbalanced resource utilization across servers and leads to performance degradation. To achieve efficient VM selection, these challenges must be addressed using meta-heuristic algorithms. To process the data, the VMs are placed on PMs (Physical Machines). In the IaaS (Infrastructure as a Service) model there are multiple, dynamic input requests, so the system must create VMs without knowing the types of tasks in advance; fixed task scheduling is therefore not suitable. Scheduling performance is thus the most important research area to address, and the goal is to find a near-optimal solution in the cloud environment, which metaheuristic algorithms can provide. In this paper, we propose an Improved Particle Swarm Optimization (PSO) algorithm to reduce the makespan and improve throughput. We compared our results with the adaptive three-threshold energy-aware (ATEA) algorithm and standard PSO. The experimental results show that the proposed Improved PSO algorithm schedules and balances the load in a dynamic cloud environment better than the other approaches.
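The abstract does not specify the improvements in the proposed PSO variant, so the following is only a baseline sketch of how standard PSO can be applied to task-to-VM scheduling with makespan as the fitness: each particle is a continuous vector whose rounded entries map task i to a VM. All parameter values and names are illustrative assumptions.

```python
import random

def pso_schedule(task_len, vm_speed, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Toy PSO for cloud task scheduling; fitness is the makespan."""
    rng = random.Random(seed)
    n_t, n_vm = len(task_len), len(vm_speed)

    def makespan(pos):
        finish = [0.0] * n_vm
        for i, x in enumerate(pos):
            v = min(n_vm - 1, max(0, int(round(x))))   # decode task -> VM
            finish[v] += task_len[i] / vm_speed[v]
        return max(finish)

    pos = [[rng.uniform(0, n_vm - 1) for _ in range(n_t)] for _ in range(n_particles)]
    vel = [[0.0] * n_t for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [makespan(p) for p in pos]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for k in range(n_particles):
            for i in range(n_t):
                r1, r2 = rng.random(), rng.random()
                vel[k][i] = (w * vel[k][i]
                             + c1 * r1 * (pbest[k][i] - pos[k][i])
                             + c2 * r2 * (gbest[i] - pos[k][i]))
                pos[k][i] = min(n_vm - 1, max(0.0, pos[k][i] + vel[k][i]))
            f = makespan(pos[k])
            if f < pbest_f[k]:
                pbest[k], pbest_f[k] = pos[k][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[k][:], f
    return [min(n_vm - 1, max(0, int(round(x)))) for x in gbest], gbest_f
```

An "improved" variant would typically adapt the inertia weight `w` over iterations or re-seed stagnant particles; the paper's specific modifications are not given in the abstract.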
In today's rapidly developing technological era, the ease and reliability of using technology have improved to the point where even a layperson can operate it.
Virtual machines are no exception. A virtual machine is, in essence, an operating system running inside another operating system.
Virtual machines offer many uses and conveniences, such as efficiency, cost savings, ease of use, and value as a learning tool, among others.
However, a fundamental drawback of running a virtual machine is that it requires sufficient, stable memory capacity, at least 4 GB of RAM, so that partitioning goes smoothly and the machine does not run sluggishly.
With the pragmatic realization of computing as a utility, Cloud Computing has recently emerged as a highly successful alternative IT paradigm through on-demand resource provisioning and almost perfect reliability. Cloud providers respond to the rapidly growing customer demand for computing and storage resources by deploying large-scale data centers across the globe. The efficiency and scalability of these data centers, as well as the performance of the hosted applications, depend heavily on the allocation of physical resources (e.g., CPU, memory, storage, and network bandwidth). Very recently, network-aware Virtual Machine (VM) placement and migration has been developing into a very promising technique for optimizing compute and network resource utilization, energy consumption, and network traffic. This chapter presents the relevant background and a taxonomy that characterizes and classifies the various components of network-aware VM placement and migration techniques. An elaborate survey and comparative analysis of the state-of-the-art techniques is also put forward. Besides highlighting the various aspects and insights of the network-aware VM placement and migration strategies and algorithms recently proposed by the research community, the survey identifies the limitations of the existing techniques and discusses future research directions.
A federated database system is a software abstraction layer that makes it possible to manage a collection of component database systems as if they were a single source. The research undertaken in this work covers a review of the literature on federated database theory and the components involved. Based on this review, a prototype is analyzed and built to demonstrate the technology. Finally, the paradigm is implemented in a case study. The aim is to present a clear perspective on how a federated database can be implemented and the sets of techniques available for this purpose. The case study is implemented through a federated database system for the management and integration of diverse heterogeneous data sources dispersed across a government organization, so that, based on certain parameters, all of this information can be fully integrated while also allowing future data sources to be incorporated.
Parallel data processing in the cloud has recently emerged as one of the most demanding applications for Infrastructure-as-a-Service (IaaS) clouds. Major cloud providers have started to incorporate frameworks that use VM models for parallel data processing into their resource portfolios, making it easy for clients to access these services and deploy their programs. The growing computing demand from multiple requests on the main servers has led to excessive power consumption, which threatens the long-term sustainability of cloud-like infrastructures both in terms of energy cost and from an environmental perspective. The problem can be addressed by sharing infrastructure among high-energy-consumption resources, but dynamically switching resources to new infrastructure is by itself neither cost-efficient nor sufficiently green. The fact that a cloud consists of several virtual centers, with VMs under different administrative domains, makes the problem more difficult. To reduce energy consumption, this proposal addresses the challenge by effectively distributing compute-intensive parallel applications in the cloud, and proposes a meta-scheduling algorithm that exploits the heterogeneous nature of the cloud to achieve a green reduction in energy consumption. It further proposes a virtual file system specifically optimized for virtual machine image storage, based on a lazy transfer scheme coupled with object versioning that handles snapshots transparently in a hypervisor-independent fashion, ensuring high portability across different configurations.
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of computer networks and communications. The journal focuses on all technical and practical aspects of computer networks and data communications. Its goal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
Cloud computing means storing and accessing data and programs over the Internet instead of on your computer's hard drive; the cloud is simply a metaphor for the Internet. The elements involved in cloud computing are clients, data centers, and distributed servers. One of the main problems in cloud computing is load balancing: distributing the workload evenly among several nodes so that no single node is overloaded. The load can be of any type: CPU load, memory capacity, or network load. In this paper we present a load-balancing architecture and algorithm that further improve on the load-balancing problem by minimizing response time. We propose an enhanced version of the existing regulated load-balancing approach for cloud computing by combining randomized and greedy load-balancing algorithms. To evaluate the performance of the proposed approach, we used the Cloud Analyst simulator. Through simulation analysis, the proposed improved version of the regulated load-balancing approach shows better performance in terms of cost, response time, and data processing time.
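The abstract does not detail how randomization and greedy selection are combined, so the following is only one classic way to do it, the "power of d choices" scheme: sample d nodes at random (cheap and decentralized), then greedily assign the task to the least-loaded of the sample. The function and parameter names are illustrative, not the paper's.

```python
import random

def balance(task_load, n_nodes, d=2, seed=42):
    """Randomized-greedy dispatcher: random sample of d nodes, greedy pick.

    task_load: list of task weights; returns (node per task, final loads).
    """
    rng = random.Random(seed)
    load = [0.0] * n_nodes
    placement = []
    for t in task_load:
        cands = rng.sample(range(n_nodes), d)   # randomization step
        n = min(cands, key=lambda i: load[i])   # greedy step within the sample
        load[n] += t
        placement.append(n)
    return placement, load
```

Sampling just two nodes instead of scanning all of them already keeps the maximum load close to the average, which is why this hybrid often beats either pure strategy alone.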
Abstract: Cloud security is one of the buzzwords in cloud computing. Since virtualization is fundamental to cloud computing, it needs to be studied more deeply to avoid attacks and system failures. This research focuses on virtualization vulnerabilities. In addition, it proposes a secure model and a proper mechanism for reacting reasonably to attacks detected by an intrusion detection system. With the secured virtual machine model (SVM), virtual machines will resist attacks in cloud computing more effectively. Keywords: ...
In this paper, a new adaptive-reference approach to the direct torque control (DTC) method is proposed for brushless direct current (BLDC) motor drives. The conventional DTC method uses two main reference parameters: flux and torque. The main difference from the conventional method is that only one reference parameter (speed) is used to control the BLDC motor, while the second control parameter (flux) is obtained from the speed error through the proposed control algorithm. The DTC performance is thereby especially improved for systems that need variable speed and torque during operation, such as electric vehicles. The dynamic models of the BLDC motor and the DTC method were created in Matlab/Simulink. The proposed method was confirmed and verified by dynamic simulations under different working conditions. The simulation studies showed that the proposed method remarkably reduces speed and torque ripple compared with the conventional DTC method. Moreover, the proposed method has a very simple structure to apply to conventional DTC, and its extra computational load on the controller is almost zero.
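The key idea, deriving both control references from a single speed error, can be illustrated with a highly simplified discrete-time loop. This is not the paper's algorithm: the mechanical model, the PI gains, and especially the flux-reference law are placeholder assumptions, and the inverter/flux dynamics of real DTC are omitted entirely.

```python
def simulate(w_ref=100.0, steps=400, dt=0.001, J=0.01, B=0.1, kp=5.0, ki=40.0):
    """Sketch: a PI speed controller produces the torque reference, and a
    hypothetical adaptive rule derives the flux reference from the same
    speed error. Mechanical model: J * dw/dt = T - B * w.
    """
    w, integ = 0.0, 0.0
    for _ in range(steps):
        err = w_ref - w                    # speed error: the only reference input
        integ += err * dt
        T_ref = kp * err + ki * integ      # torque reference from speed error
        flux_ref = 0.5 + 0.01 * abs(err)   # hypothetical adaptive flux law
        w += dt * (T_ref - B * w) / J      # Euler step of the rotor dynamics
        _ = flux_ref                       # flux loop not modeled further
    return w
```

The point of the structure is that the operator supplies only a speed setpoint; the two quantities conventional DTC expects as external references are generated internally from the tracking error.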
Virtualizing a computing system's physical resources to achieve improved sharing and utilization has been well established for decades [1]. Full virtualization of all system resources, including processors, memory, and I/O devices, makes it possible to run multiple operating systems ...
Recent development of real-time technologies has focused on the Web Real-Time Communication (WebRTC) Application Programming Interface (API) standard, which aims to bring real-time technologies to the web. The main goal is to communicate with other people using video, audio, or files in the web browser without installing any additional plugins. As WebRTC develops, the need for servers as WebRTC service providers is increasing, driving a new trend: virtualization. A Virtual Machine (VM) is responsible for handling a service or application, and the services or applications it runs also need resources such as processor and memory. Efficient use of hardware resources is essential when a provider receives high demand or runs many applications or services. This led to containerization, and one of the newest container technologies is Docker. Docker provides an open platform for system administrators and developers to build, pack, and run applications anywhere as lightweight containers. This research evaluates and compares WebRTC servers running in Docker and in VMs based on system performance, specifically resource utilization. The results show that Docker utilizes resources more efficiently than a VM, i.e., Docker's performance is better.
Information Technology (IT) academic programs have always been challenged by the inadequate availability of hardware and software to support hands-on learning in undergraduate systems-oriented courses. In most cases there are too few computers in the computing laboratories for students to demonstrate what they have learned and acquire expert skills, and the cost of setting up these laboratories is another major challenge. Teaching and learning systems-oriented courses has been a concern to university authorities all over the world due to the scarcity of IT resources. Developing countries suffer most, as weak economies make it hard to provide adequate IT infrastructure for enhancing hands-on learning for enrolled IT students.
Systems-oriented courses such as networking, web design, computer forensics, programming, and operating systems require the deployment of several computers in the computing laboratories to provide hands-on experience. Virtualisation technologies have been gaining recognition because of their proven advantages and successes; for industries and individuals, one of the major advantages of adopting virtualisation has been cost reduction and optimisation of resources.
Some academic institutions in developed countries have been using virtual machines in computing laboratories to mitigate the problem of inadequate computers to support learning.
This paper aims to demonstrate the usefulness of virtual machines in supporting hands-on IT teaching and learning in undergraduate systems-oriented courses. The findings of the study would be of importance to university authorities in developing countries who are considering virtualisation as a way of mitigating the lack of IT resources.
Most existing virtual machine (VM) placement algorithms do not consider affinity, i.e., the dependency between virtual machines. In many existing virtualized systems, network bandwidth often becomes the bottleneck resource, which can degrade the performance of applications deployed in the cloud. In this paper, we present an affinity-aware VM colocation mechanism for the cloud. The proposed mechanism determines the network affinity between pairs of virtual machines and colocates pairs with higher network affinity on the same physical host. Colocation decreases network overhead in the cloud and improves the performance of communication-intensive applications deployed there. The experimental results show that the runtime of the communication-intensive applications, i.e., RUBiS and Twitter, is reduced by a few seconds. The network traffic between the virtual machines running the RUBiS application is reduced by 28%, while the traffic between those running the Twitter application is reduced by 11%.
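The colocation idea can be sketched as a greedy pass over VM pairs sorted by observed traffic: the heaviest-communicating pairs are placed on the same host while capacity allows. This is only an illustrative single-resource model, not the paper's mechanism; all names are invented, and it assumes any single pair fits on one host.

```python
def colocate(traffic, vm_size, capacity):
    """Greedy affinity-aware placement sketch.

    traffic: {(vm_a, vm_b): traffic volume}, vm_size: {vm: size},
    capacity: per-host size budget. Returns {vm: host index}.
    """
    host_of, free = {}, []                 # free[i]: remaining capacity of host i

    def find_host(extra_need):
        for i, c in enumerate(free):       # first host with enough room
            if c >= extra_need:
                return i
        free.append(capacity)              # otherwise open a new host
        return len(free) - 1

    for (a, b), _vol in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if a in host_of and b in host_of:
            continue                       # both already placed; nothing to do
        if a not in host_of and b not in host_of:
            h = find_host(vm_size[a] + vm_size[b])
            host_of[a] = host_of[b] = h
            free[h] -= vm_size[a] + vm_size[b]
        else:
            placed, other = (a, b) if a in host_of else (b, a)
            h = host_of[placed]            # try to join the partner's host
            if free[h] < vm_size[other]:   # no room next to its partner
                h = find_host(vm_size[other])
            host_of[other] = h
            free[h] -= vm_size[other]
    return host_of
```

A production mechanism would measure affinity online (e.g., from per-flow byte counts) and weigh migration cost against the traffic saved, both of which this sketch leaves out.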