2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART), 2021
Cloud is the most widely recognized computing environment because of the broad availability of its services to users. Cloud data centers have adopted virtualization of resources, technically known as virtual machines (VMs), for the effective and efficient delivery of services. However, power consumption in cloud data centers (CDCs) is highlighted as one of the major issues that reduce the quality of service of cloud computing, because the large number of electronic resources in a CDC, such as processing nodes, servers, disk storage, and networking nodes (switches/routers), consumes a high degree of energy during computation. Energy is also consumed by cooling plants, since a data center produces a large amount of heat and CO2 in its surroundings. Due to this high energy consumption, cloud providers pay a high cost for computation, and the data centers contribute additional carbon emissions to the atmosphere. Reducing energy consumption and CO2 emissions is a major challenge in turning cloud computing into green computing. The main focus of this paper is therefore to develop an energy-saving approach for cloud computing based on resource management. Because users' service requests vary, the demand for resources also varies during computation; the volume of requests is not the same across all 24 hours of a day. In some periods it is very low, and during those periods many resources sit idle yet still consume a fixed amount of energy, which is wasted. Accordingly, the proposed ESACR approach first classifies resources as active (required) or idle (not required) according to the users' service requests at any instant; it then switches the idle resources into an energy-saving mode (i.e., turns them OFF) until they are required again. The energy saving increases when, at any instant, all unused resources are placed in the OFF mode.
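The classify-then-switch-off idea described in this abstract can be sketched in a few lines. This is an illustrative reading of the approach, not the paper's implementation; the names (`esacr_step`, `capacity`, the dict-based resource records) are assumptions made here for the example.

```python
# Hypothetical sketch of the ESACR step: keep just enough resources ON to
# serve the current demand, and switch the rest into an energy-saving (OFF)
# state. Resource records and capacities are illustrative assumptions.
def esacr_step(resources, demand):
    """Partition resources into active (ON) and idle (OFF) for `demand` units."""
    active, idle = [], []
    for r in sorted(resources, key=lambda r: -r["capacity"]):
        if demand > 0:
            r["state"] = "ON"          # required: serves part of the demand
            demand -= r["capacity"]
            active.append(r)
        else:
            r["state"] = "OFF"         # idle: energy-saving mode until needed
            idle.append(r)
    return active, idle

servers = [{"id": i, "capacity": 10, "state": "ON"} for i in range(6)]
active, idle = esacr_step(servers, demand=25)
print(len(active), len(idle))  # 3 servers cover 25 units; the other 3 go OFF
```

At a low-demand instant, the same call with a smaller `demand` would move more servers into the OFF set, which is exactly where the claimed energy saving comes from.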
2019 8th International Conference System Modeling and Advancement in Research Trends (SMART), 2019
In the current scenario, the demand for high-performance computing systems increases day by day to achieve maximum computation in minimum time. The rapid growth of the Internet and Internet-based services has increased interest in network-based, on-demand computing systems such as cloud computing. High-capacity computing servers are being deployed in large quantities for cloud computing in the form of data centers, through which many different Internet services are provided to cloud users in a smooth and efficient manner. A data center can be described as a large distributed system comprising a huge number of computing servers connected by an efficient network, so the energy consumption of such data centers is enormously high. Not only is the maintenance of these data centers exorbitant, it is also socially harmful: high energy costs and immense carbon footprints arise because the servers need a substantial amount of electricity for computation as well as for cooling. As the cost of energy increases and its availability decreases, the focus should shift toward optimizing data center servers for best performance alongside policies of lower energy consumption, balancing the level of service performance with social impact. In this paper we therefore propose an energy-aware consolidation technique for cloud data centers based on the prediction of future client requests, which increases the utilization of computing servers according to the users'/clients' demand for cloud resources and thereby controls power consumption in the cloud.
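The prediction-driven consolidation described above can be illustrated with a deliberately simple forecast. The moving-average predictor and the capacity figures here are assumptions for the sketch; the paper's own prediction model may differ.

```python
import math

# Illustrative sketch, not the paper's exact method: forecast the next
# interval's request load with a moving average, then compute how many
# servers that load needs so the remainder can be consolidated.
def predict_next(history, window=3):
    """Moving-average forecast of the next interval's request count."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def servers_needed(predicted_load, per_server_capacity):
    """Smallest number of servers whose combined capacity covers the forecast."""
    return math.ceil(predicted_load / per_server_capacity)

hourly_requests = [120, 90, 60, 45, 30]   # demand tailing off at night
forecast = predict_next(hourly_requests)  # (60 + 45 + 30) / 3 = 45.0
print(servers_needed(forecast, per_server_capacity=20))  # 3 servers suffice
```

Any jobs on the remaining servers would be migrated onto the three kept active, letting the rest be powered down until the forecast rises again.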
Scheduling of jobs and resources are two essential features of the Grid computing infrastructure. To improve the global throughput of these environments, an effective and efficient task-allocation algorithm is fundamentally required. Grid computation, however, presents challenges such as heterogeneity, scalability, and adaptability. To handle these challenges we use a dynamic hierarchical model to represent the architecture of the grid computing system and manage its jobs and resources. This model defines a hierarchical structure of physical and virtual computing elements, and it supports the heterogeneity and scalability of those elements. In this paper, we propose a DRR (Dynamic Round Robin) job-scheduling algorithm for grid computing systems with preemptable (Round Robin) job allocation, using a dynamic time quantum derived from the priority of the jobs, which changes with every round of execution. The main benefit of this id...
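A minimal sketch of the DRR idea follows: round-robin execution in which the time quantum is recomputed from each job's priority on every round. The quantum formula used here is an assumption made for illustration; the paper defines its own rule for varying the quantum.

```python
from collections import deque

# Sketch of Dynamic Round Robin: preemptive RR where the slice granted to a
# job depends on its priority (1 = highest). The quantum formula below is a
# stand-in assumption, not the paper's definition.
def drr_schedule(jobs, base_quantum=4):
    """jobs: dicts with 'name', 'burst' (remaining time), 'priority'.
    Returns the sequence of (job name, time run) slices executed."""
    queue = deque(jobs)
    timeline = []
    while queue:
        job = queue.popleft()
        # dynamic quantum: higher-priority jobs receive a larger slice
        quantum = max(1, base_quantum - (job["priority"] - 1))
        run = min(quantum, job["burst"])
        job["burst"] -= run
        timeline.append((job["name"], run))
        if job["burst"] > 0:          # preempt and requeue unfinished jobs
            queue.append(job)
    return timeline

jobs = [{"name": "A", "burst": 6, "priority": 1},
        {"name": "B", "burst": 5, "priority": 2}]
print(drr_schedule(jobs))  # [('A', 4), ('B', 3), ('A', 2), ('B', 2)]
```

Because the quantum is recomputed on every dequeue, a scheme that also updates priorities between rounds would automatically change each job's slice, which is the "changes with every round" behavior the abstract describes.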
CPU utilization is an important aspect of distributed and grid computing environments. Computing nodes can become overloaded, i.e., they hold more jobs than their capacity allows, so that no further jobs can be assigned to them; in that case the load from an overloaded node can be shifted to underloaded nodes (those doing little work or sitting idle). This requires load balancing, in which the workload is redistributed among the computing nodes of the system, improving job response time and CPU utilization. Dynamic load-balancing schemes make their decisions based on the current state of the system; they do not require the previous state of the system to make load-balancing decisions. In this paper, we present an analytical comparison of various dynamic load-balancing schemes in distributed and grid computing environments. This comparison depicts which scheme is better in a distributed environment and which is better in grid ...
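The overloaded-to-underloaded migration just described can be sketched as a small rebalancing loop over current loads only, with no history, which is the defining property of the dynamic schemes being compared. The function name and the unit-job granularity are assumptions for the example.

```python
# Hedged sketch of dynamic load balancing: using only the CURRENT load of
# each node, move surplus jobs from overloaded nodes to the least-loaded
# nodes until no node exceeds capacity (or no spare room remains).
def rebalance(loads, capacity):
    """loads: number of jobs on each node. Returns the rebalanced load list."""
    loads = list(loads)                     # do not mutate the caller's list
    for i in range(len(loads)):
        while loads[i] > capacity:
            target = min(range(len(loads)), key=lambda j: loads[j])
            if loads[target] >= capacity:   # every node is full: stop
                return loads
            loads[i] -= 1                   # shift one job to the idlest node
            loads[target] += 1
    return loads

print(rebalance([9, 1, 2], capacity=5))  # → [5, 4, 3]
```

Each decision reads only the present `loads` vector, so the sketch matches the "no previous state" property; a static scheme, by contrast, would fix the assignment in advance.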
International Journal of Innovative Technology and Exploring Engineering, 2020
This publication discusses state-of-the-art strategies in high-performance energy-aware cloud (HPEAC) computing, in particular the recognition and categorization of systems and devices, optimization methodologies, and energy/power control techniques. System types include single machines, clusters, networks, and clouds, while CPUs, GPUs, multiprocessors, and hybrid systems are the known device types. Optimization objectives incorporate multiple combinations of metrics, such as execution time, energy consumption, and temperature, under constraints that limit power/energy consumption. Control measures usually involve scheduling policies, frequency-based policies (DVFS, DFS, DCT), programmatic APIs for limiting power consumption (such as Intel RAPL and NVIDIA NVML), application standardization, and hybrid techniques. We address energy/power management software and APIs, as well as methods and conditions in modern HPEAC systems for forecasting and/or simulating p...
2013 3rd IEEE International Advance Computing Conference (IACC), 2013
This paper describes a new heuristic algorithm for allocating n tasks on p processors, named the ZVRS master-slave parallel task-allocating algorithm using RR scheduling. This parallel task allocation is implemented on a master-slave system. Task allocation to slave processors using FCFS scheduling has already been presented, as has an improved master-slave parallel task allocation in which task groups are arranged in descending order of cost, placed in a queue using FCFS scheduling, and then assigned to the slave processors. This paper presents the ZVRS master-slave parallel task-allocating algorithm using RR scheduling: first, task groups are arranged in descending order of their costs; then these task groups are arranged in a queue using RR scheduling; after that, the master processor assigns the task groups to the slave processors. The new algorithm shows the advantage of consuming less time and achieving better processor utilization than the previous algorithms; it also improves efficiency.
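One plausible reading of the allocation order above is: sort the task groups by cost, then let the master hand them to the slave processors in round-robin turns. This is a sketch under that assumption; the function name and the dict-based result are illustrative, not the paper's code.

```python
# Illustrative sketch of cost-ordered, round-robin master-slave allocation.
# Assumption: "RR scheduling" here means the master cycles through the
# slave processors in turn when dispatching the sorted task groups.
def zvrs_allocate(task_costs, num_slaves):
    """Return {slave_id: [task-group costs]} after round-robin allocation
    of the cost-sorted task groups."""
    ordered = sorted(task_costs, reverse=True)     # largest groups first
    assignment = {s: [] for s in range(num_slaves)}
    for i, cost in enumerate(ordered):
        assignment[i % num_slaves].append(cost)    # master -> next slave
    return assignment

alloc = zvrs_allocate([4, 9, 2, 7, 5, 1], num_slaves=3)
print(alloc)  # {0: [9, 4], 1: [7, 2], 2: [5, 1]}
```

Sorting the costly groups first tends to even out the total cost per slave, which is consistent with the improved utilization the abstract claims.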
Although intensive work has been done in the area of load balancing, the measure of success of load balancing is the net execution time achieved by applying the load-balancing algorithms. This paper deals with the problem of load-balancing conditions of ...
Cloud computing is the buzzword in the ICT sector for high-performance computing, and it continues grabbing headlines as one of the biggest technology trends gains momentum. The cloud offers many benefits to businesses and individuals, such as high performance, convenience, and cost savings, so they increasingly choose cloud services to stay competitive. When the focus is only on the capabilities of the cloud, the environmental impacts of the technology are easily forgotten. In the business world, cloud computing contributes to the green computing movement through energy and resource efficiency. As energy costs increase while availability dwindles, it is necessary to focus on optimizing energy efficiency while maintaining high service-level performance, rather than only optimizing data center resource management. This paper highlights the impact of a Green Cloud computing model that accomplishes not just efficient processing and utilisation of computing infras...
Papers by Dr. Shailesh Saxena