To support new-generation devices and technologies, cloud computing is relied upon to handle bulky data. These devices generate large volumes of data, and processing such big data requires cloud servers that can scale their computational capacity on demand. On the other hand, this computation demands a substantial power supply and cooling system, which increases power consumption and the emission of harmful gases. Achieving green computing therefore requires reducing the power consumption of the computational cloud. In this context, we found that VM (virtual machine) workload scheduling can be a good strategy for utilizing computational resources efficiently and reducing the power consumption of cloud servers. Physical machines host a number of virtual machines (VMs), which handle the workload submitted for processing. If the resources are better utilized, a larger number of jobs can be processed on fewer VMs; additionally, idle machines can be turned off to reduce power consumption. The proposed work is therefore motivated to apply VM scheduling techniques to achieve green computing. Recent literature identifies two kinds of VM scheduling approaches: active and proactive. The proactive technique is more effective than active approaches because it has prior knowledge of the workload on each VM. So, in this paper we propose a green-cloud predictive model for VM workloads using unsupervised learning (i.e., clustering) algorithms such as K-Means, K-Medoids, Fuzzy C-Means (FCM), and Self-Organizing Map (SOM) to predict future workloads for VM scheduling, and we determine which of these clustering algorithms is most efficient for workload prediction from a green-computing perspective. The efficiency of the clustering-based prediction is measured on parameters such as accuracy and error rate.
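The clustering-based prediction the abstract describes can be sketched as follows. This is a minimal, illustrative K-Means (one of the four algorithms compared), clustering historical (CPU, memory) utilisation samples and assigning a new workload sample to its nearest load class; the data shape and function names are assumptions, not the paper's implementation.

```python
import random

def kmeans(points, k, iters=50):
    # points: list of (cpu, memory) utilisation samples in [0, 1]
    centroids = random.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: (p[0] - centroids[i][0]) ** 2
                                  + (p[1] - centroids[i][1]) ** 2)
            clusters[idx].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids

def predict_class(centroids, sample):
    # predict the load class of a new workload sample
    return min(range(len(centroids)),
               key=lambda i: (sample[0] - centroids[i][0]) ** 2
                           + (sample[1] - centroids[i][1]) ** 2)
```

A proactive scheduler would then map each predicted load class to a VM placement decision before the demand actually arrives.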
2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART), 2021
Cloud is the most widely recognized computing environment due to the broad availability of its services to users. Cloud data centers have adopted virtualization of resources, technically known as virtual machines (VMs), for effective and efficient computation of services. However, the power consumption of cloud data centers (CDCs) is highlighted as one of the major issues degrading the quality of cloud computing services, because the large number of electronic resources in a CDC, such as processing nodes, servers, disk storage, and networking nodes (i.e., switches/routers), consume a high degree of energy during computation. Energy is also consumed by cooling plants, since the data center produces a huge amount of heat. Due to this high energy consumption, cloud providers pay a high cost for computation, and it also contributes additional carbon emissions to the atmosphere. Reducing energy consumption and CO2 emissions is a major challenge in turning cloud computing into green computing. So, our main focus in this paper is to develop an energy-saving approach for cloud computing based on resource management. Because users' service requests vary, the demand for resources also varies during computation: the volume of requests is not the same across all 24 hours of a day. During some periods it is very low, and many resources then sit idle while still consuming a fixed amount of energy, which is wasted. The proposed ESACR approach therefore first defines active (required) and idle (non-required) resources according to the users' service requests at any instant; it then switches the idle resources into an energy-saving mode (i.e., turns them OFF) until they are required again to serve users' requests. The energy saving increases when, at any instant, all unused resources are placed in OFF mode.
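The active/idle partitioning at the heart of this ON/OFF idea can be sketched in a few lines. This is an illustrative model only: the host representation, the fixed idle draw, and the function names are assumptions, not the ESACR implementation.

```python
IDLE_POWER_W = 100  # assumed fixed idle power draw per host, in watts

def partition_hosts(hosts, demand):
    """Split hosts into active (required) and idle (non-required).

    hosts: list of host identifiers; demand: number of hosts the
    current users' requests require."""
    active = hosts[:demand]
    idle = hosts[demand:]
    return active, idle

def energy_saved_per_hour(idle_hosts):
    # turning OFF every idle host avoids its fixed idle draw
    return len(idle_hosts) * IDLE_POWER_W  # watt-hours saved per hour
```

Re-evaluating the partition at each instance, as the abstract describes, turns this static split into the dynamic ON/OFF scheme.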
International Journal of Mathematical Sciences and Computing, 2021
Cloud computing is a widely accepted computing environment, and its services are widely available. However, energy consumption is one of the major issues of cloud computing from a green-computing perspective, because many electronic resources, such as processing and storage devices at both client and server sites and network devices like switches and routers, are the main consumers of energy in the cloud, and additional power is required during computation to cool the IT load. Due to this high consumption, cloud resources incur a high energy cost during cloud service activities and contribute additional carbon emissions to the atmosphere. These two issues have inspired cloud companies to develop renewable cloud sustainability regulations to control both the energy cost and the rate of CO2 emission. The main purpose of this paper is to develop a green computing environment by saving the energy of cloud resources, using a specific approach that identifies the computing resources required during the computation of cloud services. Only the required computing resources remain ON (working state), and the rest are switched OFF (sleep/hibernate state) to reduce energy use in the cloud data centers. This approach is expected to be more efficient than other available approaches based on cloud service scheduling, migration, or virtualization of services in the cloud network. It reduces the cloud data center's energy usage by applying a power management scheme (ON/OFF) to computing resources. The proposed approach helps convert cloud computing into green computing by identifying the appropriate number of cloud computing resources, such as processing nodes, servers, disks, and switches/routers, during any service computation on the cloud, thereby addressing energy saving and environmental impact.
2019 8th International Conference System Modeling and Advancement in Research Trends (SMART), 2019
In the current scenario, the demand for high-performance computing systems increases day by day to achieve maximum computation in minimum time. The rapid growth of the Internet and Internet-based services has increased interest in network-based, on-demand computing systems such as cloud computing. High-performance servers are deployed in large quantities for cloud computing in the form of data centers, through which many different Internet services are provided to cloud users in a very smooth and efficient manner. A data center is a large distributed system comprising a huge number of computing servers connected by an efficient network, so the energy consumption of such data centers is enormously high. Not only is the maintenance of these data centers exorbitant, it is also socially harmful: high energy costs and immense carbon footprints arise because the servers need a substantial amount of electricity both for computation and for cooling. As the cost of energy increases and its availability decreases, the focus should shift toward optimizing data center servers for best performance together with policies of lower energy consumption, balancing the level of service performance against the social impact. So, in this paper we propose an energy-aware consolidation technique for cloud data centers based on prediction of future client requests, which increases the utilization of computing servers according to the users'/clients' demand for cloud resources while containing power consumption in the cloud.
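The prediction-driven consolidation described above can be illustrated with a minimal sketch: forecast the next interval's request volume from recent history, then keep only enough servers ON to serve it. The moving-average window and per-server capacity are assumed values for illustration, not the paper's method.

```python
import math

def predict_requests(history, window=3):
    # simple moving average over the most recent intervals
    recent = history[-window:]
    return sum(recent) / len(recent)

def servers_needed(predicted_requests, capacity_per_server=50):
    # consolidate: smallest number of servers that covers the forecast
    return max(1, math.ceil(predicted_requests / capacity_per_server))
```

All servers beyond `servers_needed(...)` would then be candidates for an energy-saving state until demand rises again.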
Scheduling of jobs and resources are two essential features of the Grid computing infrastructure. To improve the global throughput of these environments, an effective and efficient task-allocation algorithm is fundamentally required. Grid computation, however, poses challenges such as heterogeneity, scalability, and adaptability. To handle these challenges, we used a dynamic hierarchical model to represent the architecture of the grid computing system in order to manage jobs and resources. This model defines the hierarchical structure of physical and virtual computing elements, and it also supports heterogeneity and scalability of computing elements. In this paper, we propose a DRR (Dynamic Round Robin) job scheduling algorithm for grid computing systems with preemptable job allocation (Round Robin), using a dynamic time quantum set according to the priority of the jobs, which changes with every round of job execution. The main benefit of this id...
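The dynamic-quantum idea can be sketched as follows: a round-robin scheduler whose time quantum is recomputed for each job in each round from its priority. The scaling rule (quantum proportional to priority) and the job representation are assumptions for illustration, not the paper's exact DRR formula.

```python
from collections import deque

def drr_schedule(jobs, base_quantum=4):
    """Dynamic round-robin sketch.

    jobs: list of (name, burst_time, priority); a higher priority earns a
    larger quantum. Returns job names in the order they finish."""
    queue = deque(jobs)
    finished = []
    while queue:
        # one round: each queued job gets one (priority-scaled) quantum
        for _ in range(len(queue)):
            name, burst, prio = queue.popleft()
            quantum = base_quantum * prio  # recomputed every round
            burst -= quantum
            if burst <= 0:
                finished.append(name)       # job completes this round
            else:
                queue.append((name, burst, prio))  # preempted, re-queued
    return finished
```

With `base_quantum=4`, a priority-2 job with an 8-unit burst finishes in its first round, while a priority-1 job of the same length needs two rounds.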
CPU utilization is an important aspect of distributed and grid computing environments. Computing nodes can become overloaded, i.e., hold more jobs than their capacity so that no further jobs can be assigned to them; in that case, load from the overloaded node can be shifted to other nodes that are underloaded (i.e., doing little work or sitting idle). For this, load balancing is required: the workload is redistributed among the computing nodes of the system, which improves job response time and CPU utilization. Dynamic load-balancing schemes make decisions based on the current state of the system; they do not require the previous state of the system to make load-balancing decisions. In this paper, we present an analytical comparison of various dynamic load-balancing schemes in distributed and grid computing environments. This comparison depicts which scheme is better in a distributed environment and which is better in grid ...
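A simple threshold-based decision of the kind such comparisons cover can be sketched like this: using only the current state, jobs are moved one at a time from overloaded nodes to the currently lightest node. The threshold value and data shapes are assumptions for illustration.

```python
def rebalance(loads, threshold=10):
    """loads: dict mapping node name -> current job count.

    Repeatedly moves one job from any node above the threshold to the
    least-loaded node, until no node exceeds the threshold or no target
    below the threshold remains. Returns (new_loads, moves_made)."""
    loads = dict(loads)  # work on a copy; decision uses current state only
    moved = []
    changed = True
    while changed:
        changed = False
        for node in list(loads):
            if loads[node] > threshold:
                target = min(loads, key=loads.get)  # lightest node right now
                if target != node and loads[target] < threshold:
                    loads[node] -= 1
                    loads[target] += 1
                    moved.append((node, target))
                    changed = True
    return loads, moved
```

Note the scheme is stateless between invocations, which is precisely the property the abstract highlights for dynamic schemes.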
International Journal of Advanced Computer Science and Applications, 2021
The rapid expansion of communication and computational technology provides the opportunity to deal with bulky, dynamic data. The classical computing style is not very effective for such mission-critical data analysis and processing; therefore, cloud computing has become popular for addressing and dealing with this data. Cloud computing involves a large computational and network infrastructure that requires a significant amount of power and generates a carbon footprint (CO2). In this context, the cloud's energy consumption can be minimized by controlling and switching off idle machines. Therefore, in this paper, we propose a proactive virtual machine (VM) scheduling technique that can deal with frequent VM migration and minimize the energy consumption of the cloud using unsupervised learning techniques. The main objective of the proposed work is to reduce the energy consumption of cloud data centers through effective utilization of cloud resources by predicting the future demand for resources. In this context, four different clustering algorithms, namely K-Means, SOM (Self-Organizing Map), FCM (Fuzzy C-Means), and K-Medoids, are used to develop the proposed proactive VM scheduling and to find which clustering algorithm is best suited to reducing energy use through proactive VM scheduling. This predictive, load-aware VM scheduling technique is evaluated and simulated using the CloudSim simulator. To demonstrate the effectiveness of the proposed scheduling technique, the 29-day workload trace released by Google in 2019 is used. The experimental outcomes are summarized in different performance metrics, such as energy consumed and average processing time. Finally, we conclude the work and suggest future research directions.
International Journal of Innovative Technology and Exploring Engineering, 2020
This publication discusses state-of-the-art strategies in high-performance energy-aware cloud (HPEAC) computing, in particular the recognition and categorization of systems and devices, optimization methodologies, and energy/power control techniques. System types include single machines, clusters, networks, and clouds, while CPUs, GPUs, multiprocessors, and hybrid systems are the known device types. Optimization objectives incorporate multiple combinations of metrics, such as execution time, energy consumption, and temperature, under the constraint of limiting power/energy consumption. Control measures usually involve scheduling policies, frequency-based policies (DVFS, DFS, DCT), programmatic APIs for limiting power consumption (such as Intel RAPL and NVIDIA NVML), standardization of applications, and hybrid techniques. We address energy/power management software and APIs as well as methods and conditions in modern HPEAC systems for forecasting and/or simulating p...
2013 3rd IEEE International Advance Computing Conference (IACC), 2013
This paper describes a new heuristic algorithm, named ZVRS, for allocating n tasks on p processors: a master-slave parallel task-allocating algorithm using RR (round-robin) scheduling. The parallel task allocation is implemented on a master-slave system. Task allocation on slave processors using FCFS scheduling has already been presented, as has an improved master-slave parallel task allocation in which task groups are arranged in descending order of cost, queued using FCFS scheduling, and then assigned to slave processors. This paper presents the ZVRS master-slave parallel task-allocating algorithm using RR scheduling: first, the task groups are arranged in descending order of their costs; then these task groups are arranged in a queue using RR scheduling; after that, the master processor assigns the task groups to the slave processors. The new algorithm shows the advantage that the system consumes less time and achieves better processor utilization than the previous algorithms; it also improves efficiency.
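The allocation order described above can be sketched in a few lines: sort task groups by cost in descending order, then let the master deal them out to the slave processors in round-robin fashion. The data shapes and function name are assumptions for illustration, not the ZVRS implementation.

```python
def allocate_round_robin(task_groups, num_slaves):
    """task_groups: list of (group_id, cost) pairs.

    Returns a dict mapping slave index -> list of group ids, assigned
    round-robin after sorting by cost (highest cost first)."""
    ordered = sorted(task_groups, key=lambda g: g[1], reverse=True)
    slaves = {s: [] for s in range(num_slaves)}
    for i, (gid, _cost) in enumerate(ordered):
        slaves[i % num_slaves].append(gid)  # master deals groups in turn
    return slaves
```

Dealing the costliest groups first tends to even out the total cost per slave, which is the processor-utilization benefit the abstract claims.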
Abstract: Although intensive work has been done in the area of load balancing, the measure of success of load balancing is the net execution time achieved by applying the load-balancing algorithms. This paper deals with the problem of load balancing conditions of ...
Cloud computing is the buzzword in the ICT sector for high-performance computing, and it continues to grab headlines as one of the biggest technology trends gaining momentum. The cloud provides many benefits to businesses and individuals, such as high performance, convenience, and cost savings, so they increasingly choose cloud services to stay competitive. When the focus rests only on the capabilities of the cloud, the environmental impacts of the technology are easily forgotten. In the business world, cloud computing contributes to the green computing movement through energy and resource efficiency. As energy costs increase while availability dwindles, the focus should be on optimisation of energy efficiency while maintaining high service-level performance, rather than only optimising data centre resource management. This paper highlights the impact of a Green Cloud computing model that accomplishes not just efficient processing and utilisation of computing infras...
International Journal of Innovative Research in Engineering & Management
With growing awareness of the environmental impact of computing, green technology is gaining increasing importance. Green computing refers to the practice of environmentally responsible and efficient use of computing resources while maintaining economic viability and improving performance in an eco-friendly way. It is an effective field of study in which the disposal, recycling, and manufacturing of computers and electronic devices are taken into consideration. The goal of green computing is to lower the use of hazardous materials, maximize energy efficiency, and popularize the biodegradability or recyclability of outdated products and factory waste. Cloud computing has become a powerful trend in the development of ICT (Information and Communication Technology) services. Demand for cloud computing is continually growing, which shifts its scope toward green cloud computing, aiming to reduce energy consumption in cloud computing while maintaining good performance. We need green cloud computing solutions that not only save energy but also reduce operational costs. We present an architectural framework and principles that provide efficient green enhancements within a scalable cloud computing architecture, with a resource provisioning and allocation algorithm for energy-efficient management of cloud computing environments to improve the energy efficiency of the data centre. In this paper, we focus on the analysis of computing in a green environment.
International Journal of Computer Applications, 2015
Application areas of multi-cluster grids increase day by day because of the seamless and scalable access they give to wide-area distributed resources in a grid environment. A multi-cluster grid allows the sharing, selection, and aggregation of geographically distributed resources across heterogeneous locations. However, some issues arise in a multi-cluster grid due to its environment; scheduling a job on the most suitable computational resource of the grid is one of the most important. To handle this issue, a new scheduling approach is proposed in this paper based on the priority of the jobs' completion deadlines, because in real-time scenarios job execution is sometimes valuable only if it completes within the deadline defined by the user under the given working circumstances. This helps schedule jobs on the computational environment of a multi-cluster grid in a very effective manner.
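Deadline-priority scheduling of this kind can be sketched as an earliest-deadline-first ordering on a single resource: jobs with the nearest completion deadline are dispatched first, and jobs whose deadline cannot be met are rejected. The single-resource simplification, field names, and rejection rule are assumptions for illustration, not the paper's multi-cluster algorithm.

```python
def schedule_by_deadline(jobs, now=0):
    """jobs: list of (job_id, runtime, deadline).

    Runs jobs back to back on one resource in order of nearest deadline;
    returns (accepted, rejected) job-id lists."""
    accepted, rejected = [], []
    t = now
    for job_id, runtime, deadline in sorted(jobs, key=lambda j: j[2]):
        if t + runtime <= deadline:
            accepted.append(job_id)  # job can finish within its deadline
            t += runtime
        else:
            rejected.append(job_id)  # deadline unreachable: no value in running
    return accepted, rejected
```

In a multi-cluster setting, the same deadline-priority ordering would be applied per cluster, with the grid scheduler choosing the cluster whose queue lets the job meet its deadline.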
In order to support the different new generation equipment and technologies, cloud computing is d... more In order to support the different new generation equipment and technologies, cloud computing is depends to deal with the bulky data. Because a rich amount of data is generated using these devices and processing such big data need the cloud servers which can scale the computational ability according to demand. On the other hands to perform computation we need huge power supply and cooling system that increases the power consumption and emission of harmful gases. Thus, need to achieve green computing by reducing power consumption of computational cloud. In this context, we found VM(virtual machine) workload scheduling can be a good strategy to efficiently utilize the computational resources and reducing power consumption of cloud server. Basically, the physical machines contain a number of virtual machines (VMs). These VMs are used to deal with the workload appeared for processing. If we better utilize the resources then we can process large number of jobs in less amount of VMs. Additionally, we can also turn off the ideal machines to reduce the power consumption. In this context the proposed work is motivated to work with VM scheduling techniques to achieve green computing. In recent literature we also identified that there are two kinds of VM scheduling approaches active and proactive. The proactive technique is more effective as compared to active approaches, due to prior knowledge of the workload on VM. So, in this paper we proposed green cloud predictive model for VMs workload using unsupervised learning (i.e clustering) like K-Mean, K-Medoid, Fuzzy C-Mean (FCM),Self-Organizing Map(SOM) to predict the future workload for VM's scheduling and find the efficient clustering among them for workload prediction in view of green computing. The efficiency of clustering-based prediction is measured on parameters like accuracy, error rate.
2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART), 2021
Cloud is most recognized environment of computing due to the widely availability of its services ... more Cloud is most recognized environment of computing due to the widely availability of its services to the users. Cloud data centers have adopted virtualization of resources technically known as virtual machines (VMs) for the effective and efficient computation of services. But power consumption through the cloud data centers (CDCs) highlighted as one of the major issues responsible to reduce the quality of services of Cloud computing. Because a large number of electronic resources like processing nodes, servers, disk storage, and networking nodes (i.e. switch/routers) in CDC consume a high degree of energy during computation. Energy is also consumed by cooling plants to cool the data centers as it produces a huge amount of heat (CO2) surrounding it. Due to the high energy consumption, the cloud providers pay a high cost of energy computation and it also contributes more carbon emissions to the atmosphere. Reducing energy consumption and CO2 emission is a major challenge in turning the computation from cloud computing to green computing. So, our main focus in this paper is to develop an energy-saving approach for cloud computing based on resource management. Due to variations in the request of services by the users, the demand for resources also varies during the computation. The amount of users’ request are not always same for all 24 hours in a day. Some period of time it is very low and at that period of time, many resources become idle and consume some fixed amount of energy that is a wastage of energy. In this manner, the proposed ESACR approach first define active (required) and idle (non-required) resources as per the users’ request for services at any instance; after that, it converts the idle resources into an energy-saving mode (i.e., turn OFF) unless it is not required for the service of users’ request. 
So that the saving of energy will increased, if at any instance all non-useable resources is define in OFF mode.
International Journal of Mathematical Sciences and Computing, 2021
Cloud computing is a widely acceptable computing environment, and its services are also widely av... more Cloud computing is a widely acceptable computing environment, and its services are also widely available. But the consumption of energy is one of the major issues of cloud computing as a green computing. Because many electronic resources like processing devices, storage devices in both client and server site and network computing devices like switches, routers are the main elements of energy consumption in cloud and during computation power are also required to cool the IT load in cloud computing. So due to the high consumption, cloud resources define the high energy cost during the service activities of cloud computing and contribute more carbon emissions to the atmosphere. These two issues inspired the cloud companies to develop such renewable cloud sustainability regulations to control the energy cost and the rate of CO 2 emission. The main purpose of this paper is to develop a green computing environment through saving the energy of cloud resources using the specific approach of identifying the requirement of computing resources during the computation of cloud services. Only required computing resources remain ON (working state), and the rest become OFF (sleep/hibernate state) to reduce the energy uses in the cloud data centers. This approach will be more efficient than other available approaches based on cloud service scheduling or migration and virtualization of services in the cloud network. It reduces the cloud data center's energy usages by applying a power management scheme (ON/OFF) on computing resources. The proposed approach helps to convert the cloud computing in green computing through identifying an appropriate number of cloud computing resources like processing nodes, servers, disks and switches/routers during any service computation on cloud to handle the energy-saving or environmental impact.
2019 8th International Conference System Modeling and Advancement in Research Trends (SMART), 2019
In the current scenario the demand for high performance computing system increases day by day to ... more In the current scenario the demand for high performance computing system increases day by day to achieve maximum computation in minimum time. Rapid growth of Internet or Internet based services, increased the interest in network based computing or on-demand computing systems like cloud computing system. High computing servers are being deployed in large quantity for cloud computing in form of data Centers through which many different services on internet are provide to the cloud users in a very smooth and efficient manner. A large distributed system is described as a data center that includes a huge quantity of computing servers connected by an efficient network. So the consumption of energy in such data centers is enormously very high. Not only the maintenance of the data centers are too exorbitant, but also socially very harmful. High vitality costs and immense carbon footprints are brought in these data centers because the servers needed a substantial amount of electricity for their computation as well as for their cooling. As cost of energy increases and availability decreases, focus should be shifted towards the optimization of data centre servers for best performance alone with the policies of less energy consumption to justify the level of service performance with social impact. So in this paper we proposed energy aware consolidation technique for cloud data centers based on prediction of future client's requests to increase the utilization of computing servers as per request of users/clients which associated some demand of cloud resources for maintain the power consumption in cloud.
Scheduling of jobs and resources are two essential features of the Grid computing infrastructure.... more Scheduling of jobs and resources are two essential features of the Grid computing infrastructure. To improve the global throughput of these environments, an effective and efficient task allocation algorithm is fundamentally required. However for Grid computation we have some challenges like heterogeneity, scalability and adaptability. So to handle these challenges we used a dynamic hierarchical model to represent the architecture of the grid computing system in order to manage jobs and resources. This model was characterized asit defines the hierarchal structure of physical and virtual computing elements and it also supports heterogeneity and scalability of computing elements. In this paper, we proposed a DRR (Dynamic Round Robin) job scheduling algorithm for grid computing system with preemptable jobs allocation (Round Robin) using dynamic time quantum according to the priority of the jobs, which is to be change with every round of execution of the jobs. The main benefit of this id...
CPU utilization is an important aspect of distributed and grid computing environment. The computi... more CPU utilization is an important aspect of distributed and grid computing environment. The computing nodes can be overloaded, i.e., they can have more jobs than their capacity such that no more jobs can be associated to them and in that case, the load from the overloaded node can be shifted to other nodes those are under loaded(i.e. doing little work or sitting idle). For this, load balancing is required. In load balancing the workload is redistributed among the computing nodes of the system. This improves the job response time and CPU utilization. Dynamic load balancing schemes operate on the decisions that based on the current state of the system. They do not require the previous state of the system for making the load balancing decisions. In this paper, we present an analytical comparison of the various dynamic load balancing schemes in distributed and grid computing environment. This comparison depicts which scheme is better in distributed environment and which is better in grid ...
International Journal of Advanced Computer Science and Applications, 2021
The rapid expansion of communication and computational technology provides us the opportunity to ... more The rapid expansion of communication and computational technology provides us the opportunity to deal with the bulk nature of dynamic data. The classical computing style is not much effective for such mission-critical data analysis and processing. Therefore, cloud computing is become popular for addressing and dealing with data. Cloud computing involves a large computational and network infrastructure that requires a significant amount of power and generates carbon footprints (CO 2). In this context, we can minimize the cloud's energy consumption by controlling and switching off ideal machines. Therefore, in this paper, we propose a proactive virtual machine (VM) scheduling technique that can deal with frequent migration of VMs and minimize the energy consumption of the cloud using unsupervised learning techniques. The main objective of the proposed work is to reduce the energy consumption of cloud datacenters through effective utilization of cloud resources by predicting the future demand of resources. In this context four different clustering algorithms, namely K-Means, SOM (Self Organizing Map), FCM (Fuzzy C Means), and K-Mediod are used to develop the proposed proactive VM scheduling and find which type of clustering algorithm is best suitable for reducing the energy uses through proactive VM scheduling. This predictive load-aware VM scheduling technique is evaluated and simulated using the Cloud-Sim simulator. In order to demonstrate the effectiveness of the proposed scheduling technique, the workload trace of 29 days released by Google in 2019 is used. The experimental outcomes are summarized in different performance matrices, such as the energy consumed and the average processing time. Finally, by concluding the efforts made, we also suggest future research directions.
International Journal of Innovative Technology and Exploring Engineering, 2020
This publication discusses high-performance energyaware cloud (HPEAC) computing state-of-the-art ... more This publication discusses high-performance energyaware cloud (HPEAC) computing state-of-the-art strategies to acknowledgement and categorization of systems and devices, optimization methodologies, and energy / power control techniques in particular. System types involve single machines, clusters, networks, and clouds, while CPUs, GPUs, multiprocessors, and hybrid systems are known to be device types. Objective of Optimization incorporates multiple calculation blends, such as “execution time”, “consumption of energy”& “temperature” with the consideration of limiting power/energy consumption. Control measures usually involve scheduling policies, frequency based policies (DVFS, DFS, DCT), programmatic API’s for limiting the power consumptions (such as” Intel- RAPL”,” NVIDIA- NVML”), standardization of applications, and hybrid techniques. We address energy / power management software and APIs as well as methods and conditions in modern HPEACC systems for forecasting and/or simulating p...
2013 3rd IEEE International Advance Computing Conference (IACC), 2013
This paper describes a new heuristic algorithm, named ZVRS, for allocating n tasks on p processors: a master-slave parallel task-allocation algorithm using round-robin (RR) scheduling. This parallel task allocation is implemented on a master-slave system. Task allocation on slave processors has previously been presented using FCFS scheduling; an improved master-slave variant has also been presented, in which task groups are arranged in descending order of their cost, queued using FCFS scheduling, and then assigned to slave processors. This paper presents the ZVRS master-slave parallel task-allocating algorithm using RR scheduling. First, task groups are arranged in descending order of their costs; then these task groups are arranged in a queue using RR scheduling; finally, the master processor assigns the task groups to the slave processors. The new algorithm consumes less time and achieves better processor utilization than the previous algorithms. It also improves efficiency.
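The allocation steps described above (sort task groups by descending cost, then deal them to slave processors in round-robin order) can be sketched as follows. This is an illustrative reconstruction, not the paper's ZVRS code; the costs and slave count are hypothetical.

```python
from collections import deque

def rr_assign(task_costs, num_slaves):
    """Sort task groups by descending cost, queue them, and assign them to
    slave processors in round-robin order (sketch of the described scheme)."""
    queue = deque(sorted(task_costs, reverse=True))  # costliest groups first
    slaves = [[] for _ in range(num_slaves)]
    i = 0
    while queue:
        slaves[i % num_slaves].append(queue.popleft())  # master deals in RR
        i += 1
    return slaves

# Hypothetical task-group costs dealt to 3 slave processors.
print(rr_assign([4, 9, 2, 7, 5, 1], 3))  # -> [[9, 4], [7, 2], [5, 1]]
```

Sorting before dealing keeps the per-slave cost totals roughly balanced, which is the intuition behind the claimed improvement in processor utilization.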
Although intensive work has been done in the area of load balancing, the measure of success of load balancing is the net execution time achieved by applying the load-balancing algorithms. This paper deals with the problem of load-balancing conditions of ...
Cloud computing is the buzzword in the ICT sector for high-performance computing. It continues to grab headlines as one of the biggest technology trends gaining momentum. The cloud provides many benefits to businesses and individuals, such as high performance, convenience, and cost savings, so they increasingly choose cloud services to stay competitive. Our focus on the capabilities of the cloud, however, makes us forget the environmental impacts of the technology. In the business world, cloud computing contributes to the green-computing movement through energy and resource efficiency. As energy costs increase while availability dwindles, the focus must shift to optimizing energy efficiency while maintaining high service-level performance, rather than only optimizing data-centre resource management. This paper highlights the impact of a green cloud computing model that accomplishes not just efficient processing and utilisation of computing infras...
International Journal of Innovative Research in Engineering & Management
With growing awareness of the environmental impact of computing, green technology is gaining increasing importance. Green computing refers to the practice of environmentally responsible and efficient use of computing resources while maintaining economic viability and improving performance in an eco-friendly way. Green computing is a field of study that takes the disposal, recycling, and manufacturing of computers and electronic devices into consideration. The goal of green computing is to lower the use of hazardous materials, maximize energy efficiency, and popularize the biodegradability or recyclability of outdated products and factory waste. Cloud computing has become a powerful trend in the development of ICT (Information and Communication Technologies) services. Demand for cloud computing is continually growing, which shifts attention to green cloud computing, the aim of which is to reduce energy consumption in cloud computing while maintaining good performance. We need green cloud computing solutions that not only save energy but also reduce operational costs. We discuss an architectural framework and principles that provide efficient green enhancements within a scalable cloud computing architecture, with a resource provisioning and allocation algorithm for energy-efficient management of cloud computing environments, to improve the energy efficiency of the data centre. In this paper, we focus on the analysis of computing in a green environment.
International Journal of Computer Applications, 2015
Application areas of multi-cluster grids increase day by day because of the seamless and scalable access to wide-area distributed resources in the grid environment. A multi-cluster grid allows the sharing, selection, and aggregation of resources over heterogeneous, geographically distributed locations. However, some issues arise in a multi-cluster grid due to its environment, and scheduling a job on the most suitable computational resource of the grid is one of the most important. To handle this issue, a new scheduling approach is proposed in this paper, based on the priority of the completion deadline of the jobs, since job execution sometimes matters only when it completes within the deadline defined by the user according to the working circumstances, as in a real-time scenario. This helps schedule jobs on the computational environment of a multi-cluster grid in a very effective manner.
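The core of the deadline-priority idea above is to order jobs by their user-defined completion deadline, earliest first (as in earliest-deadline-first scheduling). A minimal sketch, with entirely hypothetical job names and deadlines:

```python
def deadline_schedule(jobs):
    """Order jobs by user-defined completion deadline, earliest first
    (a sketch of deadline-priority scheduling on a multi-cluster grid)."""
    return sorted(jobs, key=lambda j: j["deadline"])

# Hypothetical jobs with deadlines in seconds from now.
jobs = [{"name": "render", "deadline": 300},
        {"name": "backup", "deadline": 60},
        {"name": "etl",    "deadline": 120}]

print([j["name"] for j in deadline_schedule(jobs)])  # -> ['backup', 'etl', 'render']
```

A full grid scheduler would additionally match each job against resource availability and estimated runtimes before dispatch; this sketch shows only the ordering policy.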
Papers by Dr. Shailesh Saxena