Figure 1 - uploaded by Iñigo Goiri
Simulator power consumption validation.  

Source publication
Conference Paper
Full-text available
Ever since virtualization was introduced in data centers, it has been opening new opportunities for resource management. Now, it is not just used as a tool for consolidating underused nodes and saving power; it also allows new solutions to well-known challenges, such as fault tolerance or heterogeneity management. Virtualization helps to encapsulate W...

Context in source publication

Context 1
... the instantaneous error is less than 6.23 W in absolute terms, which represents a relative error of 0.02%. Figure 1 shows this validation by comparing the measured consumption with the simulated one. ...
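As a rough illustration of the validation metric quoted above, the following sketch computes absolute and relative errors between a measured and a simulated power trace. The readings are made-up values for illustration, not the paper's data.

```python
def power_errors(measured, simulated):
    """Return (max absolute error in W, mean relative error) for two traces."""
    abs_errors = [abs(m - s) for m, s in zip(measured, simulated)]
    rel_errors = [abs(m - s) / m for m, s in zip(measured, simulated) if m > 0]
    return max(abs_errors), sum(rel_errors) / len(rel_errors)

measured  = [230.0, 245.5, 260.2, 251.0]   # W, illustrative power-meter readings
simulated = [229.1, 246.0, 259.0, 250.4]   # W, illustrative simulator output
max_abs, mean_rel = power_errors(measured, simulated)
print(f"max abs error: {max_abs:.2f} W, mean rel error: {mean_rel:.4%}")
```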

Similar publications

Conference Paper
Full-text available
The increasing cost of power consumption in data centers, and the corresponding environmental threats, have raised a growing demand for energy-efficient computing. Despite its importance, little work has been done on introducing models to manage consumption efficiently. With the growing use of Cloud Computing, this issue becomes crucial. In a Clo...
Conference Paper
Full-text available
High Performance Computing (HPC) data centers are becoming increasingly dense; the associated power density and energy consumption of their operation are increasing. Up to half of the total energy is attributed to cooling the data center; greening data center operations to reduce both computing and cooling energy is imperative. To this effect: i...
Article
Full-text available
In this paper, we present an experimental study of job scheduling algorithms in Infrastructure-as-a-Service clouds. We analyze different system service levels, which are distinguished by the amount of computing power a customer is guaranteed to receive within a time frame and the price per processing time unit. We analyze different scenarios...
Article
Full-text available
The use of cloud computing data centers is growing rapidly to meet the tremendous increase in demand for high-performance computing (HPC), storage, and networking resources for business and scientific applications. Virtual machine (VM) consolidation involves the live migration of VMs to run on fewer physical servers, thus allowing more...
Preprint
Full-text available
OpenMP is the de facto API for parallel programming in HPC applications. These programs are often computed in data centers, where energy consumption is a major issue. Whereas previous work has focused almost entirely on performance, we here analyse aspects of OpenMP from an energy consumption perspective. This analysis is accomplished by executing...

Citations

... Although the SLLC of a CMP is accessible to all cores, its large aggregate capacity alone cannot guarantee optimal performance without good management strategies. This is especially true when the cores are running a heterogeneous mix of threads that have diverse requirements for SLLC resources, which becomes increasingly common with the widespread deployment of CMPs in complex applications, such as cloud computing [1]. Because of their vital role in minimizing the expensive memory traffic, SLLC capacity management schemes have been extensively studied for a long time by the research community. ...
... But a large aggregate capacity alone does not guarantee optimal performance without an effective SLLC management strategy. This is especially true when the cores are running a heterogeneous mix of applications/threads, as is increasingly common with the widespread deployment of CMPs in complex application environments such as virtual machines and cloud computing [2]. ...
Article
Full-text available
Most chip-multiprocessors nowadays adopt a large shared last-level cache (SLLC). This paper is motivated by our analysis and evaluation of state-of-the-art cache management proposals which reveal a common weakness. That is, the existing alternative replacement policies and cache partitioning schemes, targeted at optimizing either locality or utility of co-scheduled threads, cannot deliver consistently the best performance under a variety of workloads. Therefore, we propose a novel adaptive scheme, called CLU, to interactively co-optimize the locality and utility of co-scheduled threads in thread-aware SLLC capacity management. CLU employs lightweight monitors to dynamically profile the LRU (least recently used) and BIP (bimodal insertion policy) hit curves of individual threads on runtime, enabling the scheme to co-optimize the locality and utility of concurrent threads and thus adapt to more diverse workloads than the existing approaches. We provide results from extensive execution-driven simulation experiments to demonstrate the feasibility and efficacy of CLU over the existing approaches (TADIP, NUCACHE, TA-DRRIP, UCP, and PIPP).
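The CLU abstract above relies on profiling per-thread LRU hit curves at runtime. A classical way to obtain such a curve offline is via stack (reuse) distances; the sketch below shows the LRU half only (BIP profiling is omitted), and is a textbook construction, not CLU's hardware monitor.

```python
def lru_hit_curve(trace, max_ways):
    """hit_curve[w-1] = hits a w-way fully-associative LRU cache would see."""
    stack, hits = [], [0] * (max_ways + 1)
    for addr in trace:
        if addr in stack:
            depth = stack.index(addr) + 1    # stack distance, 1 = MRU
            if depth <= max_ways:
                # a hit at distance d is a hit for every capacity >= d ways
                for w in range(depth, max_ways + 1):
                    hits[w] += 1
            stack.remove(addr)
        stack.insert(0, addr)                # move address to MRU position
    return hits[1:]

print(lru_hit_curve(["A", "B", "A", "C", "B", "A"], 3))  # → [0, 1, 3]
```

The monotone shape of this curve is what lets a capacity manager estimate the marginal utility of giving a thread one more way.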
... In order to obtain variable energy efficiency assessments on the different nodes, four VMs were created, one on each node, with the characteristics described in Table 8. These VMs executed a script which used the stress workload generator [35] to increase the VM's CPU utilization from 0% to 100% in steps of 100 divided by the number of VCPUs. Therefore, 16 different CPU utilization levels were exercised in TestECO, 16 in TestECO2, 8 in TestECO3, and 4 in TestECO4. ...
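Reading the step rule as 100/#VCPUs, the utilization sweep described in this citation can be reproduced as below. The per-node VCPU counts are inferred from the reported step counts (16, 16, 8, 4); they are assumptions, not values taken from the cited paper's Table 8.

```python
def utilization_levels(vcpus):
    """Non-zero CPU utilization levels when stepping 0 -> 100% by 100/vcpus."""
    step = 100 / vcpus
    return [round(step * i, 2) for i in range(1, vcpus + 1)]

# Assumed VCPU counts per test node, inferred from the reported level counts
nodes = {"TestECO": 16, "TestECO2": 16, "TestECO3": 8, "TestECO4": 4}
for name, vcpus in nodes.items():
    levels = utilization_levels(vcpus)
    print(name, len(levels), "levels, step =", levels[0], "%")
```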
Article
The monitoring of QoS parameters in Services Computing, as well as in Clouds, has been a functionality provided by all contemporary systems. As the optimization of energy consumption becomes a major concern for system designers and administrators, it can be considered another QoS metric to be monitored. In this paper, we present a service framework that allows us to monitor the energy consumption of a Cloud infrastructure, calculate its energy efficiency, and evaluate the gathered data in order to put in place effective virtual machine (VM) management. In that context, a simulation scenario of an eco-driven VM placement policy resulted in a 14% improvement of the infrastructure's energy efficiency. In total, the proposed approaches and implementations have been validated against a testbed, producing very promising results regarding the prospect of energy efficiency as an important quality factor in Clouds.
... They generally support very simple metrics based on resource availability. There are however proposals to support fine-grain resource-level QoS guarantees on Cloud SLAs (Goiri et al. 2010). The problem is that different providers can support different SLAs. ...
... The presented policy is tested on top of a provider able to operate within a Cloud federation. As it is difficult to build such a testbed, we use a simulator (Goiri et al. 2010), where we have configured a provider with 100 nodes. However, as shown in Fig. 11, sometimes the customers' demand is higher. ...
Article
Full-text available
Resource provisioning in Cloud providers is a challenge because of the high variability of load over time. On the one hand, providers can serve most of the requests owning only a restricted amount of resources, but this forces them to reject customers during peak hours. On the other hand, valley hours incur under-utilization of the resources, which forces providers to increase their prices to be profitable. Federation overcomes these limitations and allows providers to dynamically outsource resources to others in response to demand variations. Furthermore, it allows providers with underused resources to rent them to other providers. Both techniques let the provider obtain more profit when used adequately. Federation of Cloud providers requires having a clear understanding of the consequences of each decision. In this paper, we present a characterization of providers operating in a federated Cloud which helps to choose the most convenient decision depending on the environment conditions. These include when to outsource to other providers, rent free resources to other providers (i.e., insourcing), or turn off unused nodes to save power. We characterize these decisions as a function of several parameters and implement a federated provider that
... There are many proposed definitions of cloud computing due to its growing popularity defining its characteristics. A Cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers [1]. ...
Article
Cloud computing is a recent innovation which provides various services on a usage-based payment model. The rapid expansion of data centers has triggered a dramatic increase in energy use, operational cost, and environmental impact in terms of carbon footprint. To reduce power consumption, it is necessary to consolidate the hosted workloads. In this paper, we present a Single Threshold technique for efficient consolidation of heterogeneous workloads. Our technique focuses on the energy consumption of the data center due to the heterogeneity of the workloads and also gives information about the SLA violations. The experimental results demonstrate that our technique is efficient for data centers consolidating heterogeneous workloads.
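A single-threshold consolidation pass of the kind this abstract describes can be sketched as follows: VMs are migrated off hosts whose total CPU utilization exceeds one fixed threshold onto the least-loaded host that can absorb them. The threshold value, host names, and loads are illustrative assumptions, not the paper's algorithm verbatim.

```python
THRESHOLD = 0.8  # assumed single utilization threshold (fraction of capacity)

def consolidate(hosts, threshold=THRESHOLD):
    """hosts: {name: [vm_load, ...]}; returns list of (vm_load, src, dst) moves."""
    migrations = []
    for src, vms in hosts.items():
        while sum(vms) > threshold and vms:
            vm = min(vms)  # move the smallest VM first
            # candidate targets: other hosts that stay under the threshold
            candidates = [(sum(loads), h) for h, loads in hosts.items()
                          if h != src and sum(loads) + vm <= threshold]
            if not candidates:
                break  # no feasible target: overload (SLA risk) stays on src
            _, dst = min(candidates)  # least-loaded feasible host
            vms.remove(vm)
            hosts[dst].append(vm)
            migrations.append((vm, src, dst))
    return migrations

hosts = {"h1": [0.5, 0.4], "h2": [0.2], "h3": [0.1]}
print(consolidate(hosts))  # → [(0.4, 'h1', 'h3')]
```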
... Although the SLLC of a CMP is accessible to all cores, its large aggregate capacity alone cannot guarantee optimal performance without good management strategies. This is especially true when the cores are running a heterogeneous mix of threads that have diverse requirements for SLLC resources, which becomes increasingly common with the widespread deployment of CMPs in complex applications, such as cloud computing [1]. ...
Conference Paper
Full-text available
Shared last-level caches (SLLCs) on chip-multiprocessors play an important role in bridging the performance gap between processing cores and main memory. Although there are already many proposals targeted at overcoming the weaknesses of the least-recently-used (LRU) replacement policy by optimizing either locality or utility for heterogeneous workloads, very few of them are suitable for practical SLLC designs due to their large overhead of log2(associativity) bits per cache line for re-reference interval prediction. The two recently proposed practical replacement policies, TA-DRRIP and SHiP, have significantly reduced the overhead by relying on just 2 bits per line for prediction, but they are oriented towards managing locality only, missing the opportunity provided by utility optimization. This paper is motivated by our two key experimental observations: (i) the not-recently-used (NRU) replacement policy that entails only one bit per line for prediction can satisfactorily approximate the LRU performance; (ii) since locality and utility optimization opportunities are concurrently present in heterogeneous workloads, the co-optimization of both would be indispensable to higher performance but is missing in existing practical SLLC schemes. Therefore, we propose a novel practical SLLC design, called COOP, which needs just one bit per line for re-reference interval prediction, and leverages lightweight per-core locality & utility monitors that profile sample SLLC sets to guide the co-optimization. COOP offers significant throughput improvement over LRU by 7.67% on a quad-core CMP with a 4MB SLLC for 200 random workloads, outperforming both of the recent practical replacement policies at the in-between cost of 17.74KB storage overhead (TA-DRRIP: 4.53% performance improvement with 16KB storage cost; SHiP: 6.00% performance improvement with 25.75KB storage overhead).
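The one-bit-per-line NRU baseline this abstract builds on can be sketched for a single cache set as below. This is textbook NRU behavior, not the COOP mechanism itself.

```python
class NRUSet:
    """One cache set under not-recently-used (NRU) replacement: one bit per line."""

    def __init__(self, ways):
        self.lines = [None] * ways   # cached tags
        self.nru = [1] * ways        # 1 = not recently used (evictable)

    def access(self, tag):
        """Return True on hit. On miss, evict the first line with its NRU bit set."""
        if tag in self.lines:
            self.nru[self.lines.index(tag)] = 0   # mark recently used
            return True
        if 1 not in self.nru:        # all lines recently used: reset all bits
            self.nru = [1] * len(self.nru)
        victim = self.nru.index(1)   # first evictable line
        self.lines[victim] = tag
        self.nru[victim] = 0
        return False

s = NRUSet(2)
print([s.access(t) for t in "ABAB"])  # → [False, False, True, True]
```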
... Power management techniques [4]-[6] control the average and/or peak power dissipation in datacenters in a distributed or centralized manner. VM management techniques [7]-[11] control the VM placement on physical servers as well as VM migration from one server to another. In this paper, we focus on SLA-based VM management to minimize the operational cost in a cloud computing system. ...
Article
Full-text available
Cloud computing systems (or hosting datacenters) have attracted a lot of attention in recent years. Utility computing, reliable data storage, and infrastructure-independent computing are example applications of such systems. Electrical energy cost of a cloud computing system is a strong function of the consolidation and migration techniques used to assign incoming clients to existing servers. Moreover, each client typically has a service level agreement (SLA), which specifies constraints on performance and/or quality of service that it receives from the system. These constraints result in a basic trade-off between the total energy cost and client satisfaction in the system. In this paper, a resource allocation problem is considered that aims to minimize the total energy cost of cloud computing system while meeting the specified client-level SLAs in a probabilistic sense. The cloud computing system pays penalty for the percentage of a client's requests that do not meet a specified upper bound on their service time. An efficient heuristic algorithm based on convex optimization and dynamic programming is presented to solve the aforesaid resource allocation problem. Simulation results demonstrate the effectiveness of the proposed algorithm compared to previous work.
... In this paper, we propose a new approach for managing virtualized data centers which considers multiple facets when placing VMs in data center nodes and maximizes the provider's profit. This approach extends our previous work [7,8], where we proposed a basic scheduling policy aware of virtualization and first introduced several facets to be considered synergistically to manage data centers. In our approach, the final profit for the provider is taken into account to make all placement decisions. ...
Article
Ever since virtualization was introduced in data centers, it has been opening new opportunities for resource management. Nowadays, it is not just used as a tool for consolidating underused nodes and saving power; it also allows new solutions to well-known challenges, such as heterogeneity management. Virtualization helps to encapsulate Web-based applications or HPC jobs in virtual machines (VMs) and see them as a single entity which can be managed in an easier and more efficient way. We propose a new scheduling policy that models and manages a virtualized data center. It focuses on the allocation of VMs in data center nodes according to multiple facets to optimize the provider's profit. In particular, it considers energy efficiency, virtualization overheads, and SLA violation penalties, and supports outsourcing to external providers. The proposed approach is compared to other common scheduling policies, demonstrating that a provider can improve its benefit by 30% and save power while handling other challenges, such as resource outsourcing, in a better and more intuitive way than typical approaches do.
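In the spirit of the multi-facet, profit-driven placement described in this abstract, a minimal scoring sketch could weigh revenue against power cost, SLA penalty risk, and outsourcing cost per candidate placement. All figures and names here are made-up parameters, not the paper's model.

```python
def placement_profit(revenue, node_power_cost, sla_penalty_risk,
                     outsourcing_cost=0.0):
    """Net profit estimate for one candidate VM placement (all in $/hour)."""
    return revenue - node_power_cost - sla_penalty_risk - outsourcing_cost

# Illustrative candidates for placing one VM
candidates = {
    "local-node-busy": placement_profit(1.00, 0.10, 0.30),  # high SLA risk
    "local-node-idle": placement_profit(1.00, 0.25, 0.02),  # wake-up power cost
    "outsourced":      placement_profit(1.00, 0.00, 0.05, outsourcing_cost=0.40),
}
best = max(candidates, key=candidates.get)
print(best, round(candidates[best], 2))  # → local-node-idle 0.73
```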
Chapter
This chapter introduces Elastic Management of Tasks in Virtualized Environments (EMOTIVE), which is the Barcelona Supercomputing Center (BSC)'s IaaS open-source solution for Cloud Computing. EMOTIVE provides users with elastic, fully customized virtual environments in which to execute their applications. Further, it simplifies the development of new middleware services for managing Cloud systems by supporting resource allocation and monitoring, data management, live migration, and checkpoints. These features, together with the ease of extending and configuring it, make EMOTIVE especially appropriate for supporting research on Cloud Computing scenarios. Offering functionality comparable to its commercial counterparts allows EMOTIVE to be used in production to set up small Cloud platforms.
Article
Cloud computing enables users to provision resources on demand and execute applications in a way that meets their requirements by choosing virtual resources that fit their application resource needs. Then, it becomes the task of cloud resource providers to accommodate these virtual resources onto physical resources. This problem is a fundamental challenge in cloud computing as resource providers need to map virtual resources onto physical resources in a way that takes into account the providers’ optimization objectives. This article surveys the relevant body of literature that deals with this mapping problem and how it can be addressed in different scenarios and through different objectives and optimization techniques. The evaluation aspects of different solutions are also considered. The article aims at both identifying and classifying research done in the area adopting a categorization that can enhance understanding of the problem.