10 May 2016
San Francesco - Via della Quarquonia 1 (Classroom 1 )
In this talk we discuss challenges in deploying ultra-scale applications on multi-clouds. Computation is currently taking shape as a distributed utility, whereby the cost of computation depends on temporal factors such as distributed power generation, microgrids, and deregulated electricity markets. The latter have led to demand for real-time electricity pricing, where prices change hourly or even every minute. Moreover, because cooling accounts for 15% to 45% of a data center's power consumption, new cooling solutions based on outside-air economizers make cooling efficiency dependent on local weather conditions. In the first part of this talk we discuss the techniques necessary to distribute computation on demand across virtualized, geo-distributed data centers, considering geo-temporal inputs such as time series of electricity prices and outside temperatures.
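To illustrate the kind of geo-temporal decision involved, here is a minimal sketch that picks the data center with the lowest temperature-adjusted electricity cost. The overhead model, site names, and all numbers are hypothetical assumptions for illustration, not the techniques presented in the talk.

```python
def cooling_overhead(temp_c):
    """Assumed outside-air economizer model: cooling overhead grows with
    outside temperature, clamped to the 15%-45% range cited for data
    centers. Purely illustrative."""
    return min(0.45, max(0.15, 0.15 + 0.01 * max(0.0, temp_c - 10.0)))

def effective_cost(price_kwh, temp_c):
    """Electricity price inflated by the temperature-dependent cooling
    overhead at that site and hour."""
    return price_kwh * (1.0 + cooling_overhead(temp_c))

def pick_datacenter(sites):
    """sites: dict mapping site name -> (price per kWh, outside temp in C).
    Returns the name of the cheapest site for this time step."""
    return min(sites, key=lambda s: effective_cost(*sites[s]))

# Hypothetical snapshot for one hour: cheap but hot vs. cool but pricier.
sites = {
    "dublin": (0.12, 12.0),
    "madrid": (0.10, 30.0),
}
best = pick_datacenter(sites)  # cheap power outweighs the cooling penalty here
```

In a full scheduler this comparison would run over whole time series of prices and temperatures, trading off migration costs as well; the point here is only that placement decisions change with geo-temporal inputs.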
The use of virtualization enables on-demand resource provisioning, including CPU cores, memory, storage, and network bandwidth; resources are thus served to customers under a pay-per-use policy. Usage policies are defined through Service Level Agreements, contracts between providers and consumers that specify the type and quantity of resources. While resource quantity is well defined (e.g., through VM flavours), QoS guarantees are usually limited to VM availability. However, VM availability says nothing about the availability of underlying resources such as CPU, nor about the impact on the performance of customers' applications. Thus, in the second part of the talk we discuss a metric that isolates the impact of the resources provisioned to cloud users, allowing providers to measure the quality of the provided resources and manage them accordingly.
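A toy version of such a metric can be sketched as the fraction of the provisioned capacity a VM actually received over a measurement window. The function name, units, and aggregation below are illustrative assumptions, not the metric defined in the talk.

```python
def resource_quality(provisioned, delivered_samples):
    """provisioned: capacity promised in the SLA (e.g. CPU share in MHz).
    delivered_samples: capacity actually measured in each interval.
    Returns the mean delivered/provisioned ratio, capped at 1.0, so that
    1.0 means the customer always got what was provisioned."""
    ratios = [min(d / provisioned, 1.0) for d in delivered_samples]
    return sum(ratios) / len(ratios)

# Hypothetical VM promised 2000 MHz but throttled (e.g. by co-located
# tenants) in two of four intervals:
q = resource_quality(2000.0, [2000.0, 1500.0, 1000.0, 2000.0])  # 0.8125
```

A score below 1.0 flags degraded underlying resources even while the VM itself remains "available", which is precisely the gap that availability-only SLAs leave open.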
Speaker:
Brandic, Ivona
Units:
SysMA