World Cup 2014: did you record a drop in network energy use?

Large events can have a serious impact on IT infrastructure. A great example of this is the current football World Cup 2014.

RIPE NCC, one of the five Regional Internet Registries (RIRs), does a great job of collecting network traffic data during the event. In collaboration with the European Internet Exchange Association (Euro-IX), it follows the Internet Exchange Point (IXP) traffic during the championship.

Here are two examples from their research. The following graphs show the traffic on the match day in red, with the same day of the week in the preceding weeks as grey lines. The game times are indicated by grey rectangles in the background.

 The Netherlands – Spain

The traffic volume during the Netherlands–Spain game differs by 256 terabytes from the week before!


Traffic statistics at the Amsterdam Internet Exchange (AMS-IX).


Cameroon – Brazil


Traffic statistics at the PTT Metro IXP in Sao Paulo, Brazil.

These traffic drops make you wonder how much energy was saved during these games.

More information about the network traffic can be found at RIPE NCC, and more will follow during the World Cup.

Data Center 2.0 – The Sustainable Data Center, Now Available!

Data Center 2.0 – The Sustainable Data Center is now available.

The book is showing up on Amazon and will soon start to pop up on the websites of other e-tailers.

Data Center 2.0 – The Sustainable Data Center is an in-depth look into the steps needed to transform modern-day data centers into sustainable entities.

See the press release:

Some nice endorsements were received:

“Data Center 2.0, is not so much about technology but about people, society and economic development. By helping readers understand that even if Data Centers, enabling the Digital economy, are contributing a lot to energy saving, they need to be sustainable themselves; Rien Dijkstra is on the right track. When explaining how to build sustainable Data Centers, through multi disciplinary approach, breaking the usual silos of the different expertise, Rien Dijkstra is proposing the change of behavior needed to build sustainable Data Centers. Definitely it is about people, not technology.” 

Paul-Francois Cattier, Global Senior Vice-President Data Center – Schneider Electric

“In Data Center 2.0 The Sustainable Data Center author Rien Dijkstra has gone several steps further in viewing the data center from the perspective of long term ownership and efficiency in combination with treating it as a system. It’s an excellent read with many sections that could be extracted and utilized in their own right. I highly recommend this read for IT leaders who are struggling with the questions of whether to add capacity (co-locate, buy, build, or lease) or how to create a stronger organizational ownership model for existing data center capacity. The questions get more complex every year and the risks more serious for the business. The fact that you’re making a business critical decision that must stand the test of technology and business change over 15 years is something you shouldn’t take lightly.” 

Mark Thiele, President and Founder Data Center Pulse

“Data centers used to be buildings to house computer servers along with network and storage systems, a physical manifestation of the Digital Economy. Internet of Things, the digitization of about everything in and around us, brings many profound changes. A data center is the place where it all comes together. Physical and digital life, fueled by energy and IT, economical and social demands and needs and not to forget sustainability considerations. Sustainable data centers have a great potential to help society to optimize the use of resources and to eliminate or reduce wastes of capital, human labor and energy. A data center in that sense is much more than just a building for servers. It has become a new business model. Data center 2.0 is a remarkable book that describes the steps and phases to facilitate and achieve this paradigm.” 

John Post, Managing Director – Foundation Green IT Amsterdam region

Data Center 2.0 – The Sustainable Data Center

I am currently busy with the final steps to get the forthcoming book ‘Data Center 2.0 – The Sustainable Data Center’ (ISBN 978-1499224689) published at the beginning of the summer.

Some quotes from the book:

“A data center is a very peculiar and special place. It is the place where different worlds meet each other. A place where organizational (and individual) information needs and demands are translated in bits and bytes that are subsequently translated in electrons that are moved around the world. It is the place where the business, IT and energy worlds come together. Jointly they form a jigsaw puzzle of stakeholders with different and sometimes conflicting interests and objectives that are hard to manage and to control.

Given the great potential of Information Technology to transform today’s society into one characterised by sustainability, what is the position of data centers?


The data center is the place where it all comes together: energy, IT and societal demands and needs.


A sustainable data center should be environmentally viable, economically equitable, and socially bearable. To become sustainable, the data center industry must free itself from the shackles of 19th-century ideas and concepts of production. They are too simple for our 21st-century world.

The combination of service-dominant logic and cradle-to-cradle makes it possible to create a sustainable data center industry.

Creating sustainable data centers is not a technical problem but an economic problem to be solved.”

The book takes a conceptual approach to the subject of data centers and sustainability. It offers multiple views and aspects of sustainable data centers, to allow readers to gain a better understanding and to provoke thought on how to create sustainable data centers.

The book has already received endorsements from Paul-Francois Cattier, Global Senior Vice President Data Center at Schneider Electric, and John Post, Managing Director of Foundation Green IT Amsterdam region.

Table of contents

1 Prologue
2 Signs Of The Time
3 Data Centers, 21st Century Factories
4 Data Centers A Critical Infrastructure
5 Data Centers And The IT Supply Chain
6 The Core Processes Of A Data Center
7 Externalities
8 A Look At Data Center Management
9 Data Center Analysis
10 Data Center Monitoring and Control
11 The Willingness To Change
12 On The Move: Data Center 1.5
13 IT Is Transforming Now!
14 Dominant Logic Under Pressure
15 Away From The Dominant Logic
16 A New Industrial Model
17 Data Center 2.0

Datacenters: The Need For A Monitoring Framework

For proper usage of, and collaboration between, BMS, DCIM, CMDB and similar systems, the use of an architectural framework is recommended.


A datacenter is basically a value stack: a supply chain of stack elements where each element is a service component (people, process and technology that add up to a service). For each element in the stack, the IT organization has to assure the quality as agreed upon. In essence these quality attributes are performance/capacity, availability/continuity, confidentiality/integrity, and compliance, and nowadays also sustainability. One of the greatest challenges for the IT organization was and is to coherently manage these quality attributes for the complete service stack or supply chain.

Currently a mixture of management systems is used to manage the datacenter service stack: BMS, DCIM, CMDB, and System & Network Management Systems.


As explained in “Datacenters: blending BIM, DCIM, CMDB, etc.”, we are still talking about working in silos, where each participant involved in the life cycle of the datacenter uses its own information sets and systems. To achieve real overall improvements (instead of local optimizations), better collaboration and information exchange between the different participants is needed.


To steer and control datacenter usage successfully, a monitoring system should be in place. Accepting the fact that the participants use different systems, we have to find a way to improve the collaboration and information exchange between those systems. Therefore we need some kind of reference: an architectural framework.

For designing an efficient monitoring framework, it is important to assemble a coherent system of functional building blocks or service components. Loose coupling and strong cohesion, encapsulation, and the use of the Facade and Model–View–Controller (MVC) patterns are strongly recommended because of the many proprietary solutions that are involved.
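As an illustration of the Facade approach mentioned above, here is a minimal sketch that hides two hypothetical proprietary metering APIs behind one standard ‘Facility usage service’ interface. All class and method names are assumptions for illustration, not actual vendor APIs.

```python
from abc import ABC, abstractmethod

# Hypothetical proprietary vendor APIs (illustrative assumptions only).
class VendorAMeter:
    def read_kw(self) -> float:     # vendor A reports kilowatts
        return 120.0

class VendorBMeter:
    def sample(self) -> dict:       # vendor B reports watts in a dict
        return {"power_w": 95000.0}

# The standard 'Facility usage service' interface (the Facade).
class FacilityUsageService(ABC):
    @abstractmethod
    def power_kw(self) -> float:
        """Return the current power draw in kilowatts."""

class VendorAFacade(FacilityUsageService):
    def __init__(self, meter: VendorAMeter):
        self._meter = meter
    def power_kw(self) -> float:
        return self._meter.read_kw()

class VendorBFacade(FacilityUsageService):
    def __init__(self, meter: VendorBMeter):
        self._meter = meter
    def power_kw(self) -> float:
        # Unit conversion is hidden inside the facade.
        return self._meter.sample()["power_w"] / 1000.0

# Reporting code depends only on the standard interface.
services = [VendorAFacade(VendorAMeter()), VendorBFacade(VendorBMeter())]
total_kw = sum(s.power_kw() for s in services)
```

Any new metering vendor can then be added by writing one more facade, without touching the reporting code.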


Based on an earlier blog about energy monitoring, a short description of the most common building blocks follows:

  • Most vendors have their own proprietary APIs to interface with the metering devices. Because metering differs within and between data centers, these differences should be encapsulated in standard ‘Facility usage services‘: services for the primary, secondary and tertiary power supply and usage, the cooling, and the air handling.
  • For the usage of the IT infrastructure (servers, storage and network components) we have the same kind of issues, so the same recipe, encapsulation of proprietary APIs in standard ‘IT usage services‘, must be used.
  • Environmental conditions outside the data center (the weather) have their influence on the data center, so proper information about them must be made available by a dedicated Outdoor service component.
  • For a specific data center a DC Usage Service Bus must be available to have a common interface for exchanging usage information with reporting systems.
  • The DC Data Store is a repository (Operational Data Store or Data Warehouse) for datacenter usage data across data centers.
  • The Configuration management database(s) (CMDB) is a repository with the system configuration information of the Facility Infrastructure and the IT infrastructure of the data centers.
  • The Manufacturers’ specification databases store specifications/claims of components as provided by the manufacturers.
  • The IT capacity database stores the available capacity (processing power and storage) size that is available for a certain time frame.
  • The IT workload database stores the workload (processing power and storage) size that must be processed in a certain time frame.
  • The DC Policy Base is a repository with all the policies, rules, targets and thresholds about the datacenter usage.
  • The Enterprise DC Usage Service Bus must be available to provide a common interface for exchanging policies, workload capacity, CMDB, manufacturers’ and usage information of the involved data centers with reporting systems.
  • The Composite services deliver different views and reports of the energy usage by assembling information from the different basic services by means of the Enterprise Bus.
  • The DC Usage Portal is the presentation layer for the different stakeholders that want to know something about the usage of the Datacenter.

 DC Monitoring Framework


The use of an architectural framework (reference architecture) is a must to get a monitoring environment working. The modular approach, focused on standard interfaces, gives the opportunity to “rip and replace” components. It also makes it possible to extend the framework with other service components. The service bus provides a standard, message-based exchange of data between the applications and prevents the creation of dedicated, proprietary point-to-point communication channels. A standard data model is also mandatory to get this framework working.
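To make the standard data model idea concrete, a minimal sketch of a common usage message that every service component could publish on the bus might look like this; the field names are illustrative assumptions, not a defined standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class UsageMessage:
    # Minimal common data model for usage readings exchanged on the bus.
    source: str      # producing service component, e.g. "cooling"
    datacenter: str  # datacenter identifier
    metric: str      # e.g. "power_kw", "temperature_c"
    value: float
    timestamp: str   # ISO 8601 timestamp

def to_bus_payload(msg: UsageMessage) -> str:
    """Serialize a usage message to the JSON payload placed on the bus."""
    return json.dumps(asdict(msg))

msg = UsageMessage("cooling", "ams-dc1", "power_kw", 310.5, "2014-06-14T20:00:00Z")
payload = to_bus_payload(msg)
roundtrip = json.loads(payload)  # any consumer can decode it the same way
```

Because every producer and consumer agrees on this one schema, components can be swapped without rewriting the communication channels between them.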

Datacenters: blending BIM, DCIM, CMDB, etc.

How do you manage the life cycle of a datacenter in a rapidly changing environment where so many stakeholders are involved?


A datacenter is a very special place where three different worlds and groups of people meet: the facility group, whose focus is on the building; the IT infrastructure group, focused on the IT equipment housed within it; and the IT applications group, focused on the applications that run on the IT equipment. All have different objectives and incentives.

This worked fine when changes were highly predictable and came relatively slowly. But times have changed. Business demands drive the usage of datacenters, and these demands have changed; large dynamic data volumes, stringent service-level demands, ever-higher application availability requirements and changing environmental requirements must be accommodated more swiftly than ever.

Business demands and rapidly advancing information technology have led to constant replacement of IT infrastructure. This pace of replacement is not in sync with the changes of the site infrastructure. The components for power, cooling and air handling last longer (about 10 years) than IT infrastructure (two to five years). The site infrastructure often ends up mismatched with the facility demands of the IT infrastructure. While technically feasible, changing the site infrastructure of operational data centers may not always make sense. For some data centers, the cost savings do not justify the cost of renewing the site infrastructure. For others, the criticality of their function to the business prohibits downtime and inhibits facility managers from making major overhauls to realise improvements. This makes it difficult to continually optimise data centers in such a rapidly changing environment.

IT Management

One of the most significant challenges for the IT organisation was and is to coherently manage the quality attributes for the complete IT service stack or IT supply chain (including the facility / site infrastructure).

The IT department already tried to manage the IT environment with System & Network Management Systems and Configuration Management Databases (CMDBs), while the Facility department uses Building Management Systems (BMS) to monitor and control the equipment in an entire building. Until recently there was a disconnect between the facility and IT infrastructure. To get rid of the limited visibility and control of the physical layer of the data center, we see the rise of a new kind of system: the Data Center Infrastructure Management (DCIM) system.

But there is still another gap to be bridged. The power and cooling capacity and resources of a data center are largely set by the original MEP (Mechanical, Electrical, Plumbing) design and the data center location choice. The Facility/MEP design sets an ‘invisible’ boundary for the IT infrastructure. And just as in the IT world, in the Facility world there is knowledge and information loss between the design, build and production/operation phases.

Knowledge Gaps


To solve this issue, the Facility world increasingly uses Building Information Model (BIM) systems. BIM is a model-centric repository that supports the business process of planning, designing, building and maintaining a building. In other words, it is a system that facilitates coordination, communication, analysis and simulation, project management and collaboration, asset management, and maintenance and operations throughout the building life cycle.

The transition to a BIM-centric design approach fundamentally changes the Architecture, Engineering, Construction (AEC) process and workflow in the way project information is shared, coordinated, and reviewed. But it also extends the workflow by integrating one of the most important players in the AEC workflow: the operators.

Dynamic information about the building, such as sensor measurements and control signals from the building systems, can be incorporated within BIM to support analysis of building operation and maintenance.

Working in Silos

Although some local improvements in sharing information can be and are being made with BIM, DCIM, CMDB and System & Network Management Systems, we are still talking about working in silos. The different participants involved in the life cycle of the datacenter use their own information sets and systems. This is a repeating process: from the owner to the architect, to the design team, to the construction manager, the contractor and the subcontractors, to the different operators and, ultimately, back to the owner.

Integrated processes and life cycle management

If we want to achieve overall improvements during the complete life cycle of the data center, based on key performance indicators (KPIs) such as cost, quality, on-time delivery, productivity, availability, and energy efficiency, better collaboration and information exchange between the different participants is needed.

BIM, BMS, DCIM, CMDB and System & Network Management Systems have an overlap in scope, but each also has its own focus: life cycle, and static and dynamic status information of facility, IT infrastructure and software components.

Silo Buster

We all know that one size fits all doesn’t work or is not flexible enough. So what is needed is collaboration and interoperability: getting rid of the silo approach by focusing on the exchange of information between these different systems. There is a need for modularly designed management systems with open APIs, so that customers/users can make their own choice of which job is done by which system and still have the opportunity for an easy exchange of information (retrieval or feed).

This will revolutionize the way data center information is shared, coordinated, and reviewed, and will affect workflows, delivery methods, and deliverables in a positive way.

Energy efficient Data Center ≠ Green Data Center

“Many IT companies are simply choosing to attach their modern information factories to some of the dirtiest sources of electricity, supplied by some of the dirtiest utilities on the planet,” says Greenpeace.

In their latest report, ‘How green is your cloud?’, Greenpeace has criticized the cloud computing industry, saying that cloud providers “are all rapidly expanding without adequate regard to source of electricity, and rely heavily on dirty energy to power their clouds.”

In response to the report, Urs Hoelzle, Google’s Senior Vice President for Technical Infrastructure, said in a statement published in the New York Times that the company welcomed the Greenpeace report and believed it would intensify the industry’s focus on renewable energy. Apple and Amazon, on the other hand, raised questions about the credibility of the estimates in the Greenpeace report, which illustrates the difficulty of estimating data center power usage.

The discussion on how accurate and valid the estimates in the report are is indeed important, but the real problem being addressed shouldn’t be missed: does an energy-efficient data center equal a green data center?

In the data center world, much attention has been given over the last two years to the use of PUE as a KPI for energy efficiency. PUE has served its goal of giving a rough indication of the energy efficiency of a data center, but it has its limitations. For example, shutting down servers that are not being used can lead to a higher PUE. But the biggest flaw, from a green IT perspective, in using PUE is that it has no relation to the carbon emission of the electricity being used.

This CO2 emission flaw was already addressed in 2010 by Eirikur Hrafnsson, CEO of Greenqloud (see the blog ‘Greenqloud propose green PUE’), and later by The Green Grid with a white paper on the Carbon Usage Effectiveness (CUE) metric.

Energy efficiency and carbon emission are often mixed up. You can be very energy-inefficient and still have zero carbon emission, and vice versa: you can be very energy-efficient and still have a large carbon emission. Therefore cloud providers and data center operators must look not only at how efficiently they are using electricity, but also at the sources of electricity that they are choosing, a detail that many companies are unwilling to disclose on their own. The energy sources of the power grid aren’t the only energy issue. A lot of data centers use diesel generators as backup, or use generators because of a sub-standard or non-existent grid connection. The claim of green cloud computing services and green data centers can only be proven if the providers are more transparent about their energy sources.
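The decoupling of efficiency and carbon can be made concrete with the standard definitions: PUE is total facility energy divided by IT equipment energy, and CUE is the CO2 emitted for the facility's energy divided by IT equipment energy. A small sketch, with illustrative (assumed) grid emission factors:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    # Power Usage Effectiveness: total facility energy / IT equipment energy.
    return total_facility_kwh / it_kwh

def cue(total_facility_kwh: float, it_kwh: float, kg_co2_per_kwh: float) -> float:
    # Carbon Usage Effectiveness: CO2 emitted for the facility's energy,
    # per kWh of IT equipment energy.
    return total_facility_kwh * kg_co2_per_kwh / it_kwh

# Efficient facility on a coal-heavy grid (assumed 0.9 kg CO2/kWh):
#   PUE = 1.2 but CUE is roughly 1.08 kg CO2 per IT kWh.
print(pue(1200, 1000), cue(1200, 1000, 0.9))
# Inefficient facility on hydro power (assumed ~0 kg CO2/kWh):
#   PUE = 2.0 but CUE = 0.0.
print(pue(2000, 1000), cue(2000, 1000, 0.0))
```

The second facility wastes twice as much energy per unit of IT work, yet emits no carbon: exactly the point that an energy-efficient data center is not automatically a green one.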

If operators neglect carbon emission, don’t care because these are external costs, consider it “out of scope” for their company, or feel they are not directly responsible, it will come back to them like a boomerang. Resilience to a changing climate demands decarbonization of the energy sources we use to ensure sustainability. If carbon emissions aren’t reduced, governments will use rigorous policy instruments to charge for these external costs.

As rightly stated in the Greenpeace report, there are several steps that can be taken to create a more sustainable data center:

  • Power purchase agreements for renewable energy: Many operators are recognizing that their influence and market power give them the opportunity and responsibility to demand clean energy investments. Operators can take charge of their electricity supply chain by signing long-term contracts to buy renewable electricity from a specific source through a utility or renewable energy developer via a power purchase agreement (PPA), to help drive renewable electricity onto the grid.
  • Onsite renewable energy;  Operators can install renewable energy on site to generate power for their own operations. For many facilities however, it may be difficult technically or economically to power a significant portion of a large data center with on-site renewable energy. This of course depends on the scale of the facility and the available renewable resources. However, companies are increasingly exploring onsite investments that can help provide better protection against electricity price volatility and, with onsite solar, can help shave electricity demand at the most expensive time of the day. In one of his latest blogs Christian Belady, general manager Data Center Services Microsoft, goes one step further. He raised the question  “Why do data centers need to be connected to a dirty, expensive, unreliable electrical grid?” and gave the answer: “They don’t and they don’t want to be either. Integrating a data center directly into the power plant — what we are calling our Data Plant program — will allow a data center to pick its sustainable fuel source and shield itself from grid volatility.”
  • Location strategy for renewable energy; The current and projected supply of clean electricity varies significantly between nations and regions, and renewable energy growth is largely determined by the energy and investment policies in those places and how the utilities and electricity markets are structured. Given the scale and long-lived nature of data centers, in order to ensure that the supply of clean energy can keep pace with IT’s rapidly growing demand, companies need to make a corporate commitment to engage in energy policy decisions in regions where they establish operations.

Knowing by measuring, managing by knowing

There has been a notable absence of CUE reporting among companies. An important issue to be solved is that a lot of companies don’t know their carbon emission because they don’t measure it. And if you cannot measure it, you cannot improve it. Proper measures can only be taken if there is a clear understanding of the problem. Therefore operators must begin monitoring and reporting the carbon intensity of their data centers under the ‘new’ Carbon Usage Effectiveness (CUE) standard.

UPDATE  May 5th

The Green Grid made an official response to the Greenpeace report. In short, The Green Grid states that:

“Any study or initiative that raises awareness around the important issues of reducing emissions and increasing energy efficiency and sustainability in the data center and cloud computing sectors is something that The Green Grid supports” and “We welcome the public interest Greenpeace has generated around this report and also encourage the IT industry to think about the complex idea of a ‘green data center’ in a holistic manner. By properly leveraging metrics like PUE, CUE and WUE alongside other models like The Green Grid’s Data Center Maturity Model, organizations can better understand the broader picture of their energy ecosystems and take steps to become more efficient and sustainable both inside and outside of the data center. Similarly, we encourage organizations of all sizes to actively participate in this important conversation by becoming members of The Green Grid.”

Data centers are expected to consume 19% more energy

The world’s data centers are expected to consume 19% more energy in the next 12 months than they did in the past year, according to the results of a global industry census conducted by DatacenterDynamics (DCD). This is an interesting conclusion in the light of the report Jonathan Koomey released (new study) on data center electricity use in 2010, which was a follow-up to the 2008 article “Worldwide electricity used in data centers”.

The 2007 EPA report to Congress on data centers (US EPA 2007) predicted a little less than a doubling of total data center electricity use from 2005 to 2010 if historical trends continued. Instead, in the U.S. the electricity used by data centers increased about 36 percent from 2005 to 2010, and worldwide electricity consumption by data centers increased about 56 percent.

With the DCD forecast of 19% energy growth in the next 12 months, it looks like we are back on track again.

Data centers currently consume about 31GW, the census concludes. The average total power per rack is about 4.05kW, with 58% of racks consuming up to 5kW per rack, 28% consuming from 5kW to 10kW per rack, and the rest consuming more than 10kW per rack.
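As a back-of-the-envelope sketch based on the census figures above (note that dividing total facility power by average rack power conflates facility overhead with rack power, so the rack count is only an order-of-magnitude figure):

```python
# Back-of-the-envelope check on the DatacenterDynamics census figures.
total_power_gw = 31.0   # reported worldwide data center power draw
avg_rack_kw = 4.05      # reported average total power per rack
growth = 1.19           # forecast 19% growth over the next 12 months

implied_racks = total_power_gw * 1e6 / avg_rack_kw  # GW -> kW, then per rack
next_year_gw = total_power_gw * growth

print(f"Implied racks worldwide: {implied_racks:,.0f}")      # roughly 7.65 million
print(f"Expected draw in 12 months: {next_year_gw:.1f} GW")  # roughly 36.9 GW
```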

Because energy demand is expected to rise so much, data center owners and operators are concerned about energy cost and availability. Analysis of the census data concluded that energy cost and availability are their number-one concern.

  • 44% believe that increased energy costs will have a significant impact on their data center operations in the next 12 months – this is the highest-ranked issue
  • 29% are concerned about the significant impact of energy availability (or the lack of it).

Energy concerns (c) DatacenterDynamics Global Industry Census 2011

Data center monitoring is driven primarily by the priority of maintaining availability (56%), followed by reducing costs (31%); reducing environmental impact scored 13%. According to DCD, monitoring of energy efficiency is conducted continuously by only a minority of 42%, although an equivalent proportion monitors it less regularly. This pattern is repeated for carbon emissions and is consistent with the lower priority given to the environmental impact of the data center.

Energy monitoring (c) DatacenterDynamics Global Industry Census 2011

Despite these concerns, big data centers are still being built in areas (for example London and Amsterdam) where lack of power supply has been touted as a supply-constraining issue for years.

For example in the London arena:

  • Telehouse West, opened last March, 7.5MW of new capacity.
  • Telecity Harbour Exchange, 6MW opening in 2 phases.

And in the Amsterdam arena:

  • Switch 8,320 m2
  • Equinix AM3 (in two phases, 6,400 m2)
  • Terremark 2,800 m2 first phase (10,000 m2 additional)

How can we explain these activities if power is in such tight supply?

Greener IT Can Form a Solid Base For a Low-Carbon Society

Precisely a year ago we launched the book Greening IT in print and online (free to download). And if I may say so, the book is still worth reading.

The book aims at promoting awareness of the potential of Greening IT, such as Smart Grid, Cloud Computing, Thin Clients and Greening Supply Chains. The chapter “Why Green IT is Hard – An Economic Perspective” is my contribution to this book. See Greening IT and read the following press release.

Press release Greening IT

Information Technology holds a great potential in making society greener. Information Technology will, if we use it wisely, lead the way to resource efficiency, energy savings and greenhouse gas emission reductions – taking us to the Low-Carbon Society.

The IT sector itself, responsible for 2% of global greenhouse gas emissions, can get greener by focusing on energy efficiency and better technologies – we call this Green IT. Yet, IT also has the potential to reduce the remaining 98% of emissions from other sectors of the economy – by optimising resource use and saving energy etc. We call this the process of Greening IT. IT can provide the technological fixes we need to reduce a large amount of greenhouse gas emissions from other sectors of society and obtain a rapid stabilisation of global emissions. There is no other sector where the opportunities for greenhouse gas emission reductions, through the services provided, holds such a potential as the IT industry”, says Adrian Sobotta, president of the Greening IT Initiative,   Founding Editor and author of the book.

In her foreword to the book, European Commissioner for Climate Action, Connie Hedegaard writes: “All sectors of the economy will need to contribute…, and it is clear that information and communication technologies (ICTs) have a key role to play. ICTs are increasingly recognised as important enablers of the low-carbon transition. They offer significant potential – much of it presently untapped – to mitigate our emissions. This book focuses on this fundamental role which ICTs play in the transition to a low-carbon society.”

The book aims at promoting awareness of the potential of Greening IT, such as Smart Grid, Cloud Computing and thin clients. It is the result of an internationally collaborative, non-profit making, Creative Commons-licensed effort – to promote greening IT.

There is no single perfect solution; Green IT is not a silver bullet. But already today, we have a number of solutions ready to do their part of the work in greening society. And enough proven solutions and implementations for us to argue not only that IT has gone green, but also that IT is a potent enabler of greenhouse gas emission reductions”, says Adrian Sobotta.

It is clear that the messages in the book put a lot of faith into technologies. Yet, technologies will not stand alone in this immense task that lies before us. “Technology will take us only so far. Changing human behaviour and consumption patterns is the only real solution in the longer-term perspective”, continues Adrian Sobotta. IT may support this task, by confronting us with our real-time consumption – for instance through Smart Grid and Smart Meters – thereby forcing some of us to realise our impact.

But technologies, such as Green Information Technologies, are not going to disperse themselves. Before betting on new technologies, we need to establish long-term security of investments. And the only way to do this is to have an agreed long-term set of policy decisions that create the right incentives to promote the development we need.

Asian Tigers still have something to learn about Green IT

In Asia, the larger data centres tend to be based in the most expensive cities, such as Tokyo, Hong Kong, Singapore or Shanghai. For almost ten years there has been impressive and continuous growth in data centres in the Asia-Pacific market. According to Chengyu Wu from Frost & Sullivan, this growth in Asia-Pac will continue at a CAGR (compound annual growth rate) of 14.6 percent (2009-2011). Demand for data centre hosting, Wu adds, currently exceeds supply. “In fact, over 80 percent of the major data centres in Asia-Pacific are running at close to 90 percent capacity and space is at a premium.”

While the outlook appears highly promising, data centre operators struggle with the high cost of operations, which has increased exponentially in recent times. According to Frost & Sullivan director Jayesh Easwaramony, “Power costs can often account for more than 50 percent of the overall operational expenditure (OPEX) of a data centre, while real estate pricing could also seriously inflate costs.”

Earlier this year, the ZDNet Asia IT Priorities 2010 survey showed that green initiatives scored lowest as an IT priority among Asian businesses. In a recent interview with ZDNet Asia, Chris McPherson of Raritan stated that Asian companies do not yet see the full importance of implementing green technologies.

This is strange, given the incentives of expensive data centre locations and enormous power costs, and not forgetting that power, cooling and floor space together form a data centre threshold that can prohibit growth. Demand for IT capacity cannot grow beyond this threshold because of power shortages, overheating and/or lack of floor space. This creates the risk that demand for IT capacity suddenly cannot be fulfilled and growth comes to a grinding halt.

One way to “push” for green uptake, McPherson said, is to have governments either reduce subsidized power bills or increase subsidies for green energy. However, these incentives are currently slow and minimal. Analyst Chengyu Wu pointed out that discussions have centered mainly on concepts such as virtualization and utility computing emerging in the data centre segment. McPherson agreed that virtualization is one way to help companies manage green costs, since fewer servers need to be deployed, which in turn brings savings in real estate expenditure. He also emphasized employing information and management tools that help companies find out what is really happening at the device level and measure individual energy consumption, in order to make better decisions on reducing spending and maximizing savings. “The electricity bill is for the total cost of the data center, but to break that down as to what each component is costing you, it is only recently that tools are available to do so,” he said.

To use these tools, a paradox must first be solved: if you don’t see the issue, why spend money on tools? So who starts using them? Who is aware of the issue, who is responsible, who feels the pain and takes action? We need publications of real cases of reducing energy consumption and energy costs in the data centre environment to get things started.

P.S. Are the Asian Tigers the only ones that still have something to learn about Green IT? I don’t think so …

Energy Elasticity: The Adaptive Data Center

Data centers are significant consumers of energy, and it is increasingly clear that there is much room for improved energy usage. However, there seems to be a preference for focusing on energy efficiency rather than energy elasticity. Energy elasticity is the degree to which energy consumption changes when the workload being processed changes. For example, an IT infrastructure with a high degree of energy elasticity consumes significantly less power when idle than when running at its maximum processing potential. Conversely, an IT infrastructure with a low degree of energy elasticity consumes almost the same amount of electricity whether it is in use or idle. We can use this simple equation:

Elasticity = (% change in energy usage / % change in workload)

If elasticity is greater than or equal to one, the curve is considered to be elastic. If it is less than one, the curve is said to be inelastic.
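The definition above can be sketched in a few lines of code. This is an illustrative helper, not from the original article; the function name and the wattage figures are made up for the example:

```python
def energy_elasticity(workload_before, workload_after,
                      energy_before, energy_after):
    """Ratio of the relative change in energy to the relative change in workload.

    A value near 1.0 means energy tracks workload (energy-proportional);
    a value near 0 means power draw is almost flat regardless of load.
    """
    pct_workload = (workload_after - workload_before) / workload_before
    pct_energy = (energy_after - energy_before) / energy_before
    return pct_energy / pct_workload

# A server whose utilization rises from 10% to 75% while its power draw
# stays at 300 W is perfectly inelastic:
print(energy_elasticity(10, 75, 300, 300))  # -> 0.0

# An energy-proportional server would scale power with load:
print(energy_elasticity(10, 75, 40, 300))   # -> 1.0
```

Real servers fall somewhere between these two extremes, which is exactly the gap the rest of this article is about.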

Given that it isn’t unusual for servers to operate under ten percent average utilization, and that most servers don’t have high energy elasticity (according to IDC, a server operating at 10% utilization still consumes the same power and cooling as a server operating at 75% utilization), it is worthwhile to focus more on energy elasticity. A picture says more than words, so this energy elasticity issue is very well visualized in a presentation by Clemens Pfeiffer, CTO of Power Assure, at the NASA IT Summit 2010. As you can see, without optimization power consumption is indifferent to changes in application load.

Load Optimization (c)Power Assure


Barroso and Hölzle of Google have made the case for energy-proportional (energy-elastic) computing, based on the observation that servers in today’s data centers operate, on average, well below peak load levels. According to them, energy-efficiency characteristics are primarily the responsibility of component and system designers: “They should aim to develop machines that consume energy in proportion to the amount of work performed”. A popular technique for approximating energy-proportional behavior in servers right now is consolidation using virtualization. By abstracting your application from the hardware, you can shift workloads across a data center dynamically. These techniques:

  • utilize heterogeneity to select the most power-efficient servers at any given time
  • utilize live Virtual Machine (VM) migration to vary the number of active servers in response to workload variation
  • provide control over power consumption by allowing the number of active servers to be increased or decreased one at a time.
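The consolidation policy in the bullets above can be sketched roughly as follows. All names, capacities and wattages here are hypothetical, and a real controller would of course also handle VM migration costs and fault tolerance:

```python
def plan_active_servers(servers, demand):
    """Pick which servers to keep powered on for a given workload demand.

    servers: list of (name, capacity, watts) tuples
    demand:  required aggregate capacity, in the same units as capacity

    Exploits heterogeneity by filling the most power-efficient machines
    (lowest watts per unit of capacity) first; the rest can be suspended.
    """
    ranked = sorted(servers, key=lambda s: s[2] / s[1])  # watts per unit
    active, covered = [], 0
    for name, capacity, _watts in ranked:
        if covered >= demand:
            break  # remaining servers can be powered down
        active.append(name)
        covered += capacity
    return active

# An illustrative fleet: two old inefficient boxes and one newer one.
fleet = [("old-1", 100, 400), ("old-2", 100, 400), ("new-1", 200, 350)]
print(plan_active_servers(fleet, 250))  # -> ['new-1', 'old-1']
```

As load drops overnight, the same function returns fewer servers, which is the "vary the number of active servers in response to workload variation" step from the list above.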

Although servers are the biggest consumers of energy, storage and network devices are consumers as well. The EPA Report to Congress on Server and Data Center Energy Efficiency suggests that servers on average account for about 75 percent of total IT equipment energy use, storage devices for around 15 percent, and network equipment for around 10 percent. Energy elasticity is therefore also a relevant issue for storage and network devices.


Organizations face increased demand for storing digital data, both in amount and duration, due to new and existing applications and to regulations. As stated in research by the University of Florida and IBM, storage energy consumption is expected to continue to increase as data volumes grow and disk performance and capacity scaling slow:

  • storage capacity per drive is increasing more slowly, which will force the acquisition of more drives to accommodate growing capacity requirements
  • performance improvements per drive have not and will not keep pace with capacity improvements.

Storage will therefore consume an increasing percentage of the energy used by the IT infrastructure. Of the data set being stored, only a small part is active, so it is the same story as for servers: on average, storage operates well below peak load levels. A potential energy reduction of 40-75% by using an energy-proportional system is claimed. According to the same research, several storage energy-saving techniques are available:

  • Consolidation: Aggregation of data into fewer storage devices whenever performance requirements permit.
  • Tiering/Migration: Placement/movement of data into storage devices that best fit its performance requirements.
  • Write off-loading: Diversion of newly written data to enable spinning down disks for longer periods
  • Adaptive seek speeds: Allow trading off performance for power reduction by slowing the seek and waiting an additional rotational delay before servicing the I/O.
  • Workload shaping: Batching I/O requests to allow hard disks to enter low power modes for extended periods, or to allow workload mix optimizations.
  • Opportunistic spindown: Spinning down hard disks when idle for a given period.
  • Spindown/MAID: Keeping disks with unused data spun down most of the time.
  • Dedup/compression: Storing smaller amounts of data by eliminating duplicates and using efficient encodings.

Storage virtualization can also help, but component and system designers should aim to develop machines that consume energy in proportion to the amount of work performed. There is still a long way to go to achieve energy-elastic storage.
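The opportunistic spindown technique from the list above is simple enough to sketch in code. This is an illustrative model, not a real driver: the `Disk` class and the five-minute threshold are assumptions for the example:

```python
import time

IDLE_THRESHOLD_S = 300  # spin down after 5 minutes without I/O (illustrative)

class Disk:
    """Toy model of a disk that opportunistically spins down when idle."""

    def __init__(self, name):
        self.name = name
        self.spinning = True
        self.last_io = time.monotonic()

    def handle_io(self):
        if not self.spinning:
            self.spinning = True  # the spin-up latency penalty is paid here
        self.last_io = time.monotonic()

    def maybe_spindown(self, now=None):
        """Called periodically; spins the disk down if idle long enough."""
        now = time.monotonic() if now is None else now
        if self.spinning and now - self.last_io > IDLE_THRESHOLD_S:
            self.spinning = False  # enter low-power mode
        return self.spinning
```

The trade-off the article mentions is visible in `handle_io`: every spindown saves power, but an I/O that arrives against a stopped disk pays a multi-second spin-up delay, which is why the idle threshold matters.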


According to a paper presented at the USENIX NSDI ’10 conference, “today’s network elements are also not energy proportional: fixed overheads such as fans, switch chips, and transceivers waste power at low loads. Even though the traffic varies significantly with time, the rack and aggregation switches associated with these servers draw constant power.” And again the same recipe comes up: component and system designers should aim to develop machines that consume energy in proportion to the amount of work performed. On the other hand, as explained in the paper, some kind of network optimizer must monitor traffic requirements, choose and adjust the network components to meet the energy, performance and fault-tolerance requirements, and power down as many unneeded links and switches as possible. In this way, average savings of 25-40% of the network energy in data centers are claimed.
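The core decision such a network optimizer makes can be reduced to a back-of-the-envelope calculation: given measured traffic, how many links must stay powered? The capacities and the one-link fault-tolerance margin below are assumptions for illustration:

```python
import math

LINK_CAPACITY_GBPS = 10  # illustrative uplink capacity
TOTAL_LINKS = 8          # illustrative number of uplinks in the bundle
SPARE_LINKS = 1          # keep one extra link up for fault tolerance

def links_needed(traffic_gbps):
    """Number of links to keep powered for the measured traffic level.

    Enough links to carry the load, plus a spare, never fewer than the
    spare margin allows and never more than physically exist.
    """
    needed = math.ceil(traffic_gbps / LINK_CAPACITY_GBPS) + SPARE_LINKS
    return min(max(needed, 1 + SPARE_LINKS), TOTAL_LINKS)

# Overnight, 12 Gbps of traffic needs only 3 of the 8 links powered:
print(links_needed(12))  # -> 3
# At peak, all links stay up:
print(links_needed(75))  # -> 8
```

The claimed 25-40% savings come from exactly this gap between provisioned and needed capacity: most of the day, most links carry little traffic.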


When making servers, storage and the network in data centers energy-proportional, we will also need to take air-conditioning and cooling into account. Fluctuations in energy usage are equivalent to fluctuations in heat, and the question is whether air-conditioning can be zoned up and down quickly enough to cool the particular data center zones that see increased server, storage or network use. As Dave Craven of Spinwave Systems stated in a recent editorial article in the Processor, “Unfortunately, the mechanical systems used to cool and ventilate large data centers haven’t kept up with technological advances seen in the IT world”. “Many buildings where they are putting newer technology and processes are still being heated and cooled by processes designed 20 years ago,” Craven adds. Given that PUE is driven by cooling efficiency (see, for example, the white paper by Trendpoint), cooling looks like the weak spot in creating an energy-elastic data center.

Next step

The idea of ‘disabling’ critical infrastructure components in data centers has long been considered taboo. Any dynamic energy management system that attempts to achieve energy elasticity (proportionality) by powering off a subset of idle components must demonstrate that the active components can still meet the current offered load, allow a rapid inactive-to-active transition, and meet changing load in the immediate future. The power savings must be worthwhile, performance effects must be minimal, and fault tolerance must not be sacrificed.
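The safety condition described above can be stated as a simple admission check. The 25% headroom margin and the capacity units here are hypothetical, chosen only to illustrate the rule:

```python
HEADROOM = 0.25  # keep 25% spare capacity for sudden load changes (assumed)

def can_power_off(component_capacity, total_capacity, current_load):
    """True only if the fleet minus this component still meets the
    current offered load plus a headroom margin for near-future growth."""
    remaining = total_capacity - component_capacity
    return remaining >= current_load * (1 + HEADROOM)

# Powering off a 100-unit server in a 1000-unit fleet at 600 units of load:
print(can_power_off(100, 1000, 600))  # -> True  (900 >= 750)
# The same server at 760 units of load must stay on:
print(can_power_off(100, 1000, 760))  # -> False (900 < 950)
```

Tuning the headroom constant is exactly the control knob the next paragraph describes: a larger margin buys performance and fault tolerance at the cost of energy savings.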

Energy management has emerged as one of the most significant challenges faced by data center operators. Defining this energy management control knob to tune between energy efficiency, performance and fault tolerance must come from a combination of improved components and improved component management. The data center is a dynamic, complex system with many interdependencies. Managing and orchestrating these kinds of systems calls for sophisticated mathematical models and software that uses algorithms to automatically make the necessary adjustments in the system.