Green Cloud Computing. Which way to go?

What is the right cloud architecture for a green and sustainable cloud? Should we consolidate into huge mega data centers, or is there another way to go?

The analogy

Currently, data centers are constructed at the intersection of the electrical energy infrastructure and the network (data) infrastructure. Given the current electrical energy infrastructure, the trend for the moment is consolidation of data centers into mega data centers, with economy of scale as the driving force.

In the book “The Big Switch”, Nicholas Carr makes a historical analysis to argue that data centers, in combination with the Internet, are following the same developmental path as electric power did 100 years ago. At that time companies stopped generating their own power and plugged into the newly built electric grid, which was fed with electric energy produced by huge generic power plants. The Big Switch is the change from today’s proprietary corporate data centers to what Carr calls the world wide computer: basically the cloud, in which a few huge generic data centers provide IT services that will be as ubiquitous, networked and shared as the electricity infrastructure is now. This modern cloud computing infrastructure follows the same structure as the electricity infrastructure: the plant (data center), the transmission network (Internet) and the distribution networks (MAN, (W)LAN) deliver processing power and storage services to all kinds of end devices.

So this is a nice analogy, but is it the right one? Is the current power grid architecture able to accommodate the ever-rising energy demand? And by taking the current power grid architecture as an example for the cloud infrastructure architecture, do we really get a sustainable, robust IT infrastructure by centralizing IT services in mega data centers?

Not everybody follows the line of reasoning of The Big Switch.

A hitch in the network

While previous studies of energy consumption in cloud computing have focused only on the energy consumed in the data center, researchers from the University of Melbourne in Victoria, Australia [1] found that transporting data between data centers and local computers can consume even larger amounts of energy than storing it. They investigated the use of cloud computing for storage, software, and processing services, on both public and private systems.

The reduction of energy consumption depends on the use case. Used infrequently and at low intensities, cloud computing can consume less power than conventional computing. But at medium and high usage levels, transport dominates the total power consumption and greatly increases overall energy consumption. The researchers explain that home computer users can achieve significant energy savings by using low-end laptops for routine tasks and cloud processing services for infrequent, computationally intensive tasks, instead of using a mid- or high-end PC. For corporations, it is less clear whether the transport energy saved by a private cloud, compared to a public cloud, offsets the private cloud’s higher energy consumption.

A hitch in the power grid 

A very specific characteristic of an electrical power infrastructure is that there is no storage. Demand and supply must therefore be equal, in equilibrium, or the infrastructure risks shutting down. A controlling agency must coordinate the dispatch of electricity generating units to meet the expected demand across the power grid. With ever-fluctuating energy demand, this is a complex management task.

Another issue is that the power grid suffers huge energy losses: the loss from primary energy source to the actual delivery of electrical power at the data center is almost 70%. A traditional power plant converts only about a third of the primary energy into electricity (around 67% conversion loss), and the transmission grid loses another 8-10% of what remains, so roughly 0.33 × 0.91 ≈ 30% of the original energy is actually delivered. In some parts of the world there are critical locations, also known as Critical Areas for Transmission Congestion, where there is insufficient capacity to meet demand at peak periods [2].

Indirectly, mega data centers are part of this 70% energy loss in the power grid, and of the grid’s capacity and delivery issues.

These examples show that with a traditional scale-up of capacity by centralization, and a simple-minded reach for economy of scale, we neglect the trade-offs: the growing management complexity of the central node and the capacity issues of the network.

The analogy one step beyond

But where Carr stops using the power grid analogy, we can take it one step further. Current developments in the electrical energy infrastructure show local power generation based on alternative, renewable energy sources such as wind and solar energy. Local power generation that, with improvements of the current technology, could even lead to local energy self-sufficiency. The two kinds of approaches can even be mixed in a hybrid service model, where a macro, centralized delivery model works together with a localized delivery model, using intelligent two-way digital technology to control the power supply.

Using this as an analogy, another cloud industry development, a next step or next phase, can be envisioned.

Taking another direction

Instead of relying only on a cloud with centralized mega data centers, there is another solution, another paradigm, that focuses much more on intelligent localized delivery of services and local power generation: the micro data center.

This new distributed systems architecture, with a swarm of collaborating data centers, should create a sustainable distributed data center grid and take care of the issues that accompany a centralization approach: a cloud architecture where data centers are scaled out instead of up.

In delivering computer processing power and storage capacity there are two opposite cloud computing approaches: the mega data center (“bigger is better”) and the local micro data center (“small is beautiful”). The current “bigger is better” model of cloud computing still leads, although the costs have shifted from customer to supplier, to enormous capital expenditures, to problems in power usage, cooling and power supply, and to structural vulnerabilities in terms of resiliency and availability of the infrastructure. The alternative peer-to-peer data center approach raises questions about delivering enough processing power, network capacity and network supply, and about the governance of such a distributed system.

Is this so-called Energy Self-Sufficient Data Center concept science fiction?

Examples

An example of this hybrid approach is being developed in Amsterdam by the OZZO project. The OZZO Project’s mission is to ‘Build an energy-neutral data center in the Amsterdam Metropolitan Area before the end of 2015. This data center produces by itself all the energy it needs, with neither CO2 emission nor nuclear power.’ According to OZZO, the data center should function within a smart three-layer grid for data, electrical energy, and thermal energy. Data processing and storage move fluidly over the grid in response to real-time local facility and energy intelligence, always looking for optimum efficiency.

Another example of the distributed data center concept is a new paper from Microsoft Research, The Data Furnace: Heating Up with Cloud Computing [3]. According to this research, the problem of heat generation in data centers can be turned into an advantage: computers can be placed directly into buildings to provide low-latency cloud computing for their offices or residents, and the heat they generate can be used to heat the building. By piggybacking on the energy used for building heating, the IT industry could grow in size for some time without increasing its carbon footprint or its load on the power grid and generation systems.

Future

How do we create a green and sustainable cloud computing industry? Just scaling up by consolidating data centers into huge mega data centers with the help of the current power grid is too simplistic, and it creates all kinds of issues. Using developments in the power grid infrastructure as an analogy, we can envision another solution direction: creating a smart grid of micro data centers. But a lot of research still has to be done before we have a working data center grid.

For the current moment, with the trend of consolidating data centers into mega data centers driven by economy of scale, the emphasis should be on data center efficiency and the usage of renewable energy. We should take the Jevons paradox into consideration (increases in the efficiency of using a resource tend to increase the usage of that resource), but we should also appreciate that every kilowatt that isn’t used doesn’t have to be generated.

[Article is also published on Cloud Computing Economics blog]

[1] Jayant Baliga et al., “Green Cloud Computing: Balancing Energy in Processing, Storage, and Transport,” Proceedings of the IEEE, Jan. 2011.

[2] DOE, “National Electric Transmission Congestion Study,” 2006.

[3] Jie Liu et al., “The Data Furnace: Heating Up with Cloud Computing,” Microsoft Research, June 2011.

Texas escaped rolling blackouts: Data centers and the power grid interdependency

In Texas, a state with data centers of several notable IT companies, including WordPress.com, Cisco, Rackspace and HostGator, the power grid operator ERCOT has been working around the clock to keep the electricity flowing and to avoid rolling blackouts as power demand reaches record levels.

According to The Wall Street Journal, for the second year in a row ERCOT underestimated summer demand in its forecasts. ERCOT’s forecasts are based on an average of the past 10 summers, but the past two years have been unusually hot, and this is pushing up energy use. With almost 40 consecutive days of temperatures of more than 37 °C (100 °F), it was the hottest start to August in Texas history. The drought in the southern U.S. is exceptional, as can be seen in the map below; see also the 12-week animation of the U.S. Drought Monitor.

[Figure: U.S. Drought Monitor map]

Texas has its own power grid, regulated and managed by the Electric Reliability Council of Texas (ERCOT). The Texas Interconnection supplies its own energy and is completely independent of the Eastern and Western Interconnections, which means that Texas can’t get help from other places when it runs short of power.

On the second of August, ERCOT even put out a notice saying the state’s reserve levels had dropped below 2,300 megawatts, putting into effect an Energy Emergency Alert level 1. “We are requesting that consumers and businesses reduce their electricity use during peak electricity hours from 3 to 7 p.m. today, particularly between 4 and 5 p.m. when we expect to hit another peak demand record,” said Kent Saathoff, vice president of system planning and operations. “We do not know at this time if additional emergency steps will be needed.” ERCOT has a peak capacity of only 73,000 megawatts this time of year, and about 3,000 megawatts is offline for repairs at any given time. ERCOT recorded an all-time peak demand for electricity of 68,296 megawatts, and thus narrowly avoided instituting rolling blackouts.

[Figure: Texas energy demand]

According to an Aug. 2 blog article by Elizabeth Souder of the Dallas Morning News, “The high temperatures also caused about 20 power plants to stop working, including at least one coal-fired plant and natural gas plants.” Souder noted that a spokesman for ERCOT “said such outages aren’t unusual in the hot summer…”

The demand for energy sent prices sky high, topping out at $2,500 per megawatt-hour on Friday afternoon, more than 50 times the on-peak wholesale average, according to the U.S. Energy Information Administration.

[Figure: ERCOT energy prices]

The power plants are also under siege in a different way. The drought exposes a structural problem with the U.S. energy sector: it needs a lot of water to operate. Power plants account for 49 percent of the nation’s water withdrawals, according to the U.S. Geological Survey. Levels of “extreme” and “exceptional” drought grew to cover 94.27 percent of the state of Texas. The drought and triple-digit (Fahrenheit) temperatures have broken numerous records and have already left the southern Plains and Mississippi Valley struggling to meet demand for power and water. A prolonged drought such as the one in Texas can force power plants to shut down because their supply of circulating cooling water runs out or the cooling water is no longer cool enough, as happened in 2007 when several power plants had to shut down or run at lower capacity because there was not enough water. As shown in a study from the University of Texas at Austin, alternative cooling technologies, such as cooling towers and hybrid wet-dry or dry cooling, present opportunities to reduce water diversions.

Although we didn’t hear much from the data center operators about the current threat to the power grid, the Texas case shows very clearly the interdependency between data centers, as huge energy consumers, and the power grid, the water distribution systems, and weather and climate conditions.

Data centers are part of a complex electrical power value chain. People are mostly unaware of this value chain and of its energy losses. As a customer of cloud computing and/or data center services, but also as a data center provider, you must have a good understanding of the power grid to appreciate the risks at stake in terms of resiliency and business continuity. The power grid and the water distribution systems are struggling to survive a record-breaking drought across the southern United States. That is also a wake-up call for data center users and providers to rethink the energy efficiency and energy consumption of their data centers.

Saving a kilowatt at the end of this power value chain saves a lot of energy and can offer some relief to the current power grid. It can be shown that saving 1 unit of power consumption in information processing saves about 98 units upstream in the power value chain; see the blog entry Following the data center energy track.

Lefdal mines: green mineral went out, green data center goes in

[Figure: the huge Lefdal mine system]

According to a press release of the Norwegian government, IBM and Lefdal Mine have agreed on a memorandum of understanding to develop Lefdal Mine in Nordfjord, Sogn og Fjordane, into one of the largest and leading centers for green data storage in Europe.

Lefdal Mine is one of several sites in Norway that have strongly committed themselves to housing green data centers. This can be an important growth industry for Norway, particularly in rural areas, which have large advantages in the supply of locally produced power, large and inexpensive land, and stable geological conditions. This is a competitive advantage when facilitating this type of value creation. The letter of intent shows that Norway is an attractive country for this new industry.

This industry will offer exciting technological challenges for different players, on the IT side, the security side and the operational side. It can mean knowledge-intensive jobs for the local communities that host these centers.

“I am happy about this letter of intent and hope it will have positive effects for rural Norway. It can also help to provide new knowledge jobs and create value in new and future-oriented industries. Such jobs help to ensure settlement across the country,” says Local Government and Regional Minister Liv Signe Navarsete.

“I congratulate IBM and Lefdal Mine on the letter of intent. The need for data storage is increasing rapidly. Norway has all the prerequisites for providing good and environmentally friendly solutions to demanding international pioneer companies like IBM. We have natural cooling, a good technology and capital environment, good broadband infrastructure and stable conditions, both geologically and politically. This is sustainable value creation of the kind we want more of,” says Minister of Trade and Industry Trond Giske.

Lefdal Mine is one of several Norwegian initiatives to construct large data centers inside mountains; see the blog entries ‘Green data centers: digging up the mountains’ and ‘Underground green datacenter city’.

The Greening of IT: Successes, Failures, and the Future?

The Uptime Institute uploaded an interesting discussion.

It is the keynote panel discussion from Uptime Institute Symposium 2011, in which George Goodman (Climate Savers Computing Initiative), Jon Haas (The Green Grid), KC Mares (Silicon Valley Leadership Group), Bruce Myatt (Critical Facilities Round Table), and Pitt Turner (Uptime Institute) join Andy Lawrence (The 451 Group) to discuss how much the industry has improved its sustainability performance in the past three years and where it should go next.

And as usual, the discussion focuses on the supply side (data center providers), technology and energy measurements.

According to the common view, Green IT comes down to implementing technical measures. The idea is that, given more efficient power usage of servers, storage and network components, virtualization, and better power and cooling management in data centers, the problems can be solved. But is this really true? The reason IT is not green at this moment is at least as much due to perverse incentives. Green IT is about power and money, about raising barriers to trade, segmenting markets and differentiating products. Many of the problems can be explained better and more convincingly using the language of economics: supply chains, asymmetric information, moral hazard, switching and transaction costs, and innovation. Green IT is not a technical problem but an economic problem to be solved. That is the reason greening IT is difficult.

Download the book Greening IT at http://greening.it to get a better grip on and understanding of these subjects.

Will ISO Energy Management Standard ISO 50001 make the difference?

On schedule, the new international ISO standard for energy management, ISO 50001, is now available on the ISO website www.iso.org. ISO 50001 establishes a framework to manage energy for industrial plants; commercial, institutional, or governmental facilities; or entire organizations. Targeting broad applicability across national economic sectors, ISO estimates that the standard could influence up to 60% of the world’s energy use.

The above estimate is based on information provided in the section “World Energy Demand and Economic Outlook” of the International Energy Outlook 2010, published by the US Energy Information Administration. It cites 2007 figures on global energy consumption by sector, including 7% by the commercial sector (defined as businesses, institutions, and organizations that provide services) and 51% by the industrial sector (including manufacturing, agriculture, mining, and construction).

As ISO 50001 is primarily targeted at the commercial and industrial sectors, adding the above figures gives an approximate total of 60% of global energy demand on which the standard could have a positive impact.

In addition, ISO is launching the standard on 17 June at the Geneva International Conference Centre (CICG). Presentations on the following themes are planned:

  • ISO 50001 within the context of ISO standards in general and how they can contribute to solving global problems
  • A description of ISO 50001 and its benefits
  • How the standard was developed, who was involved and how they overcame challenges
  • What ISO 50001 can do for developing countries.

Some supporting documents have also become available, such as a general explanation of the standard, a special edition of the ISO magazine ISO Focus+, and a video.

The question is: will this ISO standard make a difference in reducing energy consumption? There is a well-known efficiency paradox.

The effect that increases in energy efficiency raise energy consumption is known in economics as the Khazzoom-Brookes postulate. It is explained by the fact that at the micro level increases in energy efficiency lead to lower energy costs, while at the macro level increases in energy efficiency lead to increased economic growth. The Khazzoom-Brookes postulate is a special case of what in economics is called the Jevons paradox: increases in the efficiency of using a resource tend to increase the usage of that resource.

How do you manage this paradox?

Cure against data center sprawl: consolidation and cloud computing

Last year, Federal Chief Information Officer Vivek Kundra said at a policy forum that in the past decade the number of data centers operated by the U.S. government has skyrocketed from 432 to more than 1,200. Kundra stated: “Now, when you think about these data centers, one of the most troubling aspects about the data centers is that in a lot of these cases, we’re finding that server utilization is actually around seven percent, that’s unacceptable when you think about all the resources that we’ve invested. And the other thing we’re finding is that in terms of energy consumption, that the trajectory, it’s a one-way street where we continue to consume more and more energy, and these data centers tend to be energy hogs, and we need to find a fundamentally different strategy as we think about bending this curve as far as data center growth is concerned.”

A Federal Data Center Consolidation Initiative followed. In December 2010, the Federal CIO made data center consolidation a key tenet of the comprehensive 25-Point Implementation Plan to Reform Federal IT Management. These reforms must change the status quo by:

• Using a modular approach, driving average size and duration down and success rates up on nearly $50 billion of IT program spend

• Improving yield on $24 billion in IT infrastructure spending and shifting spending from redundant, underutilized infrastructure to mission-priority programs

• Utilizing a “Cloud First” approach, provisioning solutions on demand at up to 50% lower per-unit cost

Under this plan, agencies will close at least 800 data centers by 2015, a reduction of approximately 40%. As part of this consolidation initiative, 137 data centers will be closed by the end of 2011.

Now the first results have been presented (view the presentation). Of the 137 data centers that must be closed this year, 39 have already been closed.

[Figure: data center sprawl and consolidation]

An interactive map is available to track the data center consolidation progress to date.

Agencies were also required to identify at least three services to move to the cloud. A full list can be downloaded here. Some examples:

  • Email/Collaboration: 15 agencies identified approximately 950,000 mailboxes and over 100 email systems that will move to the cloud
  • Infrastructure: DOJ is consolidating storage solutions across 250 locations for 18,000 U.S. Attorneys to a single cloud platform
  • Workflow: USDA is consolidating over 20 document and correspondence systems into a single agency-wide cloud solution
  • Back Office: Hundreds of human resource and financial management systems will be consolidated in the cloud

All very impressive figures on cost reduction, time to deliver and energy savings. Which government will follow with such an ambitious program?

Greener IT Can Form a Solid Base For a Low-Carbon Society

Precisely a year ago we launched the book Greening IT in print and online (free to download). And if I may say so, the book is still worth reading.

The book aims at promoting awareness of the potential of Greening IT, such as Smart Grid, Cloud Computing, Thin Clients and Greening Supply Chains. The chapter “Why Green IT is Hard – An Economic Perspective” is my contribution to this book. See Greening IT and read the following press release.

Press release Greening IT

Information Technology holds a great potential in making society greener. Information Technology will, if we use it wisely, lead the way to resource efficiency, energy savings and greenhouse gas emission reductions – taking us to the Low-Carbon Society.

“The IT sector itself, responsible for 2% of global greenhouse gas emissions, can get greener by focusing on energy efficiency and better technologies – we call this Green IT. Yet, IT also has the potential to reduce the remaining 98% of emissions from other sectors of the economy – by optimising resource use and saving energy etc. We call this the process of Greening IT. IT can provide the technological fixes we need to reduce a large amount of greenhouse gas emissions from other sectors of society and obtain a rapid stabilisation of global emissions. There is no other sector where the opportunities for greenhouse gas emission reductions, through the services provided, holds such a potential as the IT industry”, says Adrian Sobotta, president of the Greening IT Initiative, founding editor and author of the book.

In her foreword to the book, European Commissioner for Climate Action, Connie Hedegaard writes: “All sectors of the economy will need to contribute…, and it is clear that information and communication technologies (ICTs) have a key role to play. ICTs are increasingly recognised as important enablers of the low-carbon transition. They offer significant potential – much of it presently untapped – to mitigate our emissions. This book focuses on this fundamental role which ICTs play in the transition to a low-carbon society.”

The book aims at promoting awareness of the potential of Greening IT, such as Smart Grid, Cloud Computing and thin clients. It is the result of an internationally collaborative, non-profit making, Creative Commons-licensed effort – to promote greening IT.

“There is no single perfect solution; Green IT is not a silver bullet. But already today, we have a number of solutions ready to do their part of the work in greening society. And enough proven solutions and implementations for us to argue not only that IT has gone green, but also that IT is a potent enabler of greenhouse gas emission reductions”, says Adrian Sobotta.

It is clear that the messages in the book put a lot of faith into technologies. Yet, technologies will not stand alone in this immense task that lies before us. “Technology will take us only so far. Changing human behaviour and consumption patterns is the only real solution in the longer-term perspective”, continues Adrian Sobotta. IT may support this task, by confronting us with our real-time consumption – for instance through Smart Grid and Smart Meters – thereby forcing some of us to realise our impact.

But technologies, such as green information technologies, are not going to spread by themselves. Before betting on new technologies, we need to establish long-term security of investments. And the only way to do this is to have an agreed long-term set of policy decisions that create the right incentives to promote the development we need.

Emerald: greening data storage

The primary function of a data center is data processing, which is provided by the combination of servers, storage and networking.

But how much data center energy usage is due to storage? This depends on the purpose of the data center in general, or more specifically on the application (for example online transaction processing, multimedia content delivery, or computationally intensive loads), the accompanying I/O profiles, and the storage design decisions on data integrity, availability, and reliability.

The EPA Report to Congress on Server and Data Center Energy Efficiency suggests that servers on average account for about 75 percent of total IT equipment energy use, storage devices for around 15 percent, and network equipment for around 10 percent.

This estimated ratio is changing: according to the SNIA, the proportion of energy used by storage is increasing because of consolidation and virtualization of servers.

If we consider the enormous growth of data being stored (57% CAGR during 2006-2011) and that, according to SNIA, only 25% of the ‘digital universe’ is unique while 75% consists of replicas and duplicates (partly to ensure data integrity and survivability, partly wasteful), we have something to win in power consumption.

[Figure: cost of data storage]

Green storage technologies must lead to less raw capacity needed to store and use the same data set, but also to improvements in:

  • Operational envelope (ASHRAE)
  • Speed of Disks
  • Use of appropriate RAID levels
  • Disk utilization

For a better understanding of storage-specific power and cooling data, you must also be aware of the difference between full-load, stand-by and idle power situations.

With the Emerald™ Program, SNIA wants to provide public access to storage system power usage and efficiency data. The SNIA has developed a test methodology and metric for measuring the power used by storage systems. Two metrics are being developed: an active and a passive test metric. Later, SNIA will also provide metrics, or addenda to the existing metrics, that take additional storage factors into account, e.g. data deduplication. Acquiring the metrics requires following a test plan, and results will be posted on the SNIA Emerald website. The full SNIA Emerald Program is scheduled to be operational in the second half of 2011.

For more information, read the SNIA tutorial on green storage.

Data centers: the men in the basement get noticed

“Data centers are attracting the attention of more CEOs and CFOs, not just for their energy use, but also because of the cost of facilities and the digital infrastructure – the entire system by which companies satisfy their computing needs,” said Martin McCarthy, Executive Chairman of Uptime Institute and CEO of The 451 Group at the Uptime Institute Symposium 2011.

[Cartoon: (c) 5th Wave]

Although IT infrastructure delivers no direct business value, much of the business value is created in business processes that depend upon a solid and stable IT infrastructure. With an IT infrastructure in place you can run applications, but they can’t deliver any value without the physical IT infrastructure of servers, storage, network components and data center facilities. So it is good news that the men in the basement are getting noticed by senior management.

At the symposium, some interesting figures from the Uptime Institute data center industry survey were presented:

  • 36% of data center facilities would run out of space, power or cooling in 2011-2012.
  • Less than 20% of respondents’ IT departments pay the data center power bill.
  • 73% of respondents said their facilities or real estate department pays the data center power bill.
  • 8% didn’t know who pays the data center power bill.
  • The most widely used energy efficiency improvement techniques, according to the Uptime survey, are server virtualization, used by 82% of respondents; hot aisle/cold aisle containment, used by 77%; and power monitoring and measurement, used by 67%.
  • 57% of respondents have raised the inlet temperatures on chillers, basically running their data centers at a higher temperature.
  • 46% of respondents are using variable speed drives, which allow cooling fans to adjust their speed depending on the temperature.

For energy usage, the question “Who is responsible, who feels the pain and who takes action?” is still waiting for a definite answer. The current financial structure makes it far more difficult to drive data center energy efficiency in the right direction. It would be better if the IT department paid the energy bill; that would give a real incentive to become more effective and efficient.

See also the blog entry “IT energy efficiency spark accounting debate …”.

Aging servers are big energy consumers in the data center

“The most wasteful energy consumers in data centers (especially in low-PUE data centers) are inefficient servers,” according to Winston Saunders, who published a very interesting post on Intel’s “The Server Room Blog”.

As shown in the figure, in this particular data center servers older than 2007 consume 60% of the energy but contribute only an estimated 4% of the compute capability.

[Figure: William Carter, “Elephant in the Data Center”]

Based on this issue, and on the fact that server efficiency has closely followed Moore’s law, he proposes a new data center metric: Server Usage Effectiveness, or SUE.

[Figure: the SUE formula]

With an age of 0 years, SUE = 1.0 (today’s servers). With an age of 3 years, SUE = 2.8, which implies that you have approximately 2.8 times the number of servers you actually need (based on current data center productivity and workload).

The blog goes even one step further by combining SUE with PUE, defining the Total data center Usage Effectiveness:

TUE = PUE x SUE

This is a very interesting, pragmatic approach to improving the energy consumption and energy efficiency of the data center. The power supply to the IT infrastructure (servers, storage and network) in a data center is not very effective: there is a tremendous loss in the energy supply chain (from power station to power grid, UPS, PDU, and so on), so that in the end only a couple of percent of the original (power plant) energy is actually used for data processing. The only way to get quick results is to start on the demand side. This part of the infrastructure (the IT equipment) is relatively easy to change (months, instead of years for components upstream in the energy supply chain). And last but not least, a small result at the end of the chain gives a huge result upstream.

Just as with PUE, a first-order approximation is good enough to find the “low-hanging fruit”. I was even wondering whether you could simplify things by just using the average age of the servers, instead of making cohort statistics as shown in the blog.

As for the power consumption of servers versus other IT equipment (storage and network devices): the EPA Report to Congress on Server and Data Center Energy Efficiency suggests that servers on average account for about 75 percent of total IT equipment energy use, storage devices for around 15 percent, and network equipment for around 10 percent.

I’m very interested in the follow-up to Saunders’ blog entry.