World Cup 2014: did you record a drop in network energy use?

Large events can have a serious impact on IT infrastructure. A great example of this is the current football World Cup 2014.

RIPE NCC, one of the five Regional Internet Registries (RIRs), does a great job of collecting network traffic data during the event. In collaboration with the European Internet Exchange Association (Euro-IX), they follow Internet Exchange Point (IXP) traffic during the championship.

Here are two examples from their research. The following graphs show the traffic on the match day in red and the same day of the week in the preceding weeks as grey lines. The game times are indicated as grey rectangles in the background.

 The Netherlands – Spain

The traffic volume during the Netherlands–Spain game differs by 256 terabytes from the week before!


Traffic statistics at the Amsterdam Internet Exchange (AMS-IX).

 

Cameroon – Brazil


Traffic statistics at the PTT Metro IXP in Sao Paulo, Brazil.

These traffic drops make you wonder how much energy was saved during these games.

More information about the network traffic can be found at RIPE NCC, and more will follow during the World Cup.

Data Center 2.0 – The Sustainable Data Center, Now Available!

Data Center 2.0 – The Sustainable Data Center is now available.

The book is showing up on Amazon's websites and will soon start to pop up on the websites of other e-tailers.

Data Center 2.0 – The Sustainable Data Center is an in-depth look into the steps needed to transform modern-day data centers into sustainable entities.

See the press release:

Some nice endorsements were received:

“Data Center 2.0, is not so much about technology but about people, society and economic development. By helping readers understand that even if Data Centers, enabling the Digital economy, are contributing a lot to energy saving, they need to be sustainable themselves; Rien Dijkstra is on the right track. When explaining how to build sustainable Data Centers, through multi disciplinary approach, breaking the usual silos of the different expertise, Rien Dijkstra is proposing the change of behavior needed to build sustainable Data Centers. Definitely it is about people, not technology.” 

Paul-Francois Cattier, Global Senior Vice-President Data Center – Schneider Electric

“In Data Center 2.0 The Sustainable Data Center author Rien Dijkstra has gone several steps further in viewing the data center from the perspective of long term ownership and efficiency in combination with treating it as a system. It’s an excellent read with many sections that could be extracted and utilized in their own right. I highly recommend this read for IT leaders who are struggling with the questions of whether to add capacity (co-locate, buy, build, or lease) or how to create a stronger organizational ownership model for existing data center capacity. The questions get more complex every year and the risks more serious for the business. The fact that you’re making a business critical decision that must stand the test of technology and business change over 15 years is something you shouldn’t take lightly.” 

Mark Thiele, President and Founder Data Center Pulse

“Data centers used to be buildings to house computer servers along with network and storage systems, a physical manifestation of the Digital Economy. Internet of Things, the digitization of about everything in and around us, brings many profound changes. A data center is the place where it all comes together. Physical and digital life, fueled by energy and IT, economical and social demands and needs and not to forget sustainability considerations. Sustainable data centers have a great potential to help society to optimize the use of resources and to eliminate or reduce wastes of capital, human labor and energy. A data center in that sense is much more than just a building for servers. It has become a new business model. Data center 2.0 is a remarkable book that describes the steps and phases to facilitate and achieve this paradigm.” 

John Post, Managing Director – Foundation Green IT Amsterdam region

Data Center 2.0 – The Sustainable Data Center

Currently busy with the final steps to get the forthcoming book ‘Data Center 2.0 – The Sustainable Data Center’ (ISBN 978-1499224689) published at the beginning of the summer.

Some quotes from the book:

“A data center is a very peculiar and special place. It is the place where different worlds meet each other. A place where organizational (and individual) information needs and demands are translated in bits and bytes that are subsequently translated in electrons that are moved around the world. It is the place where the business, IT and energy world come together. Jointly they form a jigsaw puzzle of stakeholders with different and sometimes conflicting interests and objectives that are hard to manage and to control.

Given the great potential of Information Technology to transform today’s society into one characterised by sustainability, what is the position of data centers?

……..

The data center is the place where it all comes together: energy, IT and societal demands and needs.

…….

A sustainable data center should be environmentally viable, economically equitable, and socially bearable. To become sustainable, the data center industry must free itself from the shackles of 19th-century ideas and concepts of production. They are too simple for our 21st-century world.

The combination of service-dominant logic and cradle-to-cradle makes it possible to create a sustainable data center industry.

Creating sustainable data centers is not a technical problem but an economic problem to be solved.”

The book takes a conceptual approach to the subject of data centers and sustainability. It offers multiple views and aspects of sustainable data centers to give readers a better understanding and to provoke thought on how to create them.

The book has already received endorsements from Paul-Francois Cattier, Global Senior Vice-President Data Center at Schneider Electric, and John Post, Managing Director of Foundation Green IT Amsterdam region.

Table of contents

1 Prologue
2 Signs Of The Time
3 Data Centers, 21st Century Factories
4 Data Centers A Critical Infrastructure
5 Data Centers And The IT Supply Chain
6 The Core Processes Of A Data Center
7 Externalities
8 A Look At Data Center Management
9 Data Center Analysis
10 Data Center Monitoring and Control
11 The Willingness To Change
12 On The Move: Data Center 1.5
13 IT Is Transforming Now!
14 Dominant Logic Under Pressure
15 Away From The Dominant Logic
16 A New Industrial Model
17 Data Center 2.0

Datacenters: The Need For A Monitoring Framework

For proper usage of, and collaboration between, BMS, DCIM, CMDB and similar systems, the use of an architectural framework is recommended.

CONTEXT

A datacenter is basically a value stack: a supply chain of stack elements where each element is a service component (people, process and technology that add up to a service). For each element in the stack the IT organization has to assure the quality as agreed upon. In essence these quality attributes are performance/capacity, availability/continuity, confidentiality/integrity and compliance, and nowadays also sustainability. One of the greatest challenges for the IT organization was and is to coherently manage these quality attributes for the complete service stack or supply chain.

Currently a mixture of management systems is used to manage the datacenter service stack: BMS, DCIM, CMDB, and System & Network Management Systems.

GETTING RID OF THE SILOES

As explained in “Datacenters: blending BIM, DCIM, CMDB, etc.”, we are still talking about working in silos, where each participant involved in the life cycle of the datacenter uses its own information sets and systems. To achieve real overall improvements (instead of local optimization successes), better collaboration and information exchange between the different participants is needed.

FRAMEWORK

To steer and control datacenter usage successfully, a monitoring system must be in place. Accepting the fact that the participants use different systems, we have to find a way to improve the collaboration and information exchange between those systems. Therefore we need some kind of reference: an architectural framework.

For designing an efficient monitoring framework, it is important to assemble a coherent system of functional building blocks or service components. Loose coupling and strong cohesion, encapsulation, and the use of the Facade and Model–View–Controller (MVC) patterns are strongly recommended because of the many proprietary solutions involved.
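As a minimal sketch of the Facade idea described here, the snippet below wraps a proprietary metering API behind a standard service interface. All class and method names are hypothetical illustrations, not part of any actual vendor or DCIM product:

```python
from abc import ABC, abstractmethod


class FacilityUsageService(ABC):
    """Standard 'Facility usage service' interface (hypothetical name)."""

    @abstractmethod
    def current_draw_kw(self) -> float:
        """Return the current power draw in kilowatts."""


class VendorMeterFacade(FacilityUsageService):
    """Facade hiding one vendor's proprietary metering API behind the
    standard interface, so consumers never see vendor specifics."""

    def __init__(self, vendor_client):
        self._client = vendor_client  # proprietary API object

    def current_draw_kw(self) -> float:
        # Suppose this vendor reports watts; the facade normalises units.
        return self._client.read_power_watts() / 1000.0
```

A DCIM or reporting system would then depend only on `FacilityUsageService`, so a meter (and its facade) can be ripped and replaced without touching any consumer.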

BUILDING BLOCKS

Based on an earlier blog about energy monitoring, a short description of the most common building blocks follows:

  • Most vendors have their own proprietary APIs to interface with the metering devices. Because metering differs within and between data centers, these differences should be encapsulated in standard ‘Facility usage services‘: services for the primary, secondary and tertiary power supply and usage, the cooling, and the air handling.
  • For IT infrastructure (servers, storage and network components) usage we have the same kind of issues, so the same recipe must be used: encapsulation of proprietary APIs in standard ‘IT usage services‘.
  • Environmental conditions outside the data center (the weather) influence the data center, so proper information about them must be made available by a dedicated Outdoor service component.
  • For a specific data center a DC Usage Service Bus must be available to have a common interface for exchanging usage information with reporting systems.
  • The DC Data Store is a repository (Operational Data Store or Data Warehouse) for datacenter usage data across data centers.
  • The Configuration management database(s) (CMDB) is a repository with the system configuration information of the Facility Infrastructure and the IT infrastructure of the data centers.
  • The Manufacturers’ specification database stores specifications/claims of components as provided by the manufacturers.
  • The IT capacity database stores the available capacity (processing power and storage) size that is available for a certain time frame.
  • The IT workload database stores the workload (processing power and storage) size that must be processed in a certain time frame.
  • The DC Policy Base is a repository with all the policies, rules, targets and thresholds about the datacenter usage.
  • The Enterprise DC Usage Service Bus must be available to provide a common interface for exchanging policies, workload, capacity, CMDB, manufacturers’ and usage information of the involved data centers with reporting systems.
  • The Composite services deliver different views and reports of the energy usage by assembling information from the different basic services by means of the Enterprise Bus.
  • The DC Usage Portal is the presentation layer for the different stakeholders that want to know something about the usage of the Datacenter.
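To illustrate the kind of exchange the DC Usage Service Bus could standardise, here is a hypothetical, normalised usage message. The field names and values are assumptions for illustration, not a published schema:

```python
import json
from datetime import datetime, timezone


def make_usage_message(datacenter: str, service: str, metric: str,
                       value: float, unit: str) -> str:
    """Serialise one normalised usage reading as a bus message (JSON)."""
    payload = {
        "datacenter": datacenter,
        "service": service,    # e.g. one of the 'Facility usage services'
        "metric": metric,
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)


# A cooling-power reading published on the bus for one data center.
msg = make_usage_message("AMS-1", "cooling", "power_draw", 120.5, "kW")
```

Because every producer emits the same message shape, the composite services and the DC Usage Portal can consume readings without knowing which vendor system produced them.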

 DC Monitoring Framework

ARCHITECTURE APPROACH

The use of an architectural framework (reference architecture) is a must to get a monitoring environment working. The modular approach, focussed on standard interfaces, makes it possible to “rip and replace” components. It also makes it possible to extend the framework with other service components. The service bus provides a standard exchange of data (based on messages) between the applications and prevents the creation of dedicated, proprietary point-to-point communication channels. To get this framework working, a standard data model is also mandatory.

Datacenters: blending BIM, DCIM, CMDB, etc.

How to manage the life cycle of a datacenter in a rapidly changing environment where so many stakeholders are involved?

Context

A datacenter is a very special place where three different worlds and groups of people meet: there is the facility group whose focus is on the building, there is the IT infrastructure group focused on the IT equipment housed within it, and there is the IT applications group focused on the applications that run on the IT equipment. All with different objectives and incentives.

This worked fine when changes were highly predictable and came relatively slowly. But times have changed. Business demands drive the usage of datacenters, and these demands have changed: large dynamic data volumes, stringent service-level demands, ever-higher application availability requirements and changing environmental requirements must be accommodated more swiftly than ever.

Business demands and rapidly advancing information technology have led to constant replacement of IT infrastructure. This pace of replacement is not in sync with the changes of the site infrastructure. The components of power, cooling and air handling last longer (about 10 years) than IT infrastructure (two to five years). The site infrastructure often ends up mismatched with the facility demands of the IT infrastructure. While technically feasible, changing the site infrastructure of operational data centers may not always make sense. For some data centers, the cost savings do not justify the cost of renewing the site infrastructure. For others, the criticality of their function to the business prohibits downtime and inhibits facility managers from making major overhauls to realise improvements. This makes it difficult to continually optimise data centers in such a rapidly changing environment.

IT Management

One of the most significant challenges for the IT organisation was and is to coherently manage the quality attributes for the complete IT service stack or IT supply chain (including the facility / site infrastructure).

The IT department has long tried to manage the IT environment with System & Network Management Systems and Configuration Management Databases (CMDBs), while the Facility department uses Building Management Systems (BMS) to monitor and control the equipment in an entire building. Until recently there was a disconnect between the facility and the IT infrastructure. To get rid of the limited visibility and control of the physical layer of the data center, we see the rise of a new kind of system: the Data Center Infrastructure Management (DCIM) system.

But there is still another gap to be bridged. The power and cooling capacity and resources of a data center are largely set by the original MEP (Mechanical, Electrical, Plumbing) design and the data center location choice. The Facility/MEP design sets an ‘invisible’ boundary for the IT infrastructure. And just as in the IT world, in the Facility world there is knowledge and information loss between the design, build and production/operation phases.

Knowledge Gaps

BIM

To solve this issue, the Facility world is increasingly using Building Information Modeling (BIM) systems. BIM is a model-centric repository that supports the business processes of planning, designing, building and maintaining a building. In other words, a system to facilitate coordination, communication, analysis and simulation, project management and collaboration, asset management, and maintenance and operations throughout the building life cycle.

The transition to a BIM-centric design approach fundamentally changes the Architect, Engineer, Contractor (AEC) process and workflow by changing the way project information is shared, coordinated, and reviewed. But it also extends the workflow by integrating one of the most important players in the AEC workflow: the operators.

Dynamic information about the building, such as sensor measurements and control signals from the building systems, can be incorporated within BIM to support analysis of building operation and maintenance.

Working in Silos

Although some local improvements in sharing information can be made with BIM, DCIM, CMDB and System & Network Management Systems, we are still talking about working in silos. The different participants involved in the life cycle of the datacenter use their own information sets and systems. This is a repeating process: from the owner to the architect, to the design team, to the construction manager, to the contractor, to the subcontractors, to the different operators and, ultimately, back to the owner.

Integrated processes and life cycle management

If we want to achieve general improvements during the complete life cycle of the data center, based on key performance indicators (KPIs) such as cost, quality, on-time delivery, productivity, availability and energy efficiency, better collaboration and information exchange between the different participants is needed.

BIM, BMS, DCIM, CMDB and System & Network Management Systems do have an overlap in scope but also have their own focus: life cycle, static and dynamic status information of facility, IT infrastructure and software components.

Silo Buster

We all know that one size fits all doesn’t work or is not flexible enough. So what is needed is collaboration and interoperability: getting rid of the silo approach by focussing on the exchange of information between these different systems. There is a need for modularly designed management systems with open APIs, so that customers/users can make their own choice of which job is done by which system and still have the opportunity of an easy exchange of information (retrieval or feed).

This will revolutionize the way data center information is shared, coordinated and reviewed, and will affect workflows, delivery methods and deliverables in a positive way.

Energy efficient Data Center ≠ Green Data Center

“Many IT companies are simply choosing to attach their modern information factories to some of the dirtiest sources of electricity, supplied by some of the dirtiest utilities on the planet,” says Greenpeace.

In their latest report ‘How green is your cloud?’, Greenpeace has criticized the cloud computing industry, saying that cloud providers “are all rapidly expanding without adequate regard to source of electricity, and rely heavily on dirty energy to power their clouds.”

In response to the report, Urs Hoelzle, Google’s Senior Vice President for Technical Infrastructure, said in a statement published in the New York Times that the company welcomed the Greenpeace report and believed it would intensify the industry’s focus on renewable energy. Apple and Amazon, on the other hand, raised questions about the credibility of the estimates in the Greenpeace report, which illustrates the difficulty of estimating data center power usage.

The discussion on how accurate and valid the estimates in the report are is indeed important, but the real problem the report addresses shouldn’t be missed: does an energy-efficient data center equal a green data center?

In the data center world, much attention has been given over the last two years to the use of PUE as a KPI for energy efficiency. PUE has served its goal of giving a rough indication of the energy efficiency of a data center, but it has its limitations. For example, shutting down unused servers can lead to a higher PUE. But the biggest flaw, from a green IT perspective, is that PUE has no relation to the carbon emission of the electricity being used.

This CO2 emission flaw was already addressed in 2010 by Eirikur Hrafnsson, CEO of Greenqloud (see the blog ‘Greenqloud propose green PUE’), and later by The Green Grid with a white paper on the Carbon Usage Effectiveness (CUE) metric.

Energy efficiency and carbon emission are often mixed up. You can be very energy-inefficient and still have zero carbon emission, and vice versa: very energy-efficient and still have a large carbon emission. Therefore cloud providers and data center operators must look not only at how efficiently they use electricity, but also at the sources of that electricity, a detail many companies are unwilling to disclose on their own. The energy sources of the power grid aren’t the only energy issue: a lot of data centers use diesel generators as backup, or run on generators because of sub-standard or non-existent grid connections. The claim of green cloud computing services and green data centers can only be proven if the providers are more transparent about their energy sources.
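The relationship between the two metrics can be sketched in a few lines. CUE relates total CO2 emissions caused by the data center to its IT energy use, which for grid-supplied power works out to the grid's carbon emission factor times the PUE. The figures below are illustrative assumptions, not measured data:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh


def cue(total_facility_kwh: float, it_kwh: float,
        kgco2_per_kwh: float) -> float:
    """Carbon Usage Effectiveness: kgCO2 emitted per kWh of IT energy.
    For grid power this equals the grid emission factor times the PUE."""
    return kgco2_per_kwh * pue(total_facility_kwh, it_kwh)


# A very efficient data center on a coal-heavy grid (PUE 1.2, ~0.9 kgCO2/kWh)...
coal_cue = cue(1_200_000, 1_000_000, 0.9)    # CUE = 0.9 * 1.2 = 1.08
# ...emits far more per unit of IT work than an inefficient one on
# near-carbon-free hydro power (PUE 2.0, ~0.02 kgCO2/kWh).
hydro_cue = cue(2_000_000, 1_000_000, 0.02)  # CUE = 0.02 * 2.0 = 0.04
```

With these (assumed) numbers, the "efficient" facility has roughly 25 times the carbon intensity of the "inefficient" one, which is exactly the point: PUE alone says nothing about greenness.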

If operators neglect carbon emission, don’t care because these are external costs, consider it “out of scope” for their company, or feel they are not directly responsible, it will come back to them like a boomerang. Resilience to a changing climate demands decarbonisation of the energy sources we use to ensure sustainability. If carbon emissions aren’t reduced, governments will use rigorous policy instruments to charge for these external costs.

As rightly stated in the Greenpeace report, there are several steps that can be taken to create a more sustainable data center:

  • Power purchase agreements for renewable energy; Many operators are recognizing that their influence and market power give them the opportunity and responsibility to demand clean energy investments. Operators can take charge of their electricity supply chain by signing long-term contracts to buy renewable electricity from a specific source, through a utility or renewable energy developer, via a power purchase agreement (PPA), helping to drive renewable electricity onto the grid.
  • Onsite renewable energy;  Operators can install renewable energy on site to generate power for their own operations. For many facilities however, it may be difficult technically or economically to power a significant portion of a large data center with on-site renewable energy. This of course depends on the scale of the facility and the available renewable resources. However, companies are increasingly exploring onsite investments that can help provide better protection against electricity price volatility and, with onsite solar, can help shave electricity demand at the most expensive time of the day. In one of his latest blogs Christian Belady, general manager Data Center Services Microsoft, goes one step further. He raised the question  “Why do data centers need to be connected to a dirty, expensive, unreliable electrical grid?” and gave the answer: “They don’t and they don’t want to be either. Integrating a data center directly into the power plant — what we are calling our Data Plant program — will allow a data center to pick its sustainable fuel source and shield itself from grid volatility.”
  • Location strategy for renewable energy; The current and projected supply of clean electricity varies significantly between nations and regions, and renewable energy growth is largely determined by the energy and investment policies in those places and how the utilities and electricity markets are structured. Given the scale and long-lived nature of data centers, in order to ensure that the supply of clean energy can keep pace with IT’s rapidly growing demand, companies need to make a corporate commitment to engage in energy policy decisions in regions where they establish operations.

Knowing by measuring, managing by knowing

There has been a notable absence of CUE reporting among companies. An important issue to solve is that many companies don’t know their carbon emission because they don’t measure it. And if you cannot measure it, you cannot improve it. Proper measures can only be taken if there is a clear understanding of the problem. Therefore operators must begin with monitoring and reporting the carbon intensity of their data centers under the ‘new’ Carbon Usage Effectiveness (CUE) standard.

UPDATE  May 5th

The Green Grid issued an official response to the Greenpeace report. In short, The Green Grid states that:

“Any study or initiative that raises awareness around the important issues of reducing emissions and increasing energy efficiency and sustainability in the data center and cloud computing sectors is something that The Green Grid supports” and “We welcome the public interest Greenpeace has generated around this report and also encourage the IT industry to think about the complex idea of a ‘green data center’ in a holistic manner. By properly leveraging metrics like PUE, CUE and WUE alongside other models like The Green Grid’s Data Center Maturity Model, organizations can better understand the broader picture of their energy ecosystems and take steps to become more efficient and sustainable both inside and outside of the data center. Similarly, we encourage organizations of all sizes to actively participate in this important conversation by becoming members of The Green Grid.”

Data centers are expected to consume 19% more energy

The world’s data centers are expected to consume 19% more energy in the next 12 months than they have in the past year, according to results of a global industry census conducted by DatacenterDynamics (DCD). An interesting conclusion in light of the report Jonathan Koomey released on data center electricity use in 2010, which was a follow-up to the 2008 article ‘Worldwide electricity used in data centers’.

The 2007 EPA report to Congress on data centers (US EPA 2007) predicted a little less than a doubling in total data center electricity use from 2005 to 2010 if historical trends continued. Instead, electricity used by data centers in the U.S. increased about 36 percent from 2005 to 2010, and worldwide consumption increased about 56 percent, rather than doubling.

With the DCD forecast of 19% energy growth in the next 12 months, it looks like we are back on track again.

Data centers currently consume about 31GW, the census concludes. The average total power per rack is about 4.05kW, with 58% of racks drawing up to 5kW per rack, 28% drawing from 5kW to 10kW per rack, and the rest drawing more than 10kW per rack.
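A quick back-of-the-envelope check on what these census figures imply (treating the 31GW and 4.05kW averages as given, with obvious rounding):

```python
# Census figures (assumed accurate as reported by DatacenterDynamics)
total_power_gw = 31
avg_rack_kw = 4.05

# Implied number of racks worldwide if all 31 GW flowed through racks
# at the census average of 4.05 kW each.
racks = total_power_gw * 1_000_000 / avg_rack_kw  # about 7.65 million racks

# The forecast 19% growth would add roughly another 5.9 GW of demand.
extra_gw = total_power_gw * 0.19
```

This ignores facility overhead (cooling, power distribution), so the real rack count is lower; it is only meant to give a feel for the scale behind the census numbers.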

Because energy demand is expected to rise so much, data center owners and operators are concerned about energy cost and availability; analysis of the census data concluded that this is their number-one concern.

  • 44% believe that increased energy costs will impact significantly on their data center operations in the next 12 months – this is the highest ranked issue
  • 29% are concerned about the significant impact of energy availability (or the lack of it).

Energy concerns (c) DatacenterDynamics Global Industry Census 2011

Data center monitoring is driven primarily by maintaining availability (56%), followed by reducing costs (31%); reducing environmental impact scored 13%. According to DCD, energy efficiency is monitored continuously by a minority of 42%, although an equivalent proportion monitors it less regularly. This pattern is repeated for carbon emissions and is consistent with the lower priority given to the environmental impact of the data center.

Energy monitoring (c) DatacenterDynamics Global Industry Census 2011

Despite these concerns, big data centers are still being built in areas (for example London and Amsterdam) where lack of power supply has been touted as a constraining issue for years.

For example in the London arena:

  • Telehouse West, opened last March, 7.5MW of new capacity.
  • Telecity Harbour Exchange, 6MW opening in 2 phases.

And in the Amsterdam arena:

  • Switch: 8,320 m2
  • Equinix AM3: 6,400 m2 in two phases
  • Terremark: 2,800 m2 first phase (10,000 m2 additional)

How can we explain these activities if power is in such tight supply?