Preview: Data Center 2.0 – The Sustainable Data Center

Data Center 2.0: The Sustainable Data Center is an in-depth look at the steps needed to transform modern-day data centers into sustainable entities.

To get an impression of the book, you can read the prologue right here.

Prologue

In large parts of the world, computers, Internet services, mobile communication, and cloud computing have become part of our daily lives, professional and private. Information and communication technology has invaded our lives and is recognized as a crucial enabler of economic and social activities across all sectors of our society. The opportunity to be connected anytime and anywhere, to communicate, interact, and exchange data, is changing the world.

During the last two decades, a digital information infrastructure has been created whose functioning is critical to our society; governmental and business processes and services depend on computers. Data centers, buildings that house computer servers along with network and storage systems, are a crucial part of this critical digital infrastructure. They are the physical manifestation of the digital economy and of the virtual and digital information infrastructure, where data is processed, stored, and transmitted.

A data center is a very peculiar and special place. It is the place where different worlds meet. It is a place where organizational (and individual) information needs and demands are translated into bits and bytes that are subsequently translated into electrons that are moved around the world. It is the place where the business, IT, and energy worlds come together. Jointly they form a jigsaw puzzle of stakeholders with different and sometimes conflicting interests and objectives that are hard to manage and control.

Electricity is the foundation of all digital information processing and of the digital services that are mostly provided from data centers. The quality and availability of the data center stand or fall with the quality and availability of its power supply.

For data centers, the observation has been made that the annualized costs of power-related infrastructure have, in some cases, grown to equal the annualized capital costs of the IT equipment itself. Data centers have reached the point where the electricity costs of a server over its lifetime will equal or exceed the price of the hardware. It is also estimated that data centers are responsible for about 2% of total world electricity consumption.
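To make the order of magnitude concrete, here is a back-of-envelope sketch in Python; the wattage, lifetime, electricity price, and hardware price are illustrative assumptions, not figures from the book:

    # Rough lifetime electricity cost of a single server versus its
    # purchase price. All numbers are illustrative assumptions.
    POWER_W = 400          # assumed average draw, incl. a share of cooling overhead
    LIFETIME_YEARS = 4     # assumed service life
    PRICE_PER_KWH = 0.12   # assumed electricity tariff (USD)
    HARDWARE_COST = 2000   # assumed purchase price (USD)

    energy_kwh = POWER_W / 1000 * LIFETIME_YEARS * 365 * 24
    electricity_cost = energy_kwh * PRICE_PER_KWH

    print(f"Lifetime energy:  {energy_kwh:,.0f} kWh")
    print(f"Electricity cost: ${electricity_cost:,.0f}")   # ~ $1,682
    print(f"Hardware cost:    ${HARDWARE_COST:,}")
    # With these assumptions, the power bill is already in the same range
    # as the hardware price, matching the observation above.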

It is therefore easy to understand why the topic of electricity usage of data centers is a subject of discussion.

Electricity is still mostly generated from fossil fuel-based primary energy resources such as coal, gas, and oil. But this carbon-constrained power sector is under pressure: resilience to a changing climate makes the decarbonization of these energy sources mandatory to ensure sustainability.

From different parts of society, the sustainability of data centers is being questioned. Their energy efficiency and the indirect CO2 emissions caused by the consumption of carbon-based electricity are criticized.

The data center industry is working hard on these issues. According to the common view, it comes down to implementing technical measures: more efficient power usage of servers, storage, and network components, improved utilization, and better power and cooling management in data centers will solve the problems.

This idea can be questioned. Data centers are part of complex supply chains and have many stakeholders with differing perspectives; incomplete, contradictory, and changing requirements; and complex interdependencies. In this situation there is no simple, clear definition of data center efficiency, and there is no single right or optimal solution.

According to the Brundtland Commission of the United Nations, sustainability is “to meet the needs of the present without compromising the ability of future generations to meet their own needs.”

Given that we live in a world with limited resources and that the demand for digital infrastructure is growing exponentially, limits will be encountered. The limiting factor for future economic development is the availability and functioning of natural capital. Therefore, we need a new and better industrial model.

Creating sustainable data centers is not a technical problem but an economic problem to be solved.

A sustainable data center should be environmentally bearable, economically viable, and socially equitable.

This book takes a conceptual approach to the subject of data centers and sustainability. The proposition of the book is that we must fundamentally rethink the “data center equation” of “people, planet, profit” in order to become sustainable.

The scope of this search goes beyond the walls of the data center itself. Given the great potential of information technology to transform today’s society into one characterized by sustainability, what is the position of data centers?

The data center is the place where it all comes together: energy, IT, and societal demands and needs.

Sustainable data centers have great potential to help society optimize the use of resources and to eliminate or reduce waste of capital, human labor, and energy.

The idea is that a sustainable data center is based on economics, organization, people, and technology. This book offers multiple views on and aspects of sustainable data centers, to give readers a better understanding and to provoke thought on how to create them.

Creating a sustainable data center calls for a multidisciplinary approach and for different views and perspectives in order to obtain a good understanding of what is at stake.

The solution is, at the end of the day, a question of commitment.

Virtualization Executive Summit Event

Last week I attended (and presented at) the Virtualization Executive Summit, held on 26-27 April 2010 in the Netherlands and organized by Tech:Touchstone. Now in its fourth edition, this European summit is for senior IT executives, analysts, and vendors to network, discuss, and learn about the latest developments in virtualization technology. It was a great event, attended by senior IT executives from all sorts of organizations, with a large proportion of the end-user interest being in desktop virtualization and data center virtualization. It was also a very well-organized and thought-out event, with each of the delegates having an individual timetable based on their areas of interest.

From my observations, there were three recurring themes in the presentations and the discussions:

  • the issue of available knowledge and the cooperation between the different groups of technicians involved in building and maintaining data centers
  • the need for and use of standard building blocks when building and operating a data center
  • what you actually get when virtualizing your infrastructure

The general feeling was that communication and collaboration between the site, storage, network, and server (both Unix and Windows flavors) designers and engineers leave room for improvement. This should not just be a wish; in fact, it is mandatory if you want a consistent and coherent data center infrastructure that delivers as promised, or if you want to modernize your services toward Infrastructure as a Service or Platform as a Service (IaaS/PaaS).

This issue fits very well with an article in eWeek Europe. In a survey of the webinar’s audience, 50 percent said their main source of efficiency ideas is their own internal experts, and only 23 percent said they would look first at the EU Code of Conduct for Data Centre Efficiency.

“Although there are many good ways to improve the efficiency of data centres, most operators are relying very heavily on their own internal knowledge and on the way it has always been done, according to presenters in Efficiency in Data Center Design a webinar chaired by eWEEK Europe, as part of BrightTalk’s Efficient Data Center summit.”

The design attitude of data center operators, and of the underlying site, storage, network, and server groups, is one of ‘acting in splendid isolation’. The introduction of virtualization technology puts a lot of pressure on these development and maintenance groups to change this attitude, because the way it has always been done no longer works. New knowledge must be built up rapidly, and new ideas and solutions have to be found for measuring efficiency and effectiveness. The traditional division of responsibility and accountability is also subject to change with the introduction of virtualization: who is responsible for what if everything is virtualized?

The interdependency between site, network, storage, and server infrastructures is such that, although the usage of standard building blocks can help, a more holistic approach to designing and maintaining a data center is needed. The data center is one complex system and should be approached as such. This isn’t a new idea: already in 1998, Steve Traugott gave a presentation, Bootstrapping an Infrastructure, at the Twelfth LISA conference about treating an infrastructure as a single large distributed virtual machine. A spin-off of this way of thinking can be found at Infrastructures.Org. A more recent initiative comes from Data Center Pulse (DCP), a non-profit data center industry community founded on the principles of sharing best practices among its membership, which is currently working on a Standardized Data Center Stack Framework Proposal. The goals of the Stack are:

  • Treat the data center as a common system which can be easily described and measured.
  • Provide a common framework to describe, communicate, and innovate data center thinking between owner/operator peers and the industry.

With siloed decision-making, the measurement and accountability issues, and the absence of true usage and cost analysis, inefficiency becomes the rule. The promise of virtualization, becoming more flexible and therefore more effective and efficient, then won’t hold and/or can’t be justified.

For true usage and cost analysis, too, you get the feeling of a “not invented here” and/or “reinventing the wheel” syndrome. There is another initiative tackling the analysis issue: the open source Open Data Center Measure of Efficiency (OpenDCME). In this model, 16 KPIs that span the data center are used to measure data center efficiency. As stated: “This first version of the OpenDCME model is based on, among others, the EU Code of Conduct for Data Centres best practices in combination with the feedback of applying the model to a large number of data centers.” Mansystems, a European IT specialist in service management, consultancy & support, created and released OpenDCME.

The observed issues have already been picked up in the market by initiatives like the DatacenterPulse Stack and OpenDCME. But there are also technical solutions: automation of IT infrastructure services and delivery by means of orchestration. Orchestration describes the automated arrangement, coordination, and management of complex computer systems, middleware, and services as part of the ‘Dynamic Data Center’. These workflow-like solutions should make operating a data center much easier. The funny thing is that although there was a consistent articulation of issues at this summit, I didn’t hear anything about orchestration. It looks like large-scale virtualization is still for the early adopters in a lot of organizations, and orchestration is just a step too far at this moment. And in the end it is not the tools, methods, or solutions that make the difference, but the people who communicate and collaborate effectively.
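Since orchestration never came up at the summit, a minimal sketch may help make the concept concrete. Everything below (the step names, functions, and request format) is hypothetical and for illustration only, not any particular product’s API:

    # A toy orchestrator: run the provisioning steps of the different
    # silos (storage, network, server, operations) in one automated,
    # ordered workflow instead of via handovers between teams.

    def allocate_storage(request):
        print(f"storage : allocating {request['disk_gb']} GB")

    def configure_network(request):
        print(f"network : attaching VLAN {request['vlan']}")

    def deploy_vm(request):
        print(f"server  : deploying VM '{request['name']}'")

    def register_monitoring(request):
        print(f"ops     : monitoring enabled for '{request['name']}'")

    WORKFLOW = [allocate_storage, configure_network, deploy_vm, register_monitoring]

    def provision(request):
        # The orchestrator's job is coordination: the right steps,
        # in the right order, without manual handovers.
        for step in WORKFLOW:
            step(request)

    provision({"name": "web-01", "disk_gb": 50, "vlan": 120})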


Unifying ideas and initiatives: Data Center Stack Framework & OpenDCME

The current indexes for data center performance, such as DCiE, EUE, and PUE, are not sufficient to drive data center efficiency. These indexes focus only on the power or energy consumption of the facilities; each metric in itself says nothing about how efficient a data center really is. In order to drive and improve efficiency, a common framework that can describe any data center, anywhere, doing anything is required. The next step is to apply industry-established metrics for each block that is running in the data center. The combination of a framework and the metrics can form the basis of real data center performance monitoring.
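To illustrate the point, the sketch below computes PUE and DCiE from their standard definitions for two hypothetical facilities; the energy figures and the “useful work” numbers are assumptions for illustration:

    # PUE  = total facility energy / IT equipment energy
    # DCiE = IT equipment energy / total facility energy = 1 / PUE

    def pue(total_kwh, it_kwh):
        return total_kwh / it_kwh

    def dcie(total_kwh, it_kwh):
        return it_kwh / total_kwh

    # Two hypothetical facilities with identical energy profiles...
    facilities = {
        "A": {"total": 1_500_000, "it": 1_000_000, "useful_work": 100},
        "B": {"total": 1_500_000, "it": 1_000_000, "useful_work": 10},
    }

    for name, dc in facilities.items():
        print(f"DC {name}: PUE={pue(dc['total'], dc['it']):.2f}  "
              f"DCiE={dcie(dc['total'], dc['it']):.0%}  "
              f"useful work={dc['useful_work']}")
    # Both score PUE 1.50 / DCiE 67%, yet B delivers a tenth of the work:
    # the index says nothing about how well the IT energy itself is used.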

And here, two things come together.

Data Center Pulse (DCP), a non-profit data center industry community founded on the principles of sharing best practices among its membership, is working on a Standardized Data Center Stack Framework Proposal. The goals of the Stack are:

  • Treat the data center as a common system which can be easily described and measured.
  • Provide a common framework to describe, communicate, and innovate data center thinking between owner/operator peers and the industry.

So the aim is simple: provide one common framework that will describe any data center, anywhere, doing anything. The next step is to apply industry-established metrics for each block that is running in the data center.
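As a minimal sketch of that layered idea, the snippet below describes a data center as a list of blocks with one metric attached to each; the layer names, metrics, and values are illustrative assumptions, not the official DCP stack definition:

    # Describe the data center as a common set of blocks and attach an
    # industry metric to each block (illustrative values only).
    STACK = [
        {"layer": "facility",       "metric": "PUE",                "value": 1.6},
        {"layer": "platform",       "metric": "server utilization", "value": 0.35},
        {"layer": "virtualization", "metric": "VMs per host",       "value": 12},
        {"layer": "application",    "metric": "transactions/kWh",   "value": 420},
    ]

    # One shared description lets owner/operators compare and communicate
    # per-block performance instead of relying on a single power index.
    for block in STACK:
        print(f"{block['layer']:<14} {block['metric']:<20} {block['value']}")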

[Figure: Datacenter Pulse Stack Framework]

Another initiative is the open source Open Data Center Measure of Efficiency (OpenDCME). In this model, 16 KPIs that span the data center are used to measure data center efficiency. As stated: “This first version of the OpenDCME model is based on, amongst others, the EU Code of Conduct for Data Centres best practices in combination with the feedback of applying the model to a large number of data centers.” Mansystems, a European IT specialist in service management, consultancy & support, created and released OpenDCME. The proposed measures belong to the community and are open for contribution under a Creative Commons license agreement. The model consists of four domains:

  1. the IT assets that are located in the data center,
  2. the efficiency of those IT assets,
  3. the availability, performance, and capacity of the IT assets, and
  4. the efficiency of data center IT processes.

The radar plot shown below presents the 4 domains and the 16 KPIs (4 per domain). The OpenDCME model, in its current version, does not tell you HOW to measure the 16 KPIs.

[Figure: OpenDCME model radar plot]
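For readers who want to reproduce this kind of radar plot, here is a minimal matplotlib sketch; the KPI labels are placeholders and the scores are randomly generated, since the model itself does not prescribe the values:

    # Illustrative radar plot of 16 KPIs, 4 per OpenDCME domain.
    import numpy as np
    import matplotlib.pyplot as plt

    domains = ["IT assets", "IT asset efficiency",
               "Availability/Performance/Capacity", "IT processes"]
    labels = [f"{d}\nKPI {i}" for d in domains for i in range(1, 5)]
    scores = np.random.uniform(0.3, 0.9, len(labels))  # assumed 0-1 scores

    angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
    angles = np.concatenate([angles, angles[:1]])      # close the polygon
    values = np.concatenate([scores, scores[:1]])

    ax = plt.subplot(polar=True)
    ax.plot(angles, values)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(labels, fontsize=6)
    ax.set_ylim(0, 1)
    ax.set_title("OpenDCME-style radar plot (illustrative data)")
    plt.show()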

Comparing the Stack Framework and OpenDCME model initiatives, you can see that the two are complementary. Bringing these two initiatives together can accelerate the development of performance monitoring and management of data centers.

Let’s see what happens…
