Preview Data Center 2.0 – The Sustainable Data Center

Data Center 2.0: The Sustainable Data Center is an in-depth look into the steps needed to transform modern-day data centers into sustainable entities.

To get an impression of the book you can read the prologue right here.

Prologue

In large parts of the world, computers, Internet services, mobile communication, and cloud computing have become a part of our daily lives, professional and private. Information and communication technology has invaded our lives and is recognized as a crucial enabler of economic and social activities across all sectors of our society. The ability to connect anytime, anywhere to communicate, interact, and exchange data is changing the world.

During the last two decades, a digital information infrastructure has been created whose functioning is critical to our society; governmental and business processes and services depend on computers. Data centers, buildings that house computer servers along with network and storage systems, are a crucial part of this critical digital infrastructure. They are the physical manifestation of the digital economy and of the virtual and digital information infrastructure, where data is processed, stored, and transmitted.

A data center is a very peculiar and special place. It is the place where different worlds meet. It is a place where organizational (and individual) information needs and demands are translated into bits and bytes that are subsequently translated into electrons that are moved around the world. It is the place where the business, IT, and energy worlds come together. Jointly they form a jigsaw puzzle of stakeholders with different and sometimes conflicting interests and objectives that are hard to manage and control.

Electricity is the foundation of all digital information processing and digital services that are mostly provided from data centers. The quality and availability of the data center stands or falls with the quality and availability of the power supply to the data center.

For data centers, the observation has been made that the annualized costs of power-related infrastructure have, in some cases, grown to equal the annualized capital costs of the IT equipment itself. Data centers have reached the point where the electricity costs of a server over its lifetime will equal or exceed the price of the hardware. Also, it is estimated that data centers are responsible for about 2% of total world electricity consumption.

It is therefore easy to understand why the topic of electricity usage of data centers is a subject of discussion.

Electricity is still mostly generated from fossil fuel-based primary energy resources such as coal, gas, and oil. But this carbon-constrained power sector is under pressure: resilience to a changing climate makes the decarbonization of these energy sources mandatory to ensure sustainability.

From different parts of society, the sustainability of data centers is being questioned; their energy efficiency and the indirect CO2 emissions caused by their consumption of carbon-based electricity are criticized.

The data center industry is working hard on these issues. According to the common view, it comes down to implementing technical measures. The idea is that more efficient power usage of servers, storage and network components, improved utilization, and better power and cooling management in data centers will solve the problems.

This idea can be questioned. Data centers are part of complex supply chains and have many stakeholders with differing perspectives; incomplete, contradictory, and changing requirements; and complex interdependencies. In this situation there is no simple, clear definition of data center efficiency, and there is no simple right or optimal solution.

According to the Brundtland Commission of the United Nations, sustainability is “to meet the needs of the present without compromising the ability of future generations to meet their own needs.”

Given the fact that we are living in a world with limited resources and that the demand for digital infrastructure is growing exponentially, limits will be encountered. The limiting factor for future economic development is the availability and functioning of natural capital. Therefore, we need a new and better industrial model.

Creating sustainable data centers is not a technical problem but an economic problem to be solved.

A sustainable data center should be environmentally viable, economically equitable, and socially bearable.

This book takes a conceptual approach to the subject of data centers and sustainability. The proposition of the book is that we must fundamentally rethink the “data center equation” of “people, planet, profit” in order to become sustainable.

The scope of this search goes beyond the walls of the data center itself. Given the great potential of information technology to transform today’s society into one characterized by sustainability, what is the position of data centers?

The data center is the place where it all comes together: energy, IT, and societal demands and needs.

Sustainable data centers have great potential to help society optimize the use of resources and to eliminate or reduce waste of capital, human labor, and energy.

The idea is that a sustainable data center is based on economics, organization, people, and technology. This book offers multiple views and aspects of sustainable data centers to give readers a better understanding and to provoke thought on how to create them.

Creating a sustainable data center calls for a multidisciplinary approach and for different views and perspectives in order to obtain a good understanding of what is at stake.

The solution is, at the end of the day, a question of commitment.

Datacenters need another perspective on security

As stated by Intel: “Changing demands for bandwidth, processing power, energy efficiency and storage – brought on by such trends as cloud computing, big data, increased services and more mobile computing devices hitting the network – are driving the need for new architectures in the data center.”

Therefore we see that the datacenter world is making a transition from an artisanal mode of operation to an industrialized mode of operation. To make the industrialization of datacenters possible, there is a need for uniformity, standardization, and automation to get the benefits of economies of scale. One of the current big things in this datacenter transformation is DCIM.

Until recently there was a disconnect between the facility and IT infrastructure in the datacenter. To get rid of the limited visibility and control of the physical layer of the data center, we see the rise of a new kind of system: the Data Center Infrastructure Management (DCIM) system.

You could say that a DCIM system is the man in the middle, a broker between the demands of the IT world and the supply of power, cooling, etc. from the facility world. The DCIM is layered on top of the so-called SCADA system, where SCADA stands for Supervisory Control And Data Acquisition: the computerized control systems that are at the heart of modern industrial automation and control systems.
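To make this broker role concrete, here is a minimal sketch in Python. All class names, methods, and readings are hypothetical illustrations (not any vendor's API): a DCIM layer polls a SCADA-style facility interface and answers capacity questions coming from the IT side.

```python
from dataclasses import dataclass

@dataclass
class FacilityReading:
    """A point-in-time reading from the facility (SCADA) side."""
    power_draw_kw: float      # current load drawn from the feed
    power_capacity_kw: float  # rated capacity of the feed
    cooling_supply_c: float   # supply air temperature

class ScadaInterface:
    """Hypothetical stand-in for a vendor SCADA/BMS data source."""
    def read(self) -> FacilityReading:
        return FacilityReading(power_draw_kw=320.0,
                               power_capacity_kw=500.0,
                               cooling_supply_c=21.5)

class DcimBroker:
    """The 'man in the middle': translates facility supply data
    into answers to IT-side demand questions."""
    def __init__(self, scada: ScadaInterface):
        self.scada = scada

    def headroom_kw(self) -> float:
        reading = self.scada.read()
        return reading.power_capacity_kw - reading.power_draw_kw

    def can_place_server(self, server_kw: float) -> bool:
        # The IT side asks: is there enough power to add this server?
        return self.headroom_kw() >= server_kw

broker = DcimBroker(ScadaInterface())
print(broker.can_place_server(0.8))  # True while headroom allows
```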

So currently DCIM is a hot topic, and the added value of the different flavors and implementations of DCIM systems is heavily discussed.

But something is missing. The world moves rapidly towards the digital age, where information technology forms a crucial aspect of most organizational operations around the world, and where datacenters provide the very foundation of the IT services delivered. Datacenters can therefore be considered critical infrastructure: assets that are essential for the functioning of a society and economy. But how are these assets protected? Here we are not talking about the physical security of a datacenter, or how safely your business data is stored and processed in a datacenter. We are talking about the security of the facility control systems: the cooling, the power, etc.

Beware that DCIM functionality is not only about passive monitoring and dashboards but also about active control and automation. The information obtained with SCADA systems will become crucial to controlling the infrastructure side of facilities and even IT equipment. With DCIM, the traditionally standalone SCADA and Building Management Systems (BMS) get connected and integrated with IP networks and IT systems. But it also works the other way around: SCADA and BMS become accessible by means of these IP networks and IT systems. Misuse of these IP networks and IT systems thus creates the risk of a (partial) denial of service or damaged data integrity in your DCIM and SCADA/BMS systems, and with that the disabling of a critical infrastructure: the datacenter.

In most organizations, SCADA and BMS security are not yet in scope for the Chief Information Security Officer (CISO). But awareness is growing. Although not specifically focused on datacenters, the following papers are very interesting.

From the National Institute of Standards and Technology there is the Guide to Industrial Control Systems Security; from the National Cyber Security Centre of The Netherlands, the Checklist security of ICS/SCADA systems; and from Trend Micro, a white paper.

So read them, make your own checklist, and get this topic on the datacenter agenda!

Datacenters: The Need For A Monitoring Framework

For proper usage of, and collaboration between, BMS, DCIM, CMDB, etc., the use of an architectural framework is recommended.

CONTEXT

A datacenter is basically a value stack: a supply chain of stack elements where each element is a service component (People, Process, and Technology that add up to a service). For each element in the stack, the IT organization has to assure the quality as agreed upon. In essence these quality attributes are performance/capacity, availability/continuity, confidentiality/integrity, and compliance, and nowadays also sustainability. One of the greatest challenges for the IT organization was and is to coherently manage these quality attributes for the complete service stack or supply chain.

Currently a mixture of management systems is used to manage the datacenter service stack: BMS, DCIM, CMDB, and System & Network Management Systems.

GETTING RID OF THE SILOS

As explained in “Datacenters: blending BIM, DCIM, CMDB, etc.”, we are still talking about working in silos, where each of the participants involved in the life cycle of the datacenter is using its own information sets and systems. To achieve real, general improvements (instead of local optimizations), better collaboration and information exchange between the different participants is needed.

FRAMEWORK

To steer and control datacenter usage successfully, a monitoring system should be in place. Accepting the fact that the participants are using different systems, we have to find a way to improve the collaboration and information exchange between those systems. Therefore we need some kind of reference, an architectural framework.

For designing an efficient monitoring framework, it is important to assemble a coherent system of functional building blocks or service components. Loose coupling and strong cohesion, encapsulation, and the use of the Facade and Model–View–Controller (MVC) patterns are strongly recommended because of the many proprietary solutions that are involved.
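As an illustration of that encapsulation, here is a minimal Facade sketch in Python. The two vendor meter classes and their methods are invented for the example, not real products; the point is that consumers only ever see the standard interface.

```python
from abc import ABC, abstractmethod

class FacilityUsageService(ABC):
    """Standard service interface the rest of the framework sees."""
    @abstractmethod
    def power_usage_kw(self) -> float: ...

# --- Hypothetical proprietary vendor APIs (invented for this sketch) ---
class VendorAMeter:
    def get_load_watts(self) -> int:
        return 412_000

class VendorBMeter:
    def read(self) -> dict:
        return {"load": 0.39, "unit": "MW"}

# --- Facades: encapsulate each proprietary API behind the standard ---
class VendorAFacade(FacilityUsageService):
    def __init__(self, meter: VendorAMeter):
        self.meter = meter
    def power_usage_kw(self) -> float:
        return self.meter.get_load_watts() / 1000.0

class VendorBFacade(FacilityUsageService):
    def __init__(self, meter: VendorBMeter):
        self.meter = meter
    def power_usage_kw(self) -> float:
        return self.meter.read()["load"] * 1000.0

# Consumers depend only on FacilityUsageService, so a meter can be
# "ripped and replaced" without touching the reporting side.
for svc in (VendorAFacade(VendorAMeter()), VendorBFacade(VendorBMeter())):
    print(f"{svc.power_usage_kw():.0f} kW")
```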

BUILDING BLOCKS

Based on an earlier blog about energy monitoring, a short description of the most common building blocks is given below:

  • Most vendors have their own proprietary APIs to interface with the metering devices. Because metering differs within and between data centers, these differences should be encapsulated in standard ‘Facility usage services’: services for the primary, secondary, and tertiary power supply and usage, the cooling, and the air handling.
  • For the usage of the IT infrastructure (servers, storage, and network components) we have the same kind of issues, so the same recipe must be used: encapsulation of proprietary APIs in standard ‘IT usage services’.
  • Environmental conditions outside the data center (the weather) have their influence on the data center, so proper information about them must be made available by a dedicated Outdoor service component.
  • For a specific data center, a DC Usage Service Bus must be available as a common interface for exchanging usage information with reporting systems (see the sketch after this list).
  • The DC Data Store is a repository (Operational Data Store or Data Warehouse) for datacenter usage data across data centers.
  • The Configuration Management Database(s) (CMDB) is a repository with the system configuration information of the facility infrastructure and the IT infrastructure of the data centers.
  • The Manufacturers’ specification databases store specifications/claims of components as provided by the manufacturers.
  • The IT capacity database stores the capacity (processing power and storage) that is available in a certain time frame.
  • The IT workload database stores the workload (processing power and storage) that must be processed in a certain time frame.
  • The DC Policy Base is a repository with all the policies, rules, targets, and thresholds concerning datacenter usage.
  • The Enterprise DC Usage Service Bus must be available as a common interface for exchanging policies, workload, capacity, CMDB, manufacturers’, and usage information of the involved data centers with reporting systems.
  • The Composite services deliver different views and reports of the energy usage by assembling information from the different basic services by means of the Enterprise Bus.
  • The DC Usage Portal is the presentation layer for the different stakeholders who want to know something about the usage of the datacenter.
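To make the service bus idea concrete, here is a minimal sketch of a standardized usage message published on a DC Usage Service Bus. The message fields and the in-memory bus are assumptions for illustration only; a real deployment would use an actual message broker and an agreed standard data model.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Callable

@dataclass
class UsageMessage:
    """Standardized usage reading exchanged over the bus (illustrative schema)."""
    datacenter: str
    source: str        # e.g. 'facility.power.primary' or 'it.server.rack12'
    metric: str        # e.g. 'power_kw'
    value: float
    timestamp: str     # ISO 8601, UTC

class DcUsageServiceBus:
    """Toy in-memory publish/subscribe bus standing in for a real broker."""
    def __init__(self):
        self.subscribers: list[Callable[[str], None]] = []

    def subscribe(self, handler: Callable[[str], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, msg: UsageMessage) -> None:
        payload = json.dumps(asdict(msg))  # one shared wire format
        for handler in self.subscribers:
            handler(payload)

bus = DcUsageServiceBus()
bus.subscribe(lambda payload: print("report received:", payload))
bus.publish(UsageMessage(
    datacenter="DC-1",
    source="facility.power.primary",
    metric="power_kw",
    value=412.0,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```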

[Figure: DC Monitoring Framework]

ARCHITECTURE APPROACH

Usage of an architectural framework (reference architecture) is a must to get a monitoring environment working. The modular approach, focused on standard interfaces, gives the opportunity to “rip and replace” components. It also makes it possible to extend the framework with other service components. The service bus provides a standard exchange of data (based on messages) between the applications and prevents the creation of dedicated, proprietary point-to-point communication channels. To get this framework working, a standard data model is also mandatory.

Datacenters: blending BIM, DCIM, CMDB, etc.

How do you manage the life cycle of a datacenter in a rapidly changing environment where so many stakeholders are involved?

Context

A datacenter is a very special place where three different worlds and groups of people meet: the facility group, whose focus is on the building; the IT infrastructure group, focused on the IT equipment housed within it; and the IT applications group, focused on the applications that run on that IT equipment. All with different objectives and incentives.

This worked fine when changes were highly predictable and came relatively slowly. But times have changed. Business demands drive the usage of datacenters, and these demands have changed: large dynamic data volumes, stringent service-level demands, ever-higher application availability requirements, and changing environmental requirements must be accommodated more swiftly than ever.

Business demands and rapidly advancing information technology have led to constant replacement of IT infrastructure. This pace of replacement is not in sync with changes to the site infrastructure: the components for power, cooling, and air handling last longer (about 10 years) than IT infrastructure (two to five years). The site infrastructure therefore often ends up mismatched with the facility demands of the IT infrastructure. While technically feasible, changing the site infrastructure in currently operational data centers may not always make sense. For some data centers, the cost savings do not justify the cost of renewing the site infrastructure. For others, the criticality of their function to the business simply prohibits downtime and inhibits facility managers from making major overhauls to realise improvements. This makes it difficult to continually optimise data centers in such a rapidly changing environment.

IT Management

One of the most significant challenges for the IT organisation was and is to coherently manage the quality attributes for the complete IT service stack or IT supply chain (including the facility / site infrastructure).

The IT department has long tried to manage the IT environment with System & Network Management Systems and Configuration Management Databases (CMDBs), while the Facility department uses Building Management Systems (BMS) to monitor and control the equipment in an entire building. Until recently there was a disconnect between the facility and IT infrastructure. To get rid of the limited visibility and control of the physical layer of the data center, we see the rise of a new kind of system: the Data Center Infrastructure Management (DCIM) system.

But there is still another gap to be bridged. The power and cooling capacity and resources of a data center are largely set by the original MEP (Mechanical, Electrical, Plumbing) design and the choice of data center location. The Facility/MEP design sets an ‘invisible’ boundary for the IT infrastructure. And just as in the IT world, in the Facility world there is knowledge and information loss between the design, build, and production/operation phases.

Knowledge Gaps

BIM

To solve this issue, the Facility world is increasingly using Building Information Model (BIM) systems. BIM is a model-centric repository that supports the business process of planning, designing, building, and maintaining a building. In other words, it is a system to facilitate coordination, communication, analysis and simulation, project management and collaboration, asset management, and maintenance and operations throughout the building life cycle.

The transition to a BIM-centric design approach fundamentally changes the Architecture, Engineering, and Construction (AEC) process and workflow in the way project information is shared, coordinated, and reviewed. But it also extends the workflow by integrating one of the most important players in the AEC workflow: the operators.

Dynamic information about the building, such as sensor measurements and control signals from the building systems, can be incorporated within BIM to support analysis of building operation and maintenance.

Working in Silos

Although some local improvements in sharing information are and can be made with BIM, DCIM, CMDB, and System & Network Management Systems, we are still talking about working in silos. The different participants involved in the life cycle of the datacenter are using their own information sets and systems. This is a repeating process: from the owner to the architect, to the design team, to the construction manager and the contractor, to the subcontractors, to the different operators and, ultimately, back to the owner.

Integrated processes and life cycle management

If we want to achieve general improvements during the complete life cycle of the data center, based on key performance indicators (KPIs) such as Cost, Quality, On-time delivery, Productivity, Availability, and Energy efficiency, better collaboration and information exchange between the different participants is needed.

BIM, BMS, DCIM, CMDB, and System & Network Management Systems overlap in scope but also have their own focus: the life cycle, and the static and dynamic status information, of facility, IT infrastructure, and software components.

Silo Buster

We all know that one size fits all doesn’t work and/or is not flexible enough. So what is needed is collaboration and interoperability: getting rid of the silo approach by focusing on the exchange of information between these different systems. There is a need for modularly designed management systems with open APIs, so that customers/users can choose which job is done by which system and still have an easy exchange of information (retrieval or feed).

This will revolutionize the way data center information is shared, coordinated, and reviewed and will affect workflows, delivery methods, and deliverables in a positive way.

The resemblance between the Power Grid and the Data Center

At the Datacentres 2012 conference in Nice, there were some very interesting discussions about the interrelation and resemblance between the power grid and the data center.

Christian Belady opened the conference with a keynote speech in which he raised the question: why are we separating power generation from the data center?

Why do we generate power in a separate power plant and struggle to get this power via transmission and distribution networks to a separate data center, where data is generated by computers?

Why don’t we instead bring the data generation (computers) to the power plant and get rid of the transmission and distribution grid?

The business case for this transformation is based on the difference in price per kilometer between a power grid network and a glass fiber network.

Belady emphasized thinking out of the box and questioning what is really necessary to run a data center. But he also suggested using ideas and concepts from other industries. He pointed at the resemblance between managing a power grid and managing a data center in terms of variable workload, capacity management, peak shaving, etc.

That is indeed a very interesting thought.

The rise of electricity consumption is spectacular: from the seventies onwards, worldwide growth has been more than 200%. The growing dynamics in supply and demand of electric energy put a lot of pressure on the current power grid. In a power grid, demand and supply of power must be equal, in equilibrium, or there is the risk that the infrastructure shuts down. Transmission losses and the level of congestion on any particular part of the grid influence the dispatch of the generated units of electricity. For a power grid, the load, the required amount of electric power, falls into three categories: base load, intermediate load, and peak load. Base load refers to a relatively constant output of power plants over a period. In contrast, peak load refers to surges in electricity demand that occur at specific, usually predictable periods, such as the evening peak. Finally, intermediate load refers to the fluctuating demand for electricity throughout the day.

The question is: how must the current power grid handle these new demands and new dynamics in real time?

But the same can be said about the data centers and networks or “IT grid”.

The rise of data consumption is spectacular: from the eighties onwards, worldwide growth has been exponential. The growing dynamics in supply and demand of data (cloud computing) put a lot of pressure on the current IT grid. In an IT grid, demand and supply must likewise be in equilibrium, or there is the risk that the infrastructure shuts down (time-outs because of latency). Transmission losses and the level of congestion on any particular part of the IT grid influence the dispatch of the generated units of data. For an IT grid we can also differentiate the load, the required amount of data processing, into three categories: base load, intermediate load, and peak load. Base load refers to a relatively constant output of data centers over a period. In contrast, peak load refers to surges in data demand that occur at specific, usually predictable periods, such as the midday peak. Finally, intermediate load refers to the fluctuating demand for data throughout the day.
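As a toy illustration of these three categories, the sketch below splits a demand series into base, intermediate, and peak load. The thresholds (the minimum of the series as base load, roughly the 90th percentile as the peak boundary) are arbitrary assumptions chosen for the example, not industry definitions; the same classification applies whether the series is kilowatts or requests per second.

```python
# Classify each hour of a demand series (kW or requests/s alike)
# into base, intermediate, and peak load.
demand = [310, 305, 300, 320, 410, 520, 610, 640, 630, 540, 420, 330]

base = min(demand)  # base load: the always-present demand
sorted_d = sorted(demand)
peak_threshold = sorted_d[int(0.9 * (len(sorted_d) - 1))]  # ~90th percentile

for hour, load in enumerate(demand):
    if load >= peak_threshold:
        category = "peak"
    elif load > base:
        category = "intermediate"
    else:
        category = "base"
    print(f"hour {hour:2d}: {load:4d} -> {category}")
```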

The question is: how must the current IT grid handle these new demands and new dynamics in real time?

There is the issue in the data center of how to service, provision, and organize, in an (energy-)efficient way, the base load, intermediate load, and peak load. The importance of capacity management is growing, just as the need for control and administration. As the data center industry relies increasingly on information to operate the data center system, two infrastructures must now be managed: not only the data center infrastructure, but also the information infrastructure for control and coordination. This need can be seen in the rising interest in topics like data center automation, data center infrastructure management (DCIM), and service orchestration and management. This is also the point where the data center industry can learn from the power industry, which has dealt with these issues for almost a century and is now transforming the current power grid into a Smart Grid to deal with the new demands and new dynamics.