How do you manage the life cycle of a data center in a rapidly changing environment where so many stakeholders are involved?
A data center is a special place where three different worlds and groups of people meet: the facility group, whose focus is on the building; the IT infrastructure group, focused on the IT equipment housed within it; and the IT applications group, focused on the applications that run on that equipment. All have different objectives and incentives.
This worked fine when changes were highly predictable and came relatively slowly. But times have changed. Business demands drive the usage of data centers, and these demands have changed: large dynamic data volumes, stringent service-level demands, ever-higher application availability requirements and changing environmental requirements must be accommodated more swiftly than ever.
Business demands and rapidly advancing information technology have led to constant replacement of IT infrastructure. This pace of replacement is not in sync with changes to the site infrastructure: the power, cooling and air-handling components last much longer (around 10 years) than the IT infrastructure (two to five years). The site infrastructure therefore often ends up mismatched with the facility demands of the IT infrastructure. While technically feasible, changing the site infrastructure of an operational data center may not always make sense. For some data centers, the cost savings do not justify the cost of renewing the site infrastructure. For others, the criticality of their function to the business simply prohibits downtime and inhibits facility managers from making major overhauls to realise improvements. This makes it difficult to continually optimise data centers in such a rapidly changing environment.
One of the most significant challenges for the IT organisation was, and still is, to coherently manage the quality attributes of the complete IT service stack or IT supply chain, including the facility/site infrastructure.
The IT department already tries to manage the IT environment with system and network management systems and Configuration Management Databases (CMDBs), while the Facility department uses Building Management Systems (BMS) to monitor and control the equipment in an entire building. Until recently, however, there was a disconnect between the facility and the IT infrastructure. To get rid of this limited visibility and control of the physical layer of the data center, we see the rise of a new kind of system: the Data Center Infrastructure Management (DCIM) system.
But there is still another gap to be bridged. The power and cooling capacity and resources of a data center are largely set by the original MEP (Mechanical, Electrical, Plumbing) design and the choice of data center location. The facility/MEP design sets an 'invisible' boundary for the IT infrastructure. And just as in the IT world, in the facility world there is knowledge and information loss between the design, build and production/operation phases.
To solve this issue, the facility world increasingly uses Building Information Modelling (BIM) systems. BIM is a model-centric repository that supports the business process of planning, designing, building and maintaining a building. In other words, it is a system to facilitate coordination, communication, analysis and simulation, project management and collaboration, asset management, and maintenance and operations throughout the building life cycle.
The transition to a BIM-centric design approach fundamentally changes the Architecture, Engineering and Construction (AEC) process and workflow in the way project information is shared, coordinated and reviewed. But it also extends the workflow by integrating one of the most important players in the AEC workflow: the operators.
Dynamic information about the building, such as sensor measurements and control signals from the building systems, can be incorporated within BIM to support analysis of building operation and maintenance.
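As a minimal sketch of this idea, the snippet below attaches time-stamped sensor readings to BIM elements by their GUIDs, so operational data can be analysed in the context of the building model. All names here (`SensorReading`, the GUID, the metric name) are illustrative assumptions, not part of any BIM standard or product API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    # Hypothetical record: each reading references a BIM element by its GUID,
    # e.g. a CRAC unit, so measurements stay linked to the building model.
    bim_element_guid: str
    metric: str            # e.g. "supply_air_temp_c"
    value: float
    timestamp: datetime

def average_for_element(readings, guid, metric):
    """Average one metric over all readings attached to one BIM element."""
    values = [r.value for r in readings
              if r.bim_element_guid == guid and r.metric == metric]
    return sum(values) / len(values) if values else None

readings = [
    SensorReading("3cUkl32yn9qRSPvBJVyWYp", "supply_air_temp_c", 18.2,
                  datetime(2015, 6, 1, 12, 0, tzinfo=timezone.utc)),
    SensorReading("3cUkl32yn9qRSPvBJVyWYp", "supply_air_temp_c", 18.6,
                  datetime(2015, 6, 1, 12, 5, tzinfo=timezone.utc)),
]
avg = average_for_element(readings, "3cUkl32yn9qRSPvBJVyWYp", "supply_air_temp_c")
print(round(avg, 1))  # → 18.4
```

The design choice worth noting is the stable element GUID as the join key: the BIM model describes what the asset is and where it sits, while the operational systems only need to tag their measurements with that identifier.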
Working in Silos
Although some local improvements in sharing information are and can be made with BIM, DCIM, CMDBs and system and network management systems, we are still talking about working in silos. The different participants involved in the life cycle of the data center use their own information sets and systems. This is a repeating process: from the owner to the architect, to the design team, to the construction manager, to the contractor and subcontractors, to the different operators and, ultimately, back to the owner.
Integrated processes and life cycle management
If we want to achieve general improvements during the complete life cycle of the data center, based on key performance indicators (KPIs) such as cost, quality, on-time delivery, productivity, availability and energy efficiency, better collaboration and information exchange between the different participants are needed.
BIM, BMS, DCIM, CMDB and system and network management systems overlap in scope, but each has its own focus: life cycle, static and dynamic status information of facility, IT infrastructure and software components.
We all know that one size fits all doesn't work and/or is not flexible enough. So what is needed is collaboration and interoperability: getting rid of the silo approach by focussing on the exchange of information between these different systems. There is a need for modularly designed management systems with open APIs, so that customers/users can choose which job is done by which system and still be able to exchange information easily (retrieval or feed).
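To make the interoperability idea concrete, the sketch below shows a thin translation layer that maps an asset record exported from a DCIM tool onto a configuration item (CI) for a CMDB. No standard schema is implied by the article; every field name here (`asset_tag`, `ci_name`, and so on) is an assumption chosen for illustration.

```python
import json

def dcim_to_cmdb(dcim_asset: dict) -> dict:
    """Map a (hypothetical) DCIM asset record onto a (hypothetical) CMDB CI.

    A real integration would talk to both systems' open APIs; here the
    exchange format is plain JSON to keep the example self-contained.
    """
    return {
        "ci_name": dcim_asset["asset_tag"],
        "ci_class": "server" if dcim_asset["type"] == "rack_server" else dcim_asset["type"],
        # Collapse the physical placement into one location string.
        "location": f'{dcim_asset["room"]}/{dcim_asset["rack"]}/U{dcim_asset["rack_unit"]}',
        "power_draw_watts": dcim_asset.get("measured_power_w"),
    }

# A DCIM export as it might arrive over an open API (illustrative data).
dcim_export = json.loads("""{
    "asset_tag": "SRV-0042",
    "type": "rack_server",
    "room": "DC1-HALL-A",
    "rack": "R12",
    "rack_unit": 17,
    "measured_power_w": 310
}""")

ci = dcim_to_cmdb(dcim_export)
print(ci["ci_name"], ci["location"])  # → SRV-0042 DC1-HALL-A/R12/U17
```

The point is not the field names but the modularity: as long as each system exposes its records through an open API, such a mapping can be owned and adapted by the customer rather than being locked inside one vendor's product.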
This will revolutionise the way data center information is shared, coordinated and reviewed, and will affect workflows, delivery methods and deliverables in a positive way.