Virtualization Executive Summit Event

Last week I attended (and presented at) the Virtualization Executive Summit, held on 26-27 April 2010 in the Netherlands and organized by Tech:Touchstone. Now in its fourth edition, this European summit brings together senior IT executives, analysts and vendors to network, discuss and learn about the latest developments in virtualization technology. It was a great event, attended by senior IT executives from all sorts of organizations, with a large proportion of the end-user interest going to Desktop Virtualization and Data Center Virtualization. It was also a very well-organized and well-thought-out event, with each delegate having an individual timetable based on their areas of interest.

From my observation there were three recurring themes in the presentations and the discussions:

  • the issue of available knowledge and of cooperation between the different groups of technicians involved in building and maintaining data centers
  • the need for, and use of, standard building blocks when building and operating a data center
  • what you actually get when virtualizing your infrastructure

The general feeling was that communication and collaboration between the site, storage, network and server (both Unix and Windows flavors) designers and engineers leave room for improvement. This should not merely be a wish; in fact it is mandatory if you want a consistent and coherent data center infrastructure that delivers as promised, or if you want to modernize your services toward Infrastructure as a Service or Platform as a Service (IaaS/PaaS).

This issue fits very well with an article in eWeek Europe about a webinar on data center efficiency. In a survey of the webinar's audience, 50 percent said their main source of efficiency ideas is their own internal experts, and only 23 percent said they would look first at the EU Code of Conduct for Data Centre Efficiency.

“Although there are many good ways to improve the efficiency of data centres, most operators are relying very heavily on their own internal knowledge and on the way it has always been done, according to presenters in Efficiency in Data Center Design a webinar chaired by eWEEK Europe, as part of BrightTalk’s Efficient Data Center summit.”

The design attitude of data center operators, and of the underlying site, storage, network and server groups, is one of 'acting in splendid isolation'. The introduction of virtualization technology puts a lot of pressure on these development and maintenance groups to change this attitude, because the way it has always been done no longer works. New knowledge must be built up rapidly, and new ideas and solutions have to be found for measuring efficiency and effectiveness. The traditional division of responsibility and accountability is also subject to change with the introduction of virtualization technology. Who is responsible for what if everything is virtualized?

The interdependency between site, network, storage and server infrastructures is such that, although the use of standard building blocks can help, a more holistic approach to designing and maintaining a data center is needed. The data center is one complex system and should be approached as such. This isn't a new idea. Back in 1998, Steve Traugott gave a presentation, Bootstrapping an Infrastructure, at the Twelfth LISA conference about treating an infrastructure as a single large distributed virtual machine. A spin-off of this way of thinking can be found at Infrastructures Org. A more recent initiative comes from Data Center Pulse (DCP), a non-profit data center industry community founded on the principles of sharing best practices among its membership, which is currently working on a Standardized Data Center Stack Framework Proposal. The goals of the Stack are:

  • Treat the data center as a common system which can be easily described and measured.
  • Provide a common framework to describe, communicate, and innovate data center thinking among owner/operator peers and the industry.

With siloed decision-making, the measurement and accountability issues, and the absence of true usage and cost analysis, inefficiency becomes the rule. And then the promise of virtualization, becoming more flexible and therefore more effective and efficient, won't hold and/or can't be justified.

Also around true usage and cost analysis you get a sense of the 'not invented here' and/or 're-inventing the wheel' syndrome. There is another initiative that is tackling the analysis issue: the open source Open Data Center Measure of Efficiency (OpenDCME). In this model, 16 KPIs that span the data center are used to measure data center efficiency. As stated: “This first version of the OpenDCME model is based on, among others, the EU Code of Conduct for Data Centres best practices in combination with the feedback of applying the model to a large number of data centers.” Mansystems, a European IT specialist in service management, consultancy & support, created and released OpenDCME.

The observed issues have already been picked up in the market by initiatives like the Data Center Pulse Stack and OpenDCME. But there are also technical solutions: automation of IT infrastructure services and delivery by means of orchestration. Orchestration describes the automated arrangement, coordination, and management of complex computer systems, middleware, and services as part of the 'Dynamic Data Center'. These workflow-style solutions should make operating a data center much easier. The funny thing is that although the issues were voiced and articulated consistently at this summit, I didn't hear anything about orchestration. It looks like large-scale virtualization is still for the early adopters in a lot of organizations, and orchestration is just a step too far at this moment. And in the end it is not the tools, methods or solutions that make the difference, but the people who communicate and collaborate effectively.
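To make the workflow idea behind orchestration concrete, here is a minimal sketch. The step names and the provisioning request are entirely hypothetical, not tied to any product: the point is only that one engine runs the cross-silo steps (storage, network, server) in a defined order, instead of three teams coordinating by hand.

```python
# Minimal orchestration sketch. All step names and fields are illustrative
# assumptions, not taken from any real orchestration product.

def allocate_storage(request, log):
    log.append("storage: volume allocated")
    request["volume"] = "vol-001"

def configure_network(request, log):
    log.append("network: vlan configured")
    request["vlan"] = 100

def deploy_vm(request, log):
    log.append(f"server: vm deployed on vlan {request['vlan']}")
    request["vm"] = "vm-001"

def orchestrate(request):
    """Run the cross-silo provisioning steps in order, recording what was done."""
    log = []
    for step in (allocate_storage, configure_network, deploy_vm):
        step(request, log)
    return request, log

request, log = orchestrate({"name": "app-server"})
print(log)
```

A real engine would add rollback on failure and approval gates, but even this toy version shows why orchestration presupposes the silos agreeing on one shared workflow.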


Unifying ideas and initiatives: Data Center Stack Framework & OpenDCME

The current indexes for data center performance, such as DCiE, EUE and PUE, are not sufficient to drive data center efficiency. These indexes focus only on the power or energy consumption of the facilities. Each metric in itself says nothing about how efficient a data center really is. In order to drive and improve efficiency, a common framework that will describe any data center, anywhere, doing anything is required. The next step is to apply industry-established metrics for each block that is running in the data center. The combination of a framework and the metrics can form the basis of real data center performance monitoring.
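A small sketch makes the limitation concrete. PUE is total facility power divided by IT equipment power, and DCiE is its inverse expressed as a percentage; the figures below are invented for illustration. Two sites can score identically while spending their IT power very differently:

```python
# PUE and DCiE are simple ratios of facility power to IT power; the
# numbers used here are hypothetical examples.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the inverse of PUE, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

site_a = pue(total_facility_kw=2000, it_equipment_kw=1000)  # 2.0
site_b = pue(total_facility_kw=200, it_equipment_kw=100)    # 2.0

# Identical PUE and DCiE, yet site B could be running its servers nearly
# idle while site A runs them at high utilization: the ratio cannot tell
# you whether the IT power is doing useful work.
print(site_a, site_b, dcie(2000, 1000))  # 2.0 2.0 50.0
```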

And here two things come together.

Data Center Pulse (DCP), a non-profit data center industry community founded on the principles of sharing best practices among its membership, is working on a Standardized Data Center Stack Framework Proposal. The goals of the Stack are:

  • Treat the data center as a common system which can be easily described and measured.
  • Provide a common framework to describe, communicate, and innovate data center thinking among owner/operator peers and the industry.

So the aim is simple: provide one common framework that will describe any data center, anywhere, doing anything. The next step is to apply industry-established metrics for each block that is running in the data center.

Datacenter Pulse Stack Framework

Another initiative is the open source Open Data Center Measure of Efficiency (OpenDCME). In this model, 16 KPIs that span the data center are used to measure data center efficiency. As stated: “This first version of the OpenDCME model is based on, amongst others, the EU Code of Conduct for Data Centres best practices in combination with the feedback of applying the model to a large number of data centers.” Mansystems, a European IT specialist in service management, consultancy & support, created and released OpenDCME. The proposed measures belong to the community and are open for contribution under a Creative Commons license agreement. The model consists of four domains:

  1. the IT assets that are located in the data center,
  2. the efficiency of those IT assets,
  3. the availability, performance and capacity of the IT assets,
  4. the efficiency of data center IT processes.

The radar plot shown below presents the 4 domains and the 16 KPIs (4 per domain). The OpenDCME model, in its current version, does not tell you HOW to measure the 16 KPIs.

OpenDCME model
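Since the model prescribes the shape (4 domains, 4 KPIs each) but not the measurements, here is one possible way to lay out the data behind such a radar plot. The KPI names and scores are invented for illustration; only the 4×4 structure and the domain names come from the model.

```python
# Hypothetical OpenDCME-style scores: 4 domains x 4 KPIs, each normalized
# to 0..1. KPI names and values are assumptions for illustration only.
from statistics import mean

opendcme_scores = {
    "IT assets in the data center": {
        "asset inventory": 0.8, "rack utilization": 0.6,
        "power density": 0.5, "cabling": 0.7,
    },
    "IT asset efficiency": {
        "server utilization": 0.4, "storage utilization": 0.6,
        "network utilization": 0.5, "energy per transaction": 0.3,
    },
    "Availability, performance and capacity": {
        "availability": 0.9, "performance": 0.7,
        "capacity headroom": 0.5, "forecast accuracy": 0.4,
    },
    "Data center IT processes": {
        "change management": 0.6, "incident management": 0.7,
        "capacity management": 0.5, "lifecycle management": 0.4,
    },
}

# The 16 KPIs form the spokes of the radar; a per-domain average gives
# one summary figure per quadrant.
domain_scores = {d: mean(kpis.values()) for d, kpis in opendcme_scores.items()}
for domain, score in domain_scores.items():
    print(f"{domain}: {score:.2f}")
```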

Comparing the Stack Framework and the OpenDCME model, you can see that the two initiatives are complementary. Bringing these two initiatives together could accelerate the development of performance monitoring and management of data centers.

Let's see what happens …
