Data Center 2.0 – The Sustainable Data Center, Now Available!

Data Center 2.0 – The Sustainable Data Center is now available.

The book is now showing up on Amazon and will soon start to appear on the websites of other e-tailers.

Data Center 2.0 – The Sustainable Data Center is an in-depth look into the steps needed to transform modern-day data centers into sustainable entities.

See the press release:

Some nice endorsements were received:

“Data Center 2.0, is not so much about technology but about people, society and economic development. By helping readers understand that even if Data Centers, enabling the Digital economy, are contributing a lot to energy saving, they need to be sustainable themselves; Rien Dijkstra is on the right track. When explaining how to build sustainable Data Centers, through multi disciplinary approach, breaking the usual silos of the different expertise, Rien Dijkstra is proposing the change of behavior needed to build sustainable Data Centers. Definitely it is about people, not technology.” 

Paul-Francois Cattier, Global Senior Vice-President Data Center – Schneider Electric

“In Data Center 2.0 The Sustainable Data Center author Rien Dijkstra has gone several steps further in viewing the data center from the perspective of long term ownership and efficiency in combination with treating it as a system. It’s an excellent read with many sections that could be extracted and utilized in their own right. I highly recommend this read for IT leaders who are struggling with the questions of whether to add capacity (co-locate, buy, build, or lease) or how to create a stronger organizational ownership model for existing data center capacity. The questions get more complex every year and the risks more serious for the business. The fact that you’re making a business critical decision that must stand the test of technology and business change over 15 years is something you shouldn’t take lightly.” 

Mark Thiele, President and Founder Data Center Pulse

“Data centers used to be buildings to house computer servers along with network and storage systems, a physical manifestation of the Digital Economy. Internet of Things, the digitization of about everything in and around us, brings many profound changes. A data center is the place where it all comes together. Physical and digital life, fueled by energy and IT, economical and social demands and needs and not to forget sustainability considerations. Sustainable data centers have a great potential to help society to optimize the use of resources and to eliminate or reduce wastes of capital, human labor and energy. A data center in that sense is much more than just a building for servers. It has become a new business model. Data center 2.0 is a remarkable book that describes the steps and phases to facilitate and achieve this paradigm.” 

John Post, Managing Director – Foundation Green IT Amsterdam region

Preview Data Center 2.0 – The Sustainable Data Center

Data Center 2.0: The Sustainable Data Center is an in-depth look into the steps needed to transform modern-day data centers into sustainable entities.

To get an impression of the book you can read the prologue right here.

Prologue

In large parts of the world, computers, Internet services, mobile communication, and cloud computing have become a part of our daily lives, professional and private. Information and communication technology has invaded our lives and is recognized as a crucial enabler of economic and social activities across all sectors of our society. The opportunity to be connected anytime, anywhere, to communicate, interact, and exchange data is changing the world.

During the last two decades, a digital information infrastructure has been created whose functioning is critical to our society; governmental and business processes and services depend on computers. Data centers, buildings to house computer servers along with network and storage systems, are a crucial part of this critical digital infrastructure. They are the physical manifestation of the digital economy and of the virtual, digital information infrastructure where data is processed, stored, and transmitted.

A data center is a very peculiar and special place. It is the place where different worlds meet. It is a place where organizational (and individual) information needs and demands are translated into bits and bytes that are subsequently translated into electrons that are moved around the world. It is the place where the business, IT, and energy worlds come together. Jointly they form a jigsaw puzzle of stakeholders with different and sometimes conflicting interests and objectives that are hard to manage and control.

Electricity is the foundation of all digital information processing and of the digital services that are mostly provided from data centers. The quality and availability of the data center stand or fall with the quality and availability of the power supply to the data center.

For data centers, the observation is made that the annualized costs of power-related infrastructure have, in some cases, grown to equal the annualized capital costs of the IT equipment itself. Data centers have reached the point where the electricity costs of a server over its lifetime will equal or exceed the price of the hardware. Also, it is estimated that data centers are responsible for about 2% of total world electricity consumption.
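As a rough, back-of-the-envelope illustration of that claim, here is a small calculation; the server price, power draw, PUE, and tariff below are all assumed figures for the sake of the example, not numbers taken from the book.

```python
# Back-of-the-envelope comparison of a server's hardware price vs. its
# lifetime electricity cost. All figures are illustrative assumptions.

HARDWARE_PRICE_EUR = 2500        # assumed purchase price of a commodity server
AVG_POWER_W = 350                # assumed average draw, IT load only
PUE = 1.8                        # assumed power usage effectiveness (facility overhead)
ELECTRICITY_EUR_PER_KWH = 0.12   # assumed electricity tariff
LIFETIME_YEARS = 5               # assumed depreciation period

hours = LIFETIME_YEARS * 365 * 24
energy_kwh = AVG_POWER_W / 1000 * hours * PUE
electricity_cost = energy_kwh * ELECTRICITY_EUR_PER_KWH

print(f"Lifetime electricity cost: EUR {electricity_cost:,.0f}")
print(f"Hardware price:            EUR {HARDWARE_PRICE_EUR:,.0f}")
```

With these assumed numbers the lifetime electricity bill already exceeds the purchase price of the server.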

It is therefore easy to understand why the topic of electricity usage of data centers is a subject of discussion.

Electricity is still mostly generated from fossil fuel-based primary energy resources such as coal, gas, and oil. But this carbon-constrained power sector is under pressure. Resilience to a changing climate makes the decarbonization of these energy sources mandatory to ensure sustainability.

From different parts of society, the sustainability of data centers is being questioned: their energy efficiency and the indirect CO2 emissions caused by their consumption of carbon-based electricity are criticized.

The data center industry is working hard on these issues. According to the common view, it comes down to implementing technical measures. The idea is that more efficient power usage of servers, storage and network components, improved utilization, and better power and cooling management in data centers will solve the problems.

This idea can be questioned. Data centers are part of complex supply chains and have many stakeholders with differing perspectives; incomplete, contradictory, and changing requirements; and complex interdependencies. In this situation there is no simple, clear definition of data center efficiency, and there is no simple right or optimal solution.

According to the Brundtland Commission of the United Nations, sustainability is “to meet the needs of the present without compromising the ability of future generations to meet their own needs.”

Given that we live in a world with limited resources and that the demand for digital infrastructure is growing exponentially, limits will be encountered. The limiting factor to future economic development is the availability and the functioning of natural capital. Therefore, we need a new and better industrial model.

Creating sustainable data centers is not a technical problem but an economic problem to be solved.

A sustainable data center should be environmentally viable, economically equitable, and socially bearable.

This book takes a conceptual approach to the subject of data centers and sustainability. The proposition of the book is that we must fundamentally rethink the “data center equation” of “people, planet, profit” in order to become sustainable.

The scope of this search goes beyond the walls of the data center itself. Given the great potential of information technology to transform today’s society into one characterized by sustainability, what is the position of data centers?

The data center is the place where it all comes together: energy, IT, and societal demands and needs.

Sustainable data centers have a great potential to help society to optimize the use of resources and to eliminate or reduce wastes of capital, human labor and energy.

The idea is that a sustainable data center is based on economics, organization, people, and technology. This book offers multiple views on and aspects of sustainable data centers to give readers a better understanding and to provoke thought on how to create them.

Creating a sustainable data center calls for a multidisciplinary approach and for different views and perspectives in order to obtain a good understanding of what is at stake.

The solution is, at the end of the day, a question of commitment.

The as-a-Service Datacenter, a new industrial model

It is said that cloud computing improves business agility because of the ability to rapidly and inexpensively provision technological infrastructure resources on a pay-per-use basis. So customers are urged not to buy and own hardware and software themselves but instead to make use of the cloud computing services offered by cloud computing providers.

To put it another way, what is the point of owning hardware and software? After all, the only thing you want to do with it is use it at the time you need it. The cloud computing proposition of on-demand delivery on a pay-per-use basis more or less removes the necessity to possess hardware and software.

But is this XaaS (“X-as-a-Service”) wisdom, as preached by the cloud computing providers, also applied by the providers themselves?

Service approach

A datacenter is an assembly of software, computer servers, storage, networks, and power and cooling/air-handling components. With these means the cloud computing provider assembles its cloud computing services. But do these providers need to own these components?

Can a datacenter, and thus a cloud computing proposition, be assembled from a set of software, computer server, storage, network, and power and cooling/air-handling services provided by third parties?

Go circular

The emphasis on services rather than goods is a central idea of a new industrial model, the circular economy, which is now gradually taking shape.

The circular economy draws a sharp distinction between the consumption and the use of materials. It is based on a 'functional service' model in which manufacturers retain the ownership of their products and, where possible, act as service providers, selling the use of products rather than their one-way consumption as in the current industrial model of the linear economy. In this new industrial model the goal of manufacturers is shifting: selling results rather than equipment, performance and satisfaction rather than products.

Examples

An example of this new approach is Philips, the global leader in LED lighting systems, which recently closed a deal with the Washington Metropolitan Area Transit Authority (WMATA) to provide 25 car parks with an LED lighting service. Philips will monitor and maintain the lighting solution based on a lighting-as-a-service model (Pay-per-Lux model).

As expressed by Philips, the implications from a business process perspective are profound. Out the window goes the traditional, linear approach to resource use: extract it, use it, and then dump it. Instead, management focus turns to principles such as remanufacturing, refurbishment, and reuse.

Another example is InterfaceFLOR. As part of their drive to increase the inherent level of sustainability of their business, they do not sell carpet as a product; they lease it as a service. That is: they supply, install, maintain, and replace the carpet.

Walk the talk

Back to the cloud computing provider. Why bother with the life cycle management of all the components you need? Why carry the burden of managing the buying, installing, maintaining, replacing, and decommissioning processes?

Why not do what you preach to your customers and start using the X-as-a-Service model for your own needs?

===

See also the blog post Data centers and Mount sustainability, or, if you want to know more about the circular economy, download a free copy of the book SenSe & SuStainability from the Ellen MacArthur Foundation.

=====

Sourcing IT: Cloud Computing Roadblocks

Roadblocks

Cloud computing, part of the widespread adoption of a service-oriented business approach, is becoming pervasive and is rapidly evolving with new propositions and services. Organisations are therefore faced with the question of how these various cloud propositions from different providers will work together to meet business objectives.

The latest cloud computing study from 451 Research showed some interesting key findings:

  1. Sixty percent of respondents view cloud computing as a natural evolution of IT service delivery and do not allocate separate budgets for cloud computing projects.
  2. Despite the increased cloud computing activity, 83% of respondents are facing significant roadblocks to deploying their cloud computing initiatives, a 9% increase since the end of 2012. IT roadblocks have declined to 15% while non-IT roadblocks have increased to 68% of the sample, mostly related to people, processes, politics and other organizational issues.
  3. Consistent with many other enterprise cloud computing surveys, security is the biggest pain point and roadblock to cloud computing adoption (30%). Migration and integration of legacy and on-premise systems with cloud applications (18%) is second, lack of internal process (18%) is third, and lack of internal resources/expertise (17%) is fourth.

It looks like many organizations believe in a smooth evolution of their current IT infrastructure towards a cloud computing environment, while, on the other hand, they are currently facing significant roadblocks.

Remarkably, one very important roadblock is missing from the top four mentioned in this study.

The cloud computing service models offer the promise of massive cost savings combined with increased IT agility, based on the assumptions of:

  • Delivering IT commodity services.
  • Improved IT interoperability and portability.
  • A competitive and transparent cost model on a pay-per-use basis.
  • The quiet assumption that the service provider acts on behalf of and in the interest of the customer.


So with cloud computing you could get rid of the traditional proprietary, costly, and inflexible application silos. These traditional application silos should be replaced by an assembly of standardised cloud computing building blocks with standard interfaces that ensure interoperability.

But does the current market offer standardized cloud computing building blocks and interoperability?

Commodity

Currently the idea is that cloud computing comes in three flavors, based on the NIST reference model [1]:

  1. Cloud Software as a Service (SaaS); “The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).”
  2. Cloud Platform as a Service (PaaS); “The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.”
  3. Cloud Infrastructure as a Service (IaaS); “The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.”

Each standard service offering (SaaS, PaaS, IaaS) has a well-defined interface. The consequence of this is that the consumer can’t manage or control the underlying components of the platform that is provided. The platform offers the service as-is. Therefore the service is an IT commodity service; customization is by definition not possible [2].

But is this a realistic picture of the current landscape? In reality the distinction between IaaS, PaaS, and SaaS is not so clear. Providers are offering all kinds of services that don’t fit well in this three-flavor scheme. Johan den Haan, CTO of Mendix, wrote a nice blog post about this topic in which he proposes a more detailed framework to categorize the different approaches seen in the market today.

Besides a more granular description of cloud computing services, a distinction is made between compute, storage, and networking. This aligns very well with the distinction that can be made from a software perspective: behavior (vs. compute), state (vs. storage), and messages (vs. networking). The end result is a framework with 3 columns and 6 layers, as shown in the image below.

Cloud Platform Framework. Courtesy of Johan den Haan.
  • Layer 1: The software-defined datacenter.
  • Layer 2: Deploying applications.
  • Layer 3: Deploying code.
  • Layer 4: Model/process driven deployment of code.
  • Layer 5: Orchestrating pre-defined building blocks.
  • Layer 6: Using applications.

 While layer 2 is focused on application infrastructure, layer 3 shifts the focus to code. In other words: layer 2 has binaries as input, layer 3 has code as input.
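For readers who want a concrete handle on this framework, here is a minimal sketch of it as a simple data structure. The layer and column names follow the list above, but the code itself is only an illustration of mine, not anything defined by Mendix or NIST.

```python
# Minimal sketch of the cloud platform framework described above: six layers
# crossed with three concerns. Purely illustrative.

LAYERS = {
    1: "The software-defined datacenter",
    2: "Deploying applications",      # input: binaries
    3: "Deploying code",              # input: code
    4: "Model/process driven deployment of code",
    5: "Orchestrating pre-defined building blocks",
    6: "Using applications",
}

COLUMNS = ("compute", "storage", "networking")  # vs. behavior, state, messages

def describe(layer: int, column: str) -> str:
    """Return a human-readable label for one cell of the framework."""
    return f"Layer {layer} ({LAYERS[layer]}) / {column}"

if __name__ == "__main__":
    for layer in LAYERS:
        for column in COLUMNS:
            print(describe(layer, column))
```

Placing a given provider offering in one of these 18 cells already says a lot about which interfaces you depend on and how portable your workload is.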

The framework shows the complexity organisations face when they want to make the transition to cloud computing. What kinds of interfaces or APIs are offered by the different cloud providers, and are they standardized or proprietary? What does this mean for migration and integration?

Interoperability

The chair of the IEEE Cloud Computing Initiative, Steve Diamond [3], stated that “Cloud computing today is very much akin to the nascent Internet – a disruptive technology and business model that is primed for explosive growth and rapid transformation.“ However, he warns that “without a flexible, common framework for interoperability, innovation could become stifled, leaving us with a siloed ecosystem.”

Clouds cannot yet federate and interoperate. Such federation is called the Intercloud. The concept of a cloud operated by one service provider or enterprise interoperating with a cloud operated by another provider is a powerful means of increasing the value of cloud computing to industry and users. IEEE is creating technical standards (IEEE P2302) for this interoperability.

The Intercloud architecture they are working on is analogous to the Internet architecture. There are public clouds, which are analogous to ISPs and there are private clouds, which an organization builds to serve itself (analogous to an Intranet). The Intercloud will tie all of these clouds together.

The Intercloud contains three important building blocks:

  • Intercloud Gateways; analogous to Internet routers, they connect a cloud to the Intercloud.
  • Intercloud Exchanges; analogous to Internet exchanges and peering points (called brokers in the US NIST Reference Architecture), where clouds can interoperate.
  • Intercloud Roots; services such as naming authority, trust authority, messaging, semantic directory services, and other root capabilities. The Intercloud root is not a single entity; it is a globally replicated and hierarchical system.
InterCloud Architecture. Courtesy of IEEE.

According to IEEE: “The technical architecture for cloud interoperability used by IEEE P2302 and the Intercloud is a next-generation Network-to-Network Interface (NNI) ‘federation’ architecture that is analogous to the federation approach used to create the international direct-distance dialing telephone system and the Internet. The federated architecture will make it possible for Intercloud-enabled clouds operated by disparate service providers or enterprises to seamlessly interconnect and interoperate via peering, roaming, and exchange (broker) techniques. Existing cloud interoperability solutions that employ a simpler, first-generation User-to-Network Interface (UNI) ‘Multicloud’ approach do not have federation capabilities and as a result the underlying clouds still function as walled gardens.”
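To make these building blocks a bit more tangible, the following is a purely hypothetical sketch of how a gateway, an exchange, and a root directory could interact. IEEE P2302 defines protocols and architecture, not a Python API; every class and method name below is an assumption made for illustration only.

```python
# Hypothetical sketch of the Intercloud building blocks described above.
# Names, methods, and behavior are illustrative assumptions, not IEEE P2302.

class IntercloudRoot:
    """Naming/trust authority: maps cloud names to exchange endpoints."""
    def __init__(self):
        self._directory = {}

    def register(self, cloud_name: str, exchange: "IntercloudExchange"):
        self._directory[cloud_name] = exchange

    def lookup(self, cloud_name: str) -> "IntercloudExchange":
        return self._directory[cloud_name]


class IntercloudExchange:
    """Peering point where clouds interoperate (a 'broker' in NIST terms)."""
    def __init__(self, name: str):
        self.name = name
        self._gateways = {}

    def attach(self, cloud_name: str, gateway: "IntercloudGateway"):
        self._gateways[cloud_name] = gateway

    def route(self, target_cloud: str, workload: dict) -> str:
        return self._gateways[target_cloud].receive(workload)


class IntercloudGateway:
    """Connects one cloud to the Intercloud, like a router connects a site."""
    def __init__(self, cloud_name: str, root: IntercloudRoot):
        self.cloud_name = cloud_name
        self.root = root

    def receive(self, workload: dict) -> str:
        return f"{self.cloud_name} accepted workload {workload['id']}"

    def federate(self, target_cloud: str, workload: dict) -> str:
        exchange = self.root.lookup(target_cloud)
        return exchange.route(target_cloud, workload)


# Usage: two clouds federate via a shared exchange and a root directory.
root = IntercloudRoot()
exchange = IntercloudExchange("eu-exchange")
gateways = {}
for name in ("cloud-a", "cloud-b"):
    gw = IntercloudGateway(name, root)
    exchange.attach(name, gw)
    root.register(name, exchange)
    gateways[name] = gw

print(gateways["cloud-a"].federate("cloud-b", {"id": 42}))
```

The point of the sketch is the separation of concerns: the gateway only knows its own cloud and the root, the root only knows where exchanges are, and the exchange does the actual peering.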

Lock-in

The current lack of standard cloud services with non-proprietary interfaces and APIs, and the absence of an operational cloud standard for interoperability, can cause all kinds of lock-in situations. We can distinguish four types of lock-in [2]:

  1. Horizontal lock-in; restricted ability to replace with comparable service/product.
  2. Vertical lock-in; solution restricts choice in other levels of the value chain.
  3. Inclined lock-in; a less than optimal solution is chosen because of a one-stop shopping policy.
  4. Generational lock-in; replacement of the solution with next-generation technology is prohibitively expensive and/or technically or contractually impossible.

Developing interoperability and federation capabilities between cloud services is considered a significant accelerator of market liquidity and lock-in avoidance.

The cloud computing market is still an immature market. One implication of this is that organisations need to take a more cautious and nuanced approach to IT sourcing and their journey to the clouds.

A proper IT infrastructure valuation, based on well-defined business objectives, demand behavior, functional and technical requirements and in-depth cost analysis, is necessary to prevent nasty surprises [2].

References

[1] Mell, P. & Grance, T., 2011, ‘The NIST Definition of Cloud Computing’, NIST Special Publication 800-145, USA

[2] Dijkstra, R., Gøtze, J., Ploeg, P.v.d. (eds.), 2013, ‘Right Sourcing – Enabling Collaboration’, ISBN 9781481792806

[3] IEEE, 2011, ’IEEE launches pioneering cloud computing initiative’,  http://standards.ieee.org/news/2011/cloud.html

Amazon Web Services Summit Amsterdam 2013


Last week I visited the Amazon Web Services Summit, which came to Amsterdam for the first time.

The event was kicked off by the CTO of Amazon, Werner Vogels, with a keynote introduction. He gave a short summary of Amazon’s seven-year-young success story, boasting of 34 price reductions since 2006 and the delivery of 33 services to 100,000 customers in 190 countries, all based on the cycle of More AWS Usage -> More Infrastructure -> Economies of Scale -> Lower Infrastructure Costs -> Reduced Prices -> More Customers -> More AWS Usage -> and so on.

He explained that in economic terms cloud computing offers:

  1. Elasticity; dynamic (on-demand) provisioning of IT resources to customers, without customers having to worry about peak loads.
  2. Economies of scale; multi-tenant sharing of resources and costs across a large pool of customers by means of centralization of infrastructure and improved utilization and efficiency.
  3. Shorter time to market; reduction of the average time to create and deploy a new solution from weeks to minutes.
  4. Different cost structure; trading capital expense for variable expense, with pricing based on real consumption (utility computing), as illustrated in the sketch after this list.
  5. Increased innovation; an increase in the number of experiments because of the very low cost of failure.
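The fourth point, trading capital expense for variable expense, can be made concrete with a simple comparison. All figures below are assumptions for illustration; they are not AWS prices.

```python
# Illustrative capex-vs-pay-per-use comparison. All figures are assumed.

OWNED_SERVER_CAPEX = 3000.0   # assumed purchase price, written off over 3 years
OWNED_YEARLY_OPEX = 600.0     # assumed power, space, and maintenance per year
YEARS = 3

ON_DEMAND_PER_HOUR = 0.25     # assumed pay-per-use price of a comparable instance
UTILIZATION = 0.20            # fraction of the time the capacity is actually needed

hours = YEARS * 365 * 24
owned_total = OWNED_SERVER_CAPEX + OWNED_YEARLY_OPEX * YEARS
cloud_total = ON_DEMAND_PER_HOUR * hours * UTILIZATION

print(f"Owned server over {YEARS} years: EUR {owned_total:,.0f}")
print(f"Pay-per-use at {UTILIZATION:.0%} utilization: EUR {cloud_total:,.0f}")
```

With these assumptions the pay-per-use option wins at low utilization; at sustained high utilization the comparison can easily tip the other way, which is exactly why the assumptions deserve scrutiny.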

This cloud computing promise was backed up with several examples from different industries. It was very nice to see that these cloud computing features were the leading theme of the summit. The other presenters, a combination of Amazon Solution Architects and AWS customer representatives, also gave evidence of these features without going into too much technical detail.

Although the presentations were convincing, the promise of massive cost savings combined with increased IT agility is based on the assumptions of:

  • Improved IT interoperability and portability.
  • Delivering IT commodity services.
  • A competitive and transparent cost model on a pay-per-use basis.
  • The quiet assumption that the service provider acts on behalf of and in the interest of the customer.

But are these assumptions always met? In general, companies, especially larger enterprises rather than start-ups, should check these assumptions when they want to outsource their infrastructure to cloud computing vendors. Horizontal, vertical, inclined, or generational lock-in is lying in wait.

Steve Diamond [1], chair of the IEEE Cloud Computing Initiative, stated that “Cloud computing today is very much akin to the nascent Internet – a disruptive technology and business model that is primed for explosive growth and rapid transformation.“ However, he warns that “without a flexible, common framework for interoperability, innovation could become stifled, leaving us with a siloed ecosystem.”

Nevertheless, Amazon was very persuasive in living up to its promise.

Interested in reading more on sourcing issues? Take a look at the book Right Sourcing: Enabling Collaboration or the website www.sourcing-it.org.

[1] IEEE, 2011, ’IEEE launches pioneering cloud computing initiative’, http://standards.ieee.org/news/2011/cloud.html

Cloud Computing, outsourcing your IT infrastructure?

Although IT infrastructure delivers no direct business value, for many organizations information systems are tightly interwoven with the fabric of the primary processes that create business value.

The puzzle is how to source your IT and whether cloud computing is the solution to this puzzle.
A presentation on this subject, following the publication of the book ‘Right Sourcing: Enabling Collaboration‘ (ISBN 978-1481792806), can be found on Slideshare.

Datacenters: The Need For A Monitoring Framework

For proper usage of, and collaboration between, BMS, DCIM, CMDB, and similar systems, the use of an architectural framework is recommended.

CONTEXT

A datacenter is basically a value stack: a supply chain of stack elements where each element is a service component (people, process, and technology that add up to a service). For each element in the stack, the IT organization has to assure the quality as agreed on. In essence these quality attributes are performance/capacity, availability/continuity, confidentiality/integrity, and compliance, and nowadays also sustainability. One of the greatest challenges for the IT organization was and is to coherently manage these quality attributes for the complete service stack or supply chain.
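As a minimal illustration of this idea of a value stack with agreed quality attributes, here is a small sketch; the component names and quality targets are assumptions made up for the example, not part of any standard or real data center.

```python
# Illustrative sketch of the datacenter value stack: each stack element is a
# service component with agreed quality attributes. All values are assumed.

from dataclasses import dataclass


@dataclass
class ServiceComponent:
    """One element of the value stack (people, process, and technology)."""
    name: str
    performance_capacity: str
    availability_continuity: str
    confidentiality_integrity: str
    compliance: str
    sustainability: str


stack = [
    ServiceComponent("power & cooling", "N+1", "99.99% uptime", "n/a",
                     "ISO 50001", "PUE below 1.5"),
    ServiceComponent("IT infrastructure", "80% utilization", "99.9% uptime",
                     "encryption at rest", "ISO 27001", "renewable power share"),
]

for element in stack:
    print(f"{element.name}: availability target {element.availability_continuity}")
```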

Currently a mixture of management systems is used to manage the datacenter service stack: BMS, DCIM, CMDB, and System & Network Management Systems.

GETTING RID OF THE SILOS

As explained in “Datacenters: blending BIM, DCIM, CMDB, etc.”, we are still talking about working in silos, where each participant involved in the life cycle of the datacenter uses its own information sets and systems. To achieve real overall improvements (instead of local optimization successes), better collaboration and information exchange between the different participants is needed.

FRAMEWORK

To steer and control datacenter usage successfully, a monitoring system should be in place. Accepting the fact that the participants use different systems, we have to find a way to improve the collaboration and information exchange between those systems. Therefore we need some kind of reference: an architectural framework.

When designing an efficient monitoring framework, it is important to assemble a coherent system of functional building blocks or service components. Loose coupling and strong cohesion, encapsulation, and the use of Facade and Model–View–Controller (MVC) patterns are strongly recommended because of the many proprietary solutions that are involved.
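As a minimal sketch of what such a Facade could look like in practice: a hypothetical vendor metering API is hidden behind a standard “Facility usage service” interface of the kind described below. The vendor API, its register names, and the interface method are all assumptions for illustration.

```python
# Minimal sketch of the Facade pattern applied to metering: a hypothetical
# vendor-specific API is hidden behind a standard 'Facility usage service'
# interface, so reporting components never see proprietary details.

from abc import ABC, abstractmethod


class FacilityUsageService(ABC):
    """Standard interface that every facility metering facade must offer."""

    @abstractmethod
    def power_usage_kw(self) -> float:
        ...


class AcmePduApi:
    """Stand-in for a proprietary vendor API (hypothetical)."""

    def read_register(self, register: str) -> int:
        # A real driver would talk Modbus/SNMP here; we return a fixed value.
        return 74213 if register == "OUTPUT_W" else 0


class AcmePduFacade(FacilityUsageService):
    """Facade that translates the proprietary API into the standard service."""

    def __init__(self, api: AcmePduApi):
        self._api = api

    def power_usage_kw(self) -> float:
        return self._api.read_register("OUTPUT_W") / 1000.0


# Reporting code depends only on the standard interface:
service: FacilityUsageService = AcmePduFacade(AcmePduApi())
print(f"Secondary power usage: {service.power_usage_kw():.1f} kW")
```

Swapping in another vendor's metering hardware then only requires a new facade, not changes to the reporting chain.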

BUILDING BLOCKS

Based on an earlier blog post about energy monitoring, a short description of the most common building blocks is given below:

  • Most vendors have their own proprietary APIs to interface with the metering devices. Because metering differs within and between data centers, these differences should be encapsulated in standard ‘Facility usage services‘: services for the primary, secondary, and tertiary power supply and usage, the cooling, and the air handling.
  • For the usage of the IT infrastructure (servers, storage, and network components) we have the same kind of issues, so the same recipe, encapsulation of proprietary APIs in standard ‘IT usage services‘, must be used.
  • Environmental conditions outside the data center, the weather, have their influence on the data center, so proper information about this must be made available by a dedicated Outdoor service component.
  • For a specific data center, a DC Usage Service Bus must be available to provide a common interface for exchanging usage information with reporting systems.
  • The DC Data Store is a repository (Operational Data Store or Data Warehouse) for datacenter usage data across data centers.
  • The Configuration management database(s) (CMDB) is a repository with the system configuration information of the facility infrastructure and the IT infrastructure of the data centers.
  • The Manufacturer specification database stores the specifications/claims of components as provided by the manufacturers.
  • The IT capacity database stores the capacity (processing power and storage) that is available for a certain time frame.
  • The IT workload database stores the workload (processing power and storage) that must be processed in a certain time frame.
  • The DC Policy Base is a repository with all the policies, rules, targets, and thresholds concerning datacenter usage.
  • The Enterprise DC Usage Service Bus must be available to provide a common interface for exchanging the policies, workload, capacity, CMDB, manufacturer, and usage information of the involved data centers with reporting systems.
  • The Composite services deliver different views and reports of the energy usage by assembling information from the different basic services by means of the Enterprise Bus.
  • The DC Usage Portal is the presentation layer for the different stakeholders who want to know something about the usage of the datacenter.

DC Monitoring Framework

ARCHITECTURE APPROACH

The usage of an architectural framework (reference architecture) is a must to get a monitoring environment working. The modular approach, focused on standard interfaces, gives the opportunity to “rip and replace” components. It also makes it possible to extend the framework with other service components. The service bus provides a standard, message-based exchange of data between the applications and prevents the creation of dedicated, proprietary point-to-point communication channels. To get this framework working, a standard data model is also mandatory.
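Such a standard data model for the messages on the service bus could be as simple as the following sketch. The field names are illustrative assumptions, not an existing standard.

```python
# Sketch of a standard usage message exchanged over the DC Usage Service Bus.
# Field names are illustrative assumptions, not an existing standard.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class UsageMessage:
    datacenter: str   # which data center reported the value
    service: str      # e.g. "facility.cooling" or "it.storage"
    metric: str       # e.g. "power_kw", "utilization_pct"
    value: float
    timestamp: str    # ISO 8601, UTC


def publish(message: UsageMessage) -> str:
    """Serialize the message; a real bus client would send this payload."""
    return json.dumps(asdict(message))


msg = UsageMessage(
    datacenter="ams-01",
    service="facility.cooling",
    metric="power_kw",
    value=112.4,
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(publish(msg))
```

As long as every facade publishes messages in this one agreed shape, the composite services and the DC Usage Portal can stay ignorant of the proprietary systems underneath.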