Data Center 2.0 – The Sustainable Data Center

I am currently busy with the final steps to get the forthcoming book ‘Data Center 2.0 – The Sustainable Data Center’ (ISBN 978-1499224689) published at the beginning of the summer.

Some quotes from the book:

“A data center is a very peculiar and special place. It is the place where different worlds meet each other. A place where organizational (and individual) information needs and demands are translated into bits and bytes that are subsequently translated into electrons that are moved around the world. It is the place where the business, IT and energy worlds come together. Jointly they form a jigsaw puzzle of stakeholders with different and sometimes conflicting interests and objectives that are hard to manage and to control.

Given the great potential of Information Technology to transform today’s society into one characterised by sustainability, what is the position of data centers?

……..

The data center is the place where it all comes together: energy, IT, and societal demands and needs.

…….

A sustainable data center should be environmentally viable, economically equitable, and socially bearable. To become sustainable, the data center industry must free itself from the shackles of 19th-century ideas and concepts of production. They are too simple for our 21st-century world.

The combination of service-dominant logic and cradle-to-cradle makes it possible to create a sustainable data center industry.

Creating sustainable data centers is not a technical problem but an economical problem to be solved.”

The book takes a conceptual approach to the subject of data centers and sustainability. It offers multiple views and perspectives on sustainable data centers to help readers gain a better understanding and to provoke thought on how to create sustainable data centers.

The book has already received endorsements from Paul-Francois Cattier, Global Senior Vice President Data Center at Schneider Electric, and John Post, Managing Director of Foundation Green IT Amsterdam region.

Table of contents

1 Prologue
2 Signs Of The Time
3 Data Centers, 21st Century Factories
4 Data Centers A Critical Infrastructure
5 Data Centers And The IT Supply Chain
6 The Core Processes Of A Data Center
7 Externalities
8 A Look At Data Center Management
9 Data Center Analysis
10 Data Center Monitoring and Control
11 The Willingness To Change
12 On The Move: Data Center 1.5
13 IT Is Transforming Now!
14 Dominant Logic Under Pressure
15 Away From The Dominant Logic
16 A New Industrial Model
17 Data Center 2.0

Data center CO2 emissions

There has been some debate about the source of the electricity a data center uses and the CO2 emissions it causes.

Recently some interesting figures became available from the International Energy Agency (IEA): the CO2 emissions per kWh of electricity generated, published in the 2013 edition of “CO2 Emissions from Fuel Combustion – Highlights”.

It isn’t easy to find consistent and complete time series. Much of the data that can be found uses different definitions and/or different time periods, which makes it difficult to aggregate the figures. The IEA has published time series for the period 1990–2011.

To make some comparisons, a selection from different parts of the world is shown in Table 1. Remarkable differences in CO2 emissions can be found. Some countries show a huge decrease in CO2/kWh emissions over the period 1990–2011, whereas others show an increase. Within a region the differences are also considerable. Zooming in on the E.U. countries with Tier 1 data center markets (the United Kingdom, France, Germany and the Netherlands, with the data center hubs London, Paris, Frankfurt and Amsterdam), we see CO2/kWh reductions of 34.4%, 41.9%, 21.4% and 33.4% respectively. Differences in emissions and emission trends are caused by different energy policies and different compositions of the power plant fleet.

Table 1. CO2 emissions per kWh of electricity generation (source: IEA).

Country   2011 (kg CO2/kWh)   Difference 1990–2011 (%)

E.U. 0.352 -21.4
United Kingdom 0.441 -34.4
France 0.061 -41.9
Germany 0.477 -21.4
The Netherlands 0.404 -33.4
Russian Federation 0.437 7.6
U.S.A. 0.503 -13.6
Canada 0.167 -14.8
Australia 0.823 0.7
Singapore 0.500 -44.9
Japan 0.497 14.3
Korea 0.545 4.8
India 0.856 5.4
China 0.764 -14.5

 

The figures shown are averages. The CO2 emissions of a data center depend on the power plants that are actually used to deliver electricity to the data center. Depending on the electricity demand, the power supplier will dispatch different power plants. Power plants are dispatched according to their production efficiencies (short-run marginal costs of production) and capacity, and this production mix influences the CO2 emissions per kWh.
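To illustrate how such a production mix translates into an average emission factor, here is a minimal sketch of merit-order dispatch. The plant names, capacities, costs and emission factors are purely hypothetical and only serve to show the arithmetic.

```python
# Hypothetical merit-order dispatch: plants are used in order of
# short-run marginal cost until demand is covered; the resulting mix
# determines the average CO2 emission per kWh delivered.

# (name, marginal cost EUR/MWh, available capacity MW, kg CO2 per kWh)
plants = [
    ("nuclear", 10, 1500, 0.005),
    ("coal",    30, 1000, 0.900),
    ("gas",     45,  800, 0.400),
]

def mix_emission_factor(demand_mw):
    """Average kg CO2/kWh for a given demand, dispatching the cheapest plants first."""
    remaining = demand_mw
    co2_per_hour = 0.0  # kg CO2 emitted per hour of operation
    for name, cost, capacity, factor in sorted(plants, key=lambda p: p[1]):
        used = min(remaining, capacity)       # MW taken from this plant
        co2_per_hour += used * 1000 * factor  # MW -> kW, times kg CO2/kWh
        remaining -= used
        if remaining <= 0:
            break
    return co2_per_hour / (demand_mw * 1000)  # back to kg CO2 per kWh

print(mix_emission_factor(1800))  # low demand: mostly nuclear, some coal -> ~0.15
print(mix_emission_factor(3000))  # high demand: gas is dispatched as well -> ~0.37
```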

CO2 emission per server

To get an impression of the CO2 emissions per server in different parts of the world, we make use of the report “Estimating total power consumption by servers in the U.S. and the world” by J.G. Koomey of Stanford University, in which the power usage of low-, mid- and high-range servers is estimated at 180, 420 and 4800 watts. Based on 24 hours x 365 days of usage, this leads to the figures in Table 2.
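A quick sketch of the arithmetic behind Table 2, combining the Koomey wattage estimates with the IEA emission factors from Table 1 (only a few countries shown here for brevity):

```python
# Yearly CO2 emission per server = power (kW) * hours per year * kg CO2/kWh
HOURS_PER_YEAR = 24 * 365  # 8760 hours

server_watts = {"low": 180, "mid": 420, "high": 4800}  # from Koomey's report
emission_kg_per_kwh = {"E.U.": 0.352, "France": 0.061, "U.S.A.": 0.503}  # from Table 1

for country, factor in emission_kg_per_kwh.items():
    for klass, watts in server_watts.items():
        kwh_per_year = watts / 1000 * HOURS_PER_YEAR
        print(f"{country:8s} {klass}-range server: {kwh_per_year * factor:7.0f} kg CO2/year")
```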

Table 2. Yearly CO2 emission per server.

Kg CO2/year Low range server Medium range server High range server
E.U. 555 1295 14801
United Kingdom 695 1623 18543
France 96 224 2565
Germany 752 1755 20057
The Netherlands 637 1486 16987
Russian Federation 689 1608 18375
U.S.A. 793 1851 21150
Canada 263 614 7022
Australia 1298 3028 34606
Singapore 788 1840 21024
Japan 784 1829 20898
Korea 859 2005 22916
India 1350 3149 35993
China 1205 2811 32125

 

Data center use case

What do all these figures mean for a data center? Let’s take, for example, a data center with 1000 servers and a PUE of 1.8. In this case we use a server mix of 95% low-range, 4% mid-range and 1% high-range servers. Besides servers, the data center also uses storage and network components. The ratio of the energy use of servers versus storage versus network components is set to 75:15:10.

We can define a worst-case scenario in which electricity is generated by conventional coal combustion; in that case 1 kWh of electricity corresponds to roughly 1 kg of CO2 emissions. For the data center in this use case, that gives an upper limit of 4957 metric tons of CO2 per year. In reality, power suppliers use a mix of different energy sources. As we can see in Table 3, the lowest emission is 302 tons and the highest is 4244 tons: a difference of a factor of 14!
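The use-case numbers can be reproduced with a few lines of code. This is only a sketch of the arithmetic behind Table 3, not a full data center energy model:

```python
# Data center use case: 1000 servers, PUE 1.8,
# server mix 95% low / 4% mid / 1% high range,
# energy ratio servers : storage : network = 75 : 15 : 10.
HOURS_PER_YEAR = 24 * 365

servers = {180: 950, 420: 40, 4800: 10}  # watts -> number of servers
server_kwh = sum(w / 1000 * n for w, n in servers.items()) * HOURS_PER_YEAR

storage_kwh = server_kwh * 15 / 75  # storage uses 15/75 of the server energy
network_kwh = server_kwh * 10 / 75  # network uses 10/75 of the server energy
facility_kwh = (server_kwh + storage_kwh + network_kwh) * 1.8  # PUE of 1.8

def tons_co2(kg_per_kwh):
    """Yearly CO2 emission of the whole data center in metric tons."""
    return facility_kwh * kg_per_kwh / 1000

print(round(tons_co2(1.000)))  # worst case, coal only: ~4957 t
print(round(tons_co2(0.352)))  # E.U. average factor:   ~1745 t
print(round(tons_co2(0.061)))  # France:                 ~302 t
```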

Table 3. CO2 emission of a data center.

Metric ton CO2/year Servers Storage Network Data center
E.U. 727 145 97 1745
United Kingdom 911 182 121 2186
France 126 25 17 302
Germany 985 197 131 2365
The Netherlands 835 167 111 2003
Russian Federation 903 181 120 2166
U.S.A. 1039 208 139 2494
Canada 345 69 46 828
Australia 1700 340 227 4080
Singapore 1033 207 138 2479
Japan 1027 205 137 2464
Korea 1126 225 150 2702
India 1768 354 236 4244
China 1578 316 210 3787

 

Zero emission

There is of course the alternative case of zero CO2 emissions if the electricity supply is completely based on nuclear, hydro or renewable energy. Some countries, like Iceland, Norway, Sweden and Switzerland, have extremely low CO2/kWh emissions (1, 13, 17 and 30 grams respectively).

 

The fundamental problem of IT and Data center e-waste

The global e-waste problem is escalating: by 2017, the world volume of end-of-life e-products is expected to be 33% higher than in 2012, according to a new study by the Solving the E-Waste Problem (StEP) Initiative.

Based on current trends, e-waste will grow from 48.9 million metric tons in 2013 to 65.4 million metric tons in 2017.

The StEP Initiative has created an interactive map with details on each country’s e-waste figures and the regional or federal rules on how to dispose of the waste.

It shows that in 2012 China and the United States topped the world’s totals in market volume of EEE (electrical and electronic equipment) and e-waste. China put the highest volume of EEE on the market in 2012 – 11.1 million tons, followed by the US at 10 million tons.

However, e-waste per capita gives a different view of e-waste production. Here the US shows an average of 29.8 kg per person, whereas China shows a per capita figure of 5.4 kg.

A lot of these electronic devices are IT and telecommunications equipment used by corporate consumers. So in one way or another, these corporate consumers are taking part in this explosive growth of e-waste.

We all know that e-waste is serious business and, if not properly handled, it can cause severe environmental damage and harm to human health (see When your IT equipment dies, where does it go?).

There is also another, less well-known side of the e-waste coin: e-waste is also about wasting rare earth metals, metals which are essential for IT equipment and very costly to produce (see Rare earths, E-waste and Green IT).

So there are moral, economic and financial incentives to stop this explosive growth of e-waste.

As stated in a Green Grid (TGG) white paper, the global community needs a user-based metric to quantify how responsibly a corporate consumer of IT equipment manages that equipment once it has been used and is no longer useful to the corporate consumer.

The idea is that an organization must manage all of its material streams. When an object is obsolete (“end of current use” (EOCU) or “end of life” (EOL)), there are three possible material streams: reuse, recycling and waste, where waste represents material that is sent to final disposal (e.g., landfilling or incineration).

Therefore they introduced the Electronics Disposal Efficiency (EDE) metric:

EDE = Total weight of decommissioned IT equipment handled by known responsible entities / Total weight of decommissioned IT equipment

where the reuse, recycling and waste material streams can be administered separately.
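As a simple illustration, assuming purely hypothetical weights for the material streams, the EDE could be computed like this:

```python
# Hypothetical decommissioned-equipment streams for one year, in kg.
# Reuse, recycling and responsibly treated waste are handled by known
# responsible entities; 'unknown' covers equipment whose disposal path
# cannot be accounted for.
streams = {"reuse": 12_000, "recycling": 7_500, "waste_responsible": 1_500, "unknown": 4_000}

responsibly_handled = streams["reuse"] + streams["recycling"] + streams["waste_responsible"]
total_decommissioned = sum(streams.values())

ede = responsibly_handled / total_decommissioned
print(f"EDE = {ede:.0%}")  # 84% in this made-up example
```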

Using this metric is a good start for creating awareness of the e-waste issue in a corporate environment but there is a fundamental problem.

E-waste is a symptom of an industrial production system inherited from the steam-driven days of the first industrial revolution in the 18th century. It is a linear, ‘take-make-waste’ process where “materials are extracted from the earth’s crust, transported to manufacturing sites, used to produce products (all materials not part of the end product are discarded as waste), then products are transported to users and finally, at the end of life, discarded as waste”.

The implicit assumption of this production system is that we have infinite resources. Now, in the 21st century, we should know better: fossil fuels are limited, rare earth elements in electronic components are scarce, and water is scarce. So by definition this classical way of producing is unsustainable.

In a cradle to cradle production system, all materials used in industrial or commercial processes fall into one of two categories: technical or biological nutrients. Technical nutrients are strictly limited to non-toxic, non-harmful synthetic materials that have no negative effects on the natural environment; they can be used in continuous cycles as the same product without losing their integrity or quality. In this way these materials can be used over and over.


A fundamental transition is needed. Instead of buying, consuming and wasting products, one should try to buy services where products are used and recycled. In this circular economy model, manufacturers retain the ownership of their products and act as service providers, selling the use of products rather than their one-way consumption as in the current linear industrial model. This should be the fundamental solution to e-waste.

A utopian dream? Multinationals like Philips and InterfaceFLOR are already working with this concept by selling light-as-a-service or carpet-as-a-service and creating closed production loops.

(see Data Centers and Mount Sustainability and The as-a-Service Datacenter, a new industrial model)

The as-a-Service Datacenter, a new industrial model

It is said that cloud computing improves business agility because of the ability to rapidly and inexpensively provision technological infrastructure resources on a pay-per-use basis. So customers are urged not to buy and own hardware and software themselves, but instead to make use of the cloud computing services offered by cloud computing providers.

To put it another way, what is the point of owning hardware and software? The only thing you want to do with it is use it at the time you need it. The cloud computing proposition of on-demand delivery on a pay-per-use basis more or less removes the necessity to possess hardware and software.

But is this XaaS (“X-as-a-Service”) wisdom, as preached by the cloud computing providers, also applied by the providers themselves?

 Service approach

A datacenter is an assembly of software, computer servers, storage, networks, and power and cooling/air handling components. With these means the cloud computing provider assembles its cloud computing services. But do these providers need to own these components?

Can a datacenter, and thus a cloud computing proposition, be assembled from a set of software, computer servers, storage, networks and power and cooling/air handling services provided by third parties?

Go circular

The emphasis on services rather than goods is a central idea of the new industrial model, the circular economy, that is now gradually taking shape.

The circular economy draws a sharp distinction between the consumption and the use of materials. It is based on a ‘functional service’ model in which manufacturers retain the ownership of their products and, where possible, act as service providers, selling the use of products rather than their one-way consumption as in the current linear industrial model. In this new industrial model the goal of manufacturers shifts: selling results rather than equipment, and performance and satisfaction rather than products.

Examples

An example of this new approach is Philips, the global leader in LED lighting systems, which has recently closed a deal with the Washington Metropolitan Area Transit Authority (WMATA) to provide 25 car parks with an LED lighting service. Philips will monitor and maintain the lighting solution based on a lighting-as-a-service model (pay-per-lux model).

As expressed by Philips the implications from a business process perspective are profound. Out the window goes the traditional, linear approach to resource use: namely, extract it, use it and then dump it. Instead, management focus turns to principles such as re-manufacturing, refurbishment and reuse.

Another example is InterfaceFLOR. As part of their drive to increase the inherent sustainability of their business, they do not sell carpet as a product; they lease it as a service. That is, they supply, install, maintain and replace the carpet.

Walk the talk

Back to the cloud computing provider. Why bother with the life cycle management of all the components you need? Why carry the burden of managing the buying, installing, maintaining, replacing and decommissioning processes?

Why not do what you preach to your customers and start using the X-as-a-Service model for your own needs?

===

See also the blog post Data Centers and Mount Sustainability, or if you want to know more about the circular economy, download a free copy of the book SenSe & SuStainability from the Ellen MacArthur Foundation.

=====

Needed: a Six Sigma Datacenter

As usual there was a lot of discussion on cooling and energy efficiency at the yearly DatacenterDynamics conference in Amsterdam last week: finding point solutions to be efficient and/or creating redundancy to circumvent possible technical risks. But is this the way to optimise a complex IT supply chain?

In a lot of industries statistical quality management methods are used to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimising variability in manufacturing and business processes. One of the more popular methods is Six Sigma which utilises the DMAIC phases Define, Measure, Analyse, Improve and Control to improve processes.

But when Eddie Desouza of Enlogic asked the audience (of one of the tracks at DatacenterDynamics) who was using the Six Sigma method to improve their datacenters, only three people out of a hundred raised their hands. Eddie Desouza was advocating the use of Six Sigma to improve the efficiency and the quality of a datacenter. He made the observation that datacenters do apply substantial upfront reliability analysis and invest in costly redundant systems, but rarely commit to data-driven continuous improvement philosophies. In other words, the focus is on fixing errors instead of on optimising the chain by reducing unwanted variability and the associated costs of poor quality.

He also, rightly, emphasised that datacenter operators should use a system approach instead of a component approach in optimising the datacenter. The internal datacenter supply chain is as strong as its weakest link and there is also the risk of sub-optimisation.

An example of the necessity to use a system approach and to use industry methods like Six Sigma can be found in a blog post of Alex Benik about “the sorry state of server utilization”. He refers to some reports from the past five years:

• A McKinsey study in 2008 pegging data-center utilization at roughly 6 percent.

• A Gartner report from 2012 putting the industry-wide utilization rate at 12 percent.

• An Accenture paper sampling a small number of Amazon EC2 machines and finding 7 percent utilization over the course of a week.

• Charts and a quote from Google, which show three-month average utilization rates for 20,000-server clusters. A typical cluster spent most of its time running at between 20 and 40 percent of capacity, and the highest-utilization cluster reached such heights (about 75 percent) only because it was doing batch work.

Or take a look at another source, the diagram below from the Green Grid:

[Figure: unused servers, source: The Green Grid]

Why is this overlooked? Why isn’t there a debate about this weak link, this huge under-utilisation of servers and, as a result, the huge waste of energy? Why focus on cooling, UPS, etc. if we have this weak link in the datacenter?
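To see why low utilisation translates into so much wasted energy, consider a rough sketch. The numbers are purely illustrative; in particular, the assumption that an idle server still draws about 60% of its peak power is a ballpark figure, not a measurement.

```python
# Rough illustration (hypothetical numbers): how much energy does a server
# burn at low utilisation compared with the useful work it delivers?
HOURS_PER_YEAR = 24 * 365

def yearly_kwh(peak_watts, utilisation, idle_fraction=0.6):
    """Approximate yearly energy use, assuming power scales linearly
    between idle (idle_fraction * peak) and peak with utilisation."""
    avg_watts = peak_watts * (idle_fraction + (1 - idle_fraction) * utilisation)
    return avg_watts / 1000 * HOURS_PER_YEAR

peak = 420  # mid-range server, watts
full = yearly_kwh(peak, utilisation=1.0)  # fully utilised
low = yearly_kwh(peak, utilisation=0.1)   # ~10% utilisation, as in the reports above
print(f"Energy at 100% utilisation: {full:.0f} kWh/year")
print(f"Energy at  10% utilisation: {low:.0f} kWh/year "
      f"({low / full:.0%} of full-load energy for 10% of the useful work)")
```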

As shown in another blog post, saving 1 unit of power consumption in information processing saves about 98 units upstream in the power supply chain (that is, up to the power plant).

So it is all very well to have a discussion about the energy efficiency of datacenter facility components, but what is it worth if this “sorry state of server utilisation” goes unnoticed and/or no action is taken on it? Eddie Desouza of Enlogic is right: datacenters need Six Sigma. It would help if datacenter operators embraced a system approach, focussing on the complete internal datacenter supply chain instead of on individual components, and using statistical quality management methods to improve efficiency and quality as in other industries.

Sourcing IT: Cloud Computing Roadblocks

Roadblocks

Cloud computing, part of the widespread adoption of a service-oriented business approach, is becoming pervasive and is rapidly evolving with new propositions and services. Organisations are therefore faced with the question of how the various cloud propositions from different providers will work together to meet business objectives.

The latest cloud computing study of 451 Research showed some interesting key findings:

  1. Sixty percent of respondents view cloud computing as a natural evolution of IT service delivery and do not allocate separate budgets for cloud computing projects.
  2. Despite the increased cloud computing activity, 83% of respondents are facing significant roadblocks to deploying their cloud computing initiatives, a 9% increase since the end of 2012. IT roadblocks have declined to 15% while non-IT roadblocks have increased to 68% of the sample, mostly related to people, processes, politics and other organizational issues.
  3. Consistent with many other enterprise cloud computing surveys, security is the biggest pain point and roadblock to cloud computing adoption (30%). Migration and integration of legacy and on-premise systems with cloud applications (18%) is second, lack of internal process (18%) is third, and lack of internal resources/expertise (17%) is fourth.

It looks like many organizations believe in a smooth evolution of their current IT infrastructure towards a cloud computing environment, while on the other hand, right now, organisations are facing significant roadblocks.

Remarkably, one very important roadblock is missing from the top four mentioned in this study.

The cloud computing service models offer the promise of massive cost savings combined with increased IT agility, based on the assumptions of:

  • Delivering IT commodity services.
  • Improved IT interoperability and portability.
  • A competitive and transparent cost model on a pay-per-use basis.
  • The tacit assumption that the service provider acts on behalf of and in the interest of the customer.


So with cloud computing you could get rid of the traditional proprietary, costly and inflexible application silos. These traditional application silos should be replaced by an assembly of standardised cloud computing building blocks with standard interfaces that ensure interoperability.

But does the current market offer standardized cloud computing building blocks and interoperability?

Commodity

Currently the idea is that cloud computing comes in three flavors. This is based on the NIST reference model [1]:

  1. Cloud Software as a Service (SaaS); “The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).”
  2. Cloud Platform as a Service (PaaS); “The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.”
  3. Cloud Infrastructure as a Service (IaaS); “The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.”

Each standard service offering (SaaS, PaaS, IaaS) has a well-defined interface. The consequence of this is that the consumer can’t manage or control the underlying components of the platform that is provided. The platform offers the service as-is. Therefore the service is an IT commodity service; customization is by definition not possible [2].

But is this a realistic picture of the current landscape? In reality the distinction between IaaS, PaaS, and SaaS is not so clear. Providers are offering all kinds of services that don’t fit well into this three-flavor scheme. Johan den Haan, CTO of Mendix, wrote a nice blog post about this topic in which he proposes a more detailed framework to categorize the different approaches seen on the market today.

Besides a more granular description of cloud computing services, a distinction is made between compute, storage, and networking, which aligns very well with the distinction that can be made from a software perspective: behavior (compute), state (storage), and messages (networking). The end result is a framework with 3 columns and 6 layers, as shown in the image below.

Cloud Platform Framework. Courtesy of Johan den Haan.
  • Layer 1: The software-defined datacenter.
  • Layer 2: Deploying applications.
  • Layer 3: Deploying code.
  • Layer 4: Model/process driven deployment of code.
  • Layer 5: Orchestrating pre-defined building blocks.
  • Layer 6: Using applications.

 While layer 2 is focused on application infrastructure, layer 3 shifts the focus to code. In other words: layer 2 has binaries as input, layer 3 has code as input.

The framework shows the complexity organisations are facing when they want to make the transition to cloud computing. What kinds of interfaces or APIs are offered by the different cloud providers, and are they standardized or proprietary? What does this mean for migration and integration?

Interoperability

The chair of the IEEE Cloud Computing Initiative, Steve Diamond[3], stated that “Cloud computing today is very much akin to the nascent Internet – a disruptive technology and business model that is primed for explosive growth and rapid transformation.“ However, he warns that “without a flexible, common framework for interoperability, innovation could become stifled, leaving us with a siloed ecosystem.”

Clouds cannot yet federate and interoperate. Such federation is called the Intercloud. The concept of a cloud operated by one service provider or enterprise interoperating with a cloud operated by another provider is a powerful means of increasing the value of cloud computing to industry and users. IEEE is creating technical standards (IEEE P2302) for this interoperability.

The Intercloud architecture they are working on is analogous to the Internet architecture. There are public clouds, which are analogous to ISPs and there are private clouds, which an organization builds to serve itself (analogous to an Intranet). The Intercloud will tie all of these clouds together.

The Intercloud contains three important building blocks:

  • Intercloud Gateways; analogous to Internet routers, they connect a cloud to the Intercloud.
  • Intercloud Exchanges; analogous to Internet exchanges and peering points (called brokers in the US NIST Reference Architecture), where clouds can interoperate.
  • Intercloud Roots; services such as naming authority, trust authority, messaging, semantic directory services, and other root capabilities. The Intercloud root is not a single entity; it is a globally replicated and hierarchical system.
InterCloud Architecture. Courtesy of IEEE.

According to IEEE: “The technical architecture for cloud interoperability used by IEEE P2302 and the Intercloud is a next-generation Network-to-Network Interface (NNI) ‘federation’ architecture that is analogous to the federation approach used to create the international direct-distance dialing telephone system and the Internet. The federated architecture will make it possible for Intercloud-enabled clouds operated by disparate service providers or enterprises to seamlessly interconnect and interoperate via peering, roaming, and exchange (broker) techniques. Existing cloud interoperability solutions that employ a simpler, first-generation User-to-Network Interface (UNI) ‘Multicloud’ approach do not have federation capabilities and as a result the underlying clouds still function as walled gardens.”

Lock-in

The current lack of standard cloud services with non-proprietary interfaces and APIs, and the absence of an operational cloud standard for interoperability, can cause all kinds of lock-in situations. We can distinguish four types of lock-in [2]:

  1. Horizontal lock-in; restricted ability to replace with comparable service/product.
  2. Vertical lock-in; solution restricts choice in other levels of the value chain.
  3. Inclined lock-in; less than optimal solution is chosen because of one-stop shopping policy.
  4. Generational lock-in; replacing the solution with next-generation technology is prohibitively expensive and/or technically or contractually impossible.

Developing interoperability and federation capabilities between cloud services is considered a significant accelerator of market liquidity and lock-in avoidance.

The cloud computing market is still an immature market. One implication of this is that organisations need to take a more cautious and nuanced approach to IT sourcing and their journey to the clouds.

A proper IT infrastructure valuation, based on well-defined business objectives, demand behavior, functional and technical requirements and in-depth cost analysis, is necessary to prevent nasty surprises [2].

References

[1] Mell, P. & Grance, T., 2011, ‘The NIST Definition of Cloud Computing’, NIST Special Publication 800-145, USA

[2] Dijkstra, R., Gøtze, J., Ploeg, P.v.d. (eds.), 2013, ‘Right Sourcing – Enabling Collaboration’, ISBN 9781481792806

[3] IEEE, 2011, ’IEEE launches pioneering cloud computing initiative’,  http://standards.ieee.org/news/2011/cloud.html

Available: IT Sourcing Textbook for the Classroom

Just before the summer, two fellow editors and I published a book about IT sourcing that is also suitable for the classroom. By presenting perspectives on IT sourcing from 21 different contributors, we as editors hope to enable and inspire readers to make better-informed IT sourcing decisions.

We received some nice endorsements:

“What most impressed me about this book is the scope of its coverage, and the level of academic rigor behind the analysis. The broad scope makes this relevant to senior executives concerned with strategy, operational executives accountable for results, and technologists on the ground. The academic rigor gives me confidence that the findings and recommendations are sound. This book will be the reference guide for anyone seriously involved in strategic sourcing.”
R. Lemuel Lasher
Global Chief Innovation Officer, CSC

“Thought provoking, occasionally frustrating and timely! As the theory of the firm is “tested” with evolving technology and globalization driving down transaction costs and enabling greater connectivity we’re presented with many different possibilities for business operating models. By exploring the perspectives of organization, economics, technology and people this book provides the reader with a compendium of theory, ideas and practical tips on “Right Sourcing” the business of IT and enabling different business models. The slightly idiosyncratic nature of a book with contributions from different authors only serves to engage the reader in the discussion. I hope the editors find a way to continue this discussion beyond the book!”
Adrian Apthorp
Head of Enterprise Architecture, DHL Express Europe

“Sourcing is a business theme which gets more and more attention. But making the right decisions is not easy. Sourcing is a wicked problem. This book provides valuable insights and concepts that will help to improve decisions with regard to sourcing. I would recommend this book to anyone who wants to achieve right sourcing.”
Martin van den Berg
Enterprise Architect, Co-Founder of DYA and author of several books, including “Dynamic Enterprise Architecture: How to Make It Work”.

“Sourcing is becoming an increasingly complex task – one that requires fundamental changes in management thinking, radical new ways in which to communicate and deal with knowledge, and a totally new and different view of all the stakeholders. In this book leading thinkers in this space, do a great job in opening up the reader’s mind to possibilities for alternative solutions that integrate the human aspects in everything we do.”
François Gossieaux
Co-President Human 1.0 and author of “The Hyper-Social Organization”

The book Right Sourcing helps undergraduate students to better understand and appreciate the topic of sourcing the information processing function of an organization.
Shortening time to market, huge transaction volumes, and 24 x 7 business at lower cost put a burden on organizations. How should one adapt to the increasing complexity and changes in the organization and its environment?
Sourcing the information processing function of an organization is covered from different perspectives, and light is shed on how we can ensure that the chosen solution is in line with the business strategy, business models, business plans and the technology that is available.
The book puts forward the proposal that the modern enterprise must fundamentally rethink its ‘sourcing equation’ to become or remain viable.
It is ideal for tomorrow’s decision makers, who need to understand how best to source the people, services and products an organization needs to deliver its business model and keep its commitments to all stakeholders.
Editors: Rien Dijkstra, John Gøtze, Pieter van der Ploeg

ISBN: 978-1481792806

List price: 23.96 USD / 20.60 EUR / 15.04 GBP

286 pages
Pub Date: May 2013

KEY FEATURES:

  • Explains management and design choices along with tradeoffs to consider when sourcing information systems and/or technology that run in an enterprise environment.
  • Explores trending topics such as cloud computing, SOA, security, complexity/chaos & organizations, and cross cultural collaboration.
  • Explores sourcing from the perspectives of organization, economics, technology and people.
Website: www.sourcing-it.org
Amazon: http://amzn.to/GzrqaT