Mobility and Cloud Computing and the need for a new security concept

The modern way of doing business and service provisioning is based on openness and agility.
This puts the traditional security concept, in which an organization is positioned as an “information fortress”, strongly under pressure. The traditional perimeter is vanishing, and sensitive information travels outside your organization in many ways and on many different devices. Mobility and cloud services are the name of the game.

For the most part, business information is stored as unstructured data. Unstructured data refers to information that either does not have a pre-defined data model or is not organized in a pre-defined manner; it is usually stored in files instead of databases.

This context creates a strong need for other security concepts. Instead of securing a predefined, fixed “location”, the focus must shift to securing the information itself as it moves: information security at the file level. In other words, file protection and control beyond the perimeter of the enterprise, because the rule set is stored with the file itself.
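As a minimal sketch of this idea (the rule set travels inside the file, so any compliant reader can enforce it outside the perimeter), consider the following toy example. It is illustrative only: real rights-management products do this with strong cryptography and central policy services, and all names below are invented.

```python
# A minimal sketch, not a real product: a file "envelope" that carries
# its own rule set, so the policy is enforced wherever the file travels.
from dataclasses import dataclass, field

@dataclass
class ProtectedFile:
    content: bytes                               # in practice: encrypted
    policy: dict = field(default_factory=dict)   # rule set stored with the file

    def open_for(self, user: str, action: str) -> bytes:
        # a compliant reader enforces the embedded policy,
        # not a perimeter firewall
        allowed = self.policy.get(user, [])
        if action not in allowed:
            raise PermissionError(f"{user} may not {action} this file")
        return self.content

doc = ProtectedFile(
    content=b"Q3 acquisition plan",
    policy={"alice": ["read", "edit"], "bob": ["read"]},
)
print(doc.open_for("bob", "read"))   # permitted
# doc.open_for("bob", "edit")        # would raise PermissionError
```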

There are security products on the market that make it possible to achieve security at the file level. Applying this security concept will affect the current roles and activities related to security.

It is still common that a security officer, in cooperation with the system administrator, ensures safe work areas to which firewalls, passwords and malware scanners protect access. The user organization is consulted, and its voice, the voice of the customer, is reflected in the general security policy, but the focus is on the security of the IT infrastructure. The bottom line: security is treated as an IT issue.

By implementing a security concept based on information security at the file level, one comes into very close contact with the daily operational activities of the users and the organization. Security choices have a much more immediate and greater impact on work and activities than in a traditional security concept.

Who has access to which information, and when and where? Access management can be handled very well with the new file access management tools. But access management is not an isolated topic. Access is granted to individuals and groups, so the topic of identity management comes into view and should be very well organized. Access management is part of the AAA triad: authentication (identity management), authorization (access management) and accounting. These are well-known concepts in the security world, but they were mainly applied at the level of the IT infrastructure. Now that the focus urgently needs to shift from securing the IT infrastructure to securing information, one comes into very close contact with the daily work of the organization. This includes authorization, as well as authentication and accounting.
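As a hedged sketch of what the AAA triad looks like at this micro level, the toy example below chains the three steps for a single file access; the identity store, access rules and audit log are all invented for illustration.

```python
# Illustrative only: the AAA triad applied to one file access.
# Authentication answers "who is this?", authorization "may they do this?",
# accounting "what happened?". All data here is made up.
import datetime

IDENTITIES = {"token-123": "alice"}              # authentication source
FILE_ACL = {"plan.docx": {"alice": {"read"}}}    # authorization rules
AUDIT_LOG = []                                   # accounting trail

def access_file(token: str, filename: str, action: str) -> bool:
    user = IDENTITIES.get(token)                 # 1. authenticate
    granted = (user is not None and
               action in FILE_ACL.get(filename, {}).get(user, set()))  # 2. authorize
    AUDIT_LOG.append({                           # 3. account, success or not
        "when": datetime.datetime.utcnow().isoformat(),
        "who": user or "unknown",
        "what": f"{action} {filename}",
        "granted": granted,
    })
    return granted

print(access_file("token-123", "plan.docx", "read"))   # True, and logged
print(access_file("bad-token", "plan.docx", "read"))   # False, and logged
```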

In the traditional division of labor, a security officer defines a security policy based on input from the user organization and on rules and legislation, and then agrees with the IT organization on the mechanisms that will be used to operationalize this policy. Product selection, technical equipping, and daily operation are performed by the IT organization. Thus the questions of why, how, and with what are answered. In fact, the user organization is only involved with the first question and plays no role in the implementation. The security officer, likewise, is hardly involved in the daily operations of the IT organization and the user organization. It is a slow, top-down approach.
In terms of a responsibility assignment matrix (RACI), one could say that the security officer is accountable, the system administrator is responsible, and the user organization is consulted and informed.

The application of information security through access management at the file level puts this traditional division of labor under pressure. The very fine granularity of file-level access rights, combined with the dynamics of working processes, the required agility, and ever shorter delivery times, conflicts with the traditional hierarchical top-down approach.

To solve this issue a different way of working is required, in which the user organization is much more involved in information security or, better stated, actually takes the lead. After all, it affects their daily work. The security officer, the auditor, and the IT organization must be well aware of the daily work of the user organization. They should gain knowledge about what is going on in the workplace to ensure that access management is workable, produces the desired result, and meets the expectations of the user organization. Identity management and accounting at this micro level should also be taken into account.

To achieve information security by means of access management at the file level, it is advisable to take a closer look at the different roles involved. These are the security officer, the auditor, the system administrator, and the “super user” or functional administrator of the access rules for the user organization.

• The security officer defines, on behalf of the user organization, the information security policy and the mechanisms (the solution approach regarding the security organization, workflow and technology) by which it should be realized.
• The auditor defines how accounting should take place (what the control points are, which information should be captured to comply with laws and regulations, what the audit trail is) and performs audits.
• The system administrator operationalizes access management within the framework set by the security officer and the auditor, and also takes care of the relationship (in terms of technology and execution) with the areas of identity management and monitoring (accounting).
• The “super user” or functional administrator of the user organization actually manages the access rules within the framework set by the security officer and the auditor.

To support the modern way of doing business and service provisioning, we need to create an agile security organization with a transparent separation of interests and responsibilities. Instead of a hierarchical security organization, a flat organization is needed in which the “super user” is accountable, the system administrator is responsible, the security officer and the auditor are consulted, and the end users are informed.
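Written out as data, the shift looks roughly like this (my own shorthand of the text above, not a prescription):

```python
# The RACI shift described above, as an illustrative mapping.
TRADITIONAL_RACI = {
    "security officer":      "Accountable",
    "system administrator":  "Responsible",
    "user organization":     "Consulted / Informed",
}

AGILE_FILE_LEVEL_RACI = {
    "super user (user organization)": "Accountable",
    "system administrator":           "Responsible",
    "security officer":               "Consulted",
    "auditor":                        "Consulted",
    "end users":                      "Informed",
}
```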

Cloud Computing, ownership matters

In the past fifteen years, many internal IT departments of enterprises evolved from artisan organizations that only assembled and provided customized, tailor-made products into hybrid craft-and-mass-production organizations that provide custom as well as standard products. Nowadays, however, these IT departments are confronted with external organizations that deliver standard services and products that can be easily adapted to the needs of the customer, based on the concept of “mass customization”.
Instead of buying, owning, and managing all kinds of IT components yourself, the IT infrastructure is now offered as a service by means of cloud computing.

There is a shift from a “goods-dominant logic” to a “service-dominant logic”, in which the focus moves away from tangibles and toward intangibles. This trend is supported by the use of virtualization technology for servers, storage and network devices, and also for applications.

The cloud computing offering of lower costs, shorter time to market, and on-demand provisioning makes it very tempting for organizations to outsource their IT infrastructure and services.

But aren’t we forgetting something? One of the most important aspects of information processing is that an organization has the right controls over the use of applications, data and infrastructure. Incomplete control can lead to all kinds of issues concerning business obligations and liabilities.

Control over these items is arranged by contracts, which are in fact an exchange of property rights. These property rights are somewhat complicated because they have several dimensions:
• The right of possession
• The right of control
• The right of exclusion (access rights)
• The right of enjoyment (earn income from it)
• The right of disposition (buying or selling)

The consequence of these different dimensions is that different parties can hold partitions of rights to particular elements of a resource. On top of this there is the issue of legislation. When we talk about ownership we have to be careful, because in legal systems ownership is based on tangible, physical objects. Of course we have legislation about intellectual property, trademarks, and so on, but when it comes to virtualized objects things become murky. Moreover, cloud computing is about delivering services (intangibles), not goods (tangibles).
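To see how such partitions can play out in a cloud sourcing contract, here is a small sketch; the parties, resources and rights assignments are invented for illustration.

```python
# Illustrative sketch: different parties holding different partitions of
# the "bundle of rights" to elements of one outsourced resource.
from dataclasses import dataclass

RIGHTS = {"possession", "control", "exclusion", "enjoyment", "disposition"}

@dataclass(frozen=True)
class RightsGrant:
    party: str
    resource: str
    rights: frozenset

contract = [
    RightsGrant("cloud provider", "storage array",
                frozenset({"possession", "disposition"})),
    RightsGrant("customer", "customer data",
                frozenset({"control", "exclusion", "enjoyment"})),
]
assert all(g.rights <= RIGHTS for g in contract)  # only known dimensions

def who_holds(right: str, resource: str) -> list:
    return [g.party for g in contract
            if g.resource == resource and right in g.rights]

print(who_holds("control", "customer data"))     # ['customer']
print(who_holds("possession", "storage array"))  # ['cloud provider']
```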

The transition from “goods-dominant logic” to “service-dominant logic” is a mind shift in which the “bundle of rights” of property ownership still matters.

Signing cloud computing deals is not only about money and provisioning; it is also about control. When a cloud computing sourcing deal takes place, the partitions of property rights should be grouped into appropriate bundles to stay in control.

DataCloud 2015 Awards


For the fifth year in a row I am part of the panel of judges for the ‘DataCloud Awards’. The winners of the DataCloud Awards 2015 will be announced in Monaco on 2 June 2015.

Read the brochure for more information about this event.

Over the past years, award winners have included companies such as Equinix, TelecityGroup, Interxion, Cohesive FT, iomart, Colt Technology Services, Cofely GDS Suez/akquinet GmbH, Claranet, 6Degrees, Prior1 GmbH, The Open Compute Foundation, Portugal Telecom, and NTT Communications among many others…

Now in its 8th year, the prestigious Awards provide exceptional marketing differentiation for both winners and runners-up, and are recognized as the premier industry accolade across Europe and internationally.

With an extended panel of expert judges, the 2015 categories, including new Cloud Awards, are designed to reflect changes in markets and technologies, while retaining a focus on recognising best-in-class performance and excellence across the industry.

In 2015 the Awards will again provide new benchmarks of aspiration for the companies shortlisted and world class recognition for winners.

Data Center 2.0 – The Sustainable Data Center, Now Available!

Data Center 2.0 – The Sustainable Data Center is now available.

The book is showing up on the website of Amazon and will soon start to pop up on the websites of other e-tailers.

Data Center 2.0 – The Sustainable Data Center is an in-depth look into the steps needed to transform modern-day data centers into sustainable entities.

See the press release.

Some nice endorsements were received:

“Data Center 2.0, is not so much about technology but about people, society and economic development. By helping readers understand that even if Data Centers, enabling the Digital economy, are contributing a lot to energy saving, they need to be sustainable themselves; Rien Dijkstra is on the right track. When explaining how to build sustainable Data Centers, through multi disciplinary approach, breaking the usual silos of the different expertise, Rien Dijkstra is proposing the change of behavior needed to build sustainable Data Centers. Definitely it is about people, not technology.” 

Paul-Francois Cattier, Global Senior Vice-President Data Center – Schneider Electric

“In Data Center 2.0 The Sustainable Data Center author Rien Dijkstra has gone several steps further in viewing the data center from the perspective of long term ownership and efficiency in combination with treating it as a system. It’s an excellent read with many sections that could be extracted and utilized in their own right. I highly recommend this read for IT leaders who are struggling with the questions of whether to add capacity (co-locate, buy, build, or lease) or how to create a stronger organizational ownership model for existing data center capacity. The questions get more complex every year and the risks more serious for the business. The fact that you’re making a business critical decision that must stand the test of technology and business change over 15 years is something you shouldn’t take lightly.” 

Mark Thiele, President and Founder Data Center Pulse

“Data centers used to be buildings to house computer servers along with network and storage systems, a physical manifestation of the Digital Economy. Internet of Things, the digitization of about everything in and around us, brings many profound changes. A data center is the place where it all comes together. Physical and digital life, fueled by energy and IT, economical and social demands and needs and not to forget sustainability considerations. Sustainable data centers have a great potential to help society to optimize the use of resources and to eliminate or reduce wastes of capital, human labor and energy. A data center in that sense is much more than just a building for servers. It has become a new business model. Data center 2.0 is a remarkable book that describes the steps and phases to facilitate and achieve this paradigm.” 

John Post, Managing Director – Foundation Green IT Amsterdam region

Data Center 2.0 – The Sustainable Data Center

I am currently busy with the final steps to get the forthcoming book ‘Data Center 2.0 – The Sustainable Data Center’ (ISBN 978-1499224689) published at the beginning of the summer.

Some quotes from the book:

“A data center is a very peculiar and special place. It is the place where different worlds meet each other. A place where organizational (and individual) information needs and demands are translated in bits and bytes that are subsequently translated in electrons that are moved around the world. It is the place where the business, IT and energy world come together. Jointly they form a jigsaw puzzle of stakeholders with different and sometimes conflicting interests and objectives that are hard to manage and to control.

Given the great potential of Information Technology to transform today’s society into one characterised by sustainability, what is the position of data centers?

……..

The data center is the place where it all comes together: energy, IT and societal demands and needs.

…….

A sustainable data center should be environmentally viable, economically equitable, and socially bearable. To become sustainable, the data center industry must free itself from the shackles of 19th-century ideas and concepts of production. They are too simple for our 21st-century world.

The combination of service-dominant logic and cradle-to-cradle makes it possible to create a sustainable data center industry.

Creating sustainable data centers is not a technical problem but an economic problem to be solved.”

The book takes a conceptual approach to the subject of data centers and sustainability. It offers multiple views and perspectives on sustainable data centers, to allow readers to gain a better understanding and to provoke thought on how to create sustainable data centers.

The book has already received endorsements from Paul-Francois Cattier, Global Senior Vice President Data Center of Schneider Electric, and John Post, Managing Director of Foundation Green IT Amsterdam region.

Table of contents

1 Prologue
2 Signs Of The Time
3 Data Centers, 21st Century Factories
4 Data Centers A Critical Infrastructure
5 Data Centers And The IT Supply Chain
6 The Core Processes Of A Data Center
7 Externalities
8 A Look At Data Center Management
9 Data Center Analysis
10 Data Center Monitoring and Control
11 The Willingness To Change
12 On The Move: Data Center 1.5
13 IT Is Transforming Now!
14 Dominant Logic Under Pressure
15 Away From The Dominant Logic
16 A New Industrial Model
17 Data Center 2.0

The as-a-Service Datacenter, a new industrial model

It is said that cloud computing improves business agility because of the ability to rapidly and inexpensively provision technological infrastructure resources on a pay-per-use basis. So customers are urged not to buy and own hardware and software themselves, but instead to make use of the cloud computing services offered by cloud computing providers.

To put it another way: what is the point of owning hardware and software, when the only thing you want is to use it at the time you need it? The cloud computing proposition of on-demand delivery on a pay-per-use basis more or less removes the necessity to possess hardware and software.

But is this XaaS (“X-as-a-Service”) wisdom, as preached by the cloud computing providers, also applied by the providers themselves?

Service approach

A datacenter is an assembly of software, computer servers, storage, networks, and power and cooling/air-handling components. With these means the cloud computing provider assembles its cloud computing services. But is there a need for these providers to own these components?

Can a datacenter, and thus a cloud computing proposition, be assembled from software, computer server, storage, network, and power and cooling/air-handling services provided by third parties?

Go circular

The emphasis on services rather than goods is a central idea of the circular economy, the new industrial model that is now gradually taking shape.

The circular economy draws a sharp distinction between the consumption and the use of materials. It is based on a ‘functional service’ model in which manufacturers retain the ownership of their products and, where possible, act as service providers—selling the use of products, not their one-way consumption as in the current industrial model of the linear economy. In this new industrial model the goal of manufacturers shifts: selling results rather than equipment, performance and satisfaction rather than products.

Examples

An example of this new approach is Philips, the global leader in LED lighting systems, which recently closed a deal with the Washington Metropolitan Area Transit Authority (WMATA) to provide 25 car parks with an LED lighting service. Philips will monitor and maintain the lighting solution based on a lighting-as-a-service model (the pay-per-lux model).

As expressed by Philips, the implications from a business process perspective are profound. Out the window goes the traditional, linear approach to resource use: extract it, use it and then dump it. Instead, management focus turns to principles such as remanufacturing, refurbishment and reuse.

Another example is InterfaceFLOR. As part of their drive to increase the inherent sustainability of their business, they do not sell carpet as a product; they lease it as a service. That is: they supply, install, maintain and replace the carpet.

Walk the talk

Back to the cloud computing provider. Why bother with the life cycle management of all the components you need? Why carry the burden of managing the buying, installing, maintaining, replacing and decommissioning processes?

Why not do what you preach to your customers and start using the X-as-a-Service model for your own needs?

===

See also the blog post Data centers and Mount sustainability, or, if you want to know more about the circular economy, download a free copy of the book SenSe & SuStainability from the Ellen MacArthur Foundation.

=====

Sourcing IT: Cloud Computing Roadblocks


Cloud computing, which is part of the widespread adoption of a service-oriented business approach, is becoming pervasive and is rapidly evolving with new propositions and services. Therefore organisations are faced with the question of how these various cloud propositions from different providers will work together to meet business objectives.

The latest cloud computing study by 451 Research showed some interesting key findings:

  1. Sixty percent of respondents view cloud computing as a natural evolution of IT service delivery and do not allocate separate budgets for cloud computing projects.
  2. Despite the increased cloud computing activity, 83% of respondents are facing significant roadblocks to deploying their cloud computing initiatives, a 9% increase since the end of 2012. IT roadblocks have declined to 15% while non-IT roadblocks have increased to 68% of the sample, mostly related to people, processes, politics and other organizational issues.
  3. Consistent with many other enterprise cloud computing surveys, security is the biggest pain point and roadblock to cloud computing adoption (30%). Migration and integration of legacy and on-premise systems with cloud applications (18%) is second, lack of internal process (18%) is third, and lack of internal resources/expertise (17%) is fourth.

It looks like many organizations believe in a smooth evolution of their current IT infrastructure towards a cloud computing environment, while, on the other hand, right now they are facing significant roadblocks.

Remarkably, one very important roadblock is missing from the top four mentioned in this study.

The cloud computing service models offer the promise of massive cost savings combined with increased IT agility, based on the assumptions of:

  • Delivering IT commodity services.
  • Improved IT interoperability and portability.
  • A competitive and transparent cost model on a pay-per-use basis.
  • The quiet assumption that the service provider acts on behalf of, and in the interest of, the customer.


So with cloud computing you could get rid of the traditional proprietary, costly and inflexible application silos. These traditional application silos should be replaced by an assembly of standardised cloud computing building blocks with standard interfaces that ensure interoperability.

But does the current market offer standardized cloud computing building blocks and interoperability?

Commodity

Currently the idea is that cloud computing comes in three flavors, based on the NIST reference model [1]:

  1. Cloud Software as a Service (SaaS); “The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).”
  2. Cloud Platform as a Service (PaaS); “The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.”
  3. Cloud Infrastructure as a Service (IaaS); “The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.”

Each standard service offering (SaaS, PaaS, IaaS) has a well-defined interface. The consequence is that the consumer cannot manage or control the underlying components of the platform that is provided. The platform offers the service as-is. The service is therefore an IT commodity service; customization is by definition not possible [2].

But is this a realistic picture of the current landscape? In reality the distinction between IaaS, PaaS, and SaaS is not so clear-cut. Providers are offering all kinds of services that don’t fit well into this three-flavor scheme. Johan den Haan, CTO of Mendix, wrote a nice blog about this topic in which he proposes a more detailed framework to categorize the different approaches seen in the market today.

Besides a more granular description of cloud computing services, a distinction is made between compute, storage, and networking. This aligns very well with the distinction that can be made from a software perspective: behavior (compute), state (storage), and messages (networking). The end result is a framework with three columns and six layers, as shown in the image below.

Cloud Platform Framework. Courtesy of Johan den Haan.
  • Layer 1: The software-defined datacenter.
  • Layer 2: Deploying applications.
  • Layer 3: Deploying code.
  • Layer 4: Model/process driven deployment of code.
  • Layer 5: Orchestrating pre-defined building blocks.
  • Layer 6: Using applications.

While layer 2 is focused on application infrastructure, layer 3 shifts the focus to code. In other words: layer 2 has binaries as input, layer 3 has code as input.
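A minimal sketch of this contrast, using two hypothetical client interfaces (no real provider API is implied; the class and method names are invented):

```python
# Layer 2 vs. layer 3, sketched as two invented client interfaces.

class Layer2Client:
    """Layer 2: the consumer hands over a ready-built artifact (binaries)."""
    def deploy(self, image: str) -> None:
        print(f"scheduling pre-built artifact {image}")

class Layer3Client:
    """Layer 3: the consumer hands over code; the platform builds and runs it."""
    def push(self, source_repo: str) -> None:
        print(f"building {source_repo} on the platform, then running it")

Layer2Client().deploy("shop-frontend:1.4.2")      # an IaaS/container-style flow
Layer3Client().push("git@example.org:shop.git")   # a PaaS-style flow
```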

The framework shows the complexity organisations face when they want to make the transition to cloud computing. What kinds of interfaces or APIs are offered by the different cloud providers, and are they standardized or proprietary? What does this mean for migration and integration?

Interoperability

The chair of the IEEE Cloud Computing Initiative, Steve Diamond [3], stated that “Cloud computing today is very much akin to the nascent Internet – a disruptive technology and business model that is primed for explosive growth and rapid transformation.” However, he warns that “without a flexible, common framework for interoperability, innovation could become stifled, leaving us with a siloed ecosystem.”

Clouds cannot yet federate and interoperate. Such federation is called the Intercloud. The concept of a cloud operated by one service provider or enterprise interoperating with a cloud operated by another provider is a powerful means of increasing the value of cloud computing to industry and users. IEEE is creating technical standards (IEEE P2302) for this interoperability.

The Intercloud architecture they are working on is analogous to the Internet architecture. There are public clouds, which are analogous to ISPs, and there are private clouds, which an organization builds to serve itself (analogous to an intranet). The Intercloud will tie all of these clouds together.

The Intercloud contains three important building blocks:

  • Intercloud Gateways: analogous to Internet routers, they connect a cloud to the Intercloud.
  • Intercloud Exchanges: analogous to Internet exchanges and peering points (called brokers in the US NIST Reference Architecture), where clouds can interoperate.
  • Intercloud Roots: services such as naming authority, trust authority, messaging, semantic directory services, and other root capabilities. The Intercloud root is not a single entity; it is a globally replicated and hierarchical system.
InterCloud Architecture. Courtesy of IEEE.

According to IEEE: “The technical architecture for cloud interoperability used by IEEE P2302 and the Intercloud is a next-generation Network-to-Network Interface (NNI) ‘federation’ architecture that is analogous to the federation approach used to create the international direct-distance dialing telephone system and the Internet. The federated architecture will make it possible for Intercloud-enabled clouds operated by disparate service providers or enterprises to seamlessly interconnect and interoperate via peering, roaming, and exchange (broker) techniques. Existing cloud interoperability solutions that employ a simpler, first-generation User-to-Network Interface (UNI) ‘Multicloud’ approach do not have federation capabilities and as a result the underlying clouds still function as walled gardens.”
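To make the roles of these building blocks concrete, here is a toy model; it is my own illustration, not the IEEE P2302 protocol, and all names are invented.

```python
# A toy model of the three Intercloud building blocks described above.

class IntercloudRoot:
    """Root: naming/trust services (globally replicated in the real design)."""
    def __init__(self):
        self.directory = {}                 # cloud name -> exchange
    def register(self, cloud_name, exchange):
        self.directory[cloud_name] = exchange
    def locate(self, cloud_name):
        return self.directory[cloud_name]

class IntercloudExchange:
    """Exchange: a peering point where gateways of different clouds meet."""
    def __init__(self, name):
        self.name = name
        self.gateways = {}                  # cloud name -> gateway
    def join(self, gateway):
        self.gateways[gateway.cloud_name] = gateway
    def forward(self, target_cloud, request):
        return self.gateways[target_cloud].handle(request)

class IntercloudGateway:
    """Gateway: connects one provider's cloud to the Intercloud (a 'router')."""
    def __init__(self, cloud_name):
        self.cloud_name = cloud_name
    def handle(self, request):
        return f"{self.cloud_name} served: {request}"

root = IntercloudRoot()
exchange = IntercloudExchange("exchange-eu-1")
for name in ("cloud-a", "cloud-b"):
    gateway = IntercloudGateway(name)
    exchange.join(gateway)
    root.register(name, exchange)

# cloud-a asks the root where cloud-b lives, then federates via the exchange
print(root.locate("cloud-b").forward("cloud-b", "run workload X"))
```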

Lock-in

The current lack of standard cloud services with non-proprietary interfaces and APIs, together with the absence of an operational cloud standard for interoperability, can cause all kinds of lock-in situations. We can distinguish four types of lock-in [2]:

  1. Horizontal lock-in: restricted ability to replace the solution with a comparable service/product.
  2. Vertical lock-in: the solution restricts choice at other levels of the value chain.
  3. Inclined lock-in: a less-than-optimal solution is chosen because of a one-stop-shopping policy.
  4. Generational lock-in: replacing the solution with next-generation technology is prohibitively expensive and/or technically or contractually impossible.

Developing interoperability and federation capabilities between cloud services is considered a significant accelerator of market liquidity and lock-in avoidance.

The cloud computing market is still immature. One implication of this is that organisations need to take a more cautious and nuanced approach to IT sourcing and to their journey to the clouds.

A proper IT infrastructure valuation, based on well-defined business objectives, demand behavior, functional and technical requirements and in-depth cost analysis, is necessary to prevent nasty surprises [2].

References

[1] Mell, P. & Grance, T., 2011, ‘The NIST Definition of Cloud Computing’, NIST Special Publication 800-145, USA

[2] Dijkstra, R., Gøtze, J., Ploeg, P.v.d. (eds.), 2013, ‘Right Sourcing – Enabling Collaboration’, ISBN 9781481792806

[3] IEEE, 2011, ’IEEE launches pioneering cloud computing initiative’,  http://standards.ieee.org/news/2011/cloud.html