In the book “The Big Switch”, Nicholas Carr presents a historical analysis to argue that data centers and the Internet are following the same developmental path that electric power did 100 years ago. At that time, companies stopped generating their own power and plugged into the newly built electric grid, fed by huge general-purpose power plants. The big switch is from today’s proprietary corporate data centers to what Carr calls the World Wide Computer: basically the cloud, with a small number of huge general-purpose data centers that provide web services which will be as ubiquitous, networked and shared as electricity is now. This modern cloud computing infrastructure mirrors the structure of the electricity infrastructure: the plant (data center), the transmission network (Internet) and the distribution networks (MAN, (W)LAN) deliver processing power and storage capacity to all kinds of end devices. A nice metaphor, but is it the right one? Is the current power grid architecture able to accommodate ever-rising energy demands? And by taking the current power grid architecture as the example for IT infrastructure architecture, do we really get a sustainable, robust IT infrastructure? Not everybody follows this line of reasoning.
Instead of relying only on centralized data centers, there is another solution, another paradigm, that focuses much more on intelligent, localized delivery of services: the nano data center, as discussed in an earlier blog entry. These two kinds of solutions can even be mixed in a hybrid service model, where a macro, centralized delivery model works together with a localized delivery model, using intelligent two-way digital technology to control power supply. An example of this hybrid approach is being developed in Amsterdam by the OZZO project. The OZZO Project’s mission is to ‘Build an energy-neutral data center in the Amsterdam Metropolitan Area before the end of 2015. This data center produces by itself all the energy it needs, with neither CO2 emission nor nuclear power.’
According to OZZO, the data center should function within a smart, three-layer grid: for data, electrical energy, and thermal energy. These are preconditions, as is full encryption of all data involved, for security and privacy reasons. Potential and actual energy generation and reuse at a given point in the grid serve as drivers for data center or node allocation, size, capacity, and use. Processing and storage move fluidly over the grid in response to real-time local facility and energy intelligence, always seeking optimum efficiency.
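The placement logic described above can be sketched in a few lines: route a workload to the grid node with the largest current energy surplus. The node names, the kW figures, and the surplus metric below are illustrative assumptions for the sake of the sketch, not OZZO's actual system or API.

```python
# Sketch of the "data follows energy" idea: place processing where
# spare locally generated energy (e.g. solar) is currently greatest.
# Node names and numbers are hypothetical.

nodes = {
    "amsterdam-1": {"generation_kw": 120, "load_kw": 80},
    "amsterdam-2": {"generation_kw": 60,  "load_kw": 70},
    "haarlem-1":   {"generation_kw": 90,  "load_kw": 40},
}

def surplus(stats: dict) -> float:
    """Locally generated power minus current consumption, in kW."""
    return stats["generation_kw"] - stats["load_kw"]

def place_workload(nodes: dict) -> str:
    """Choose the node whose energy surplus is largest right now."""
    return max(nodes, key=lambda name: surplus(nodes[name]))

print(place_workload(nodes))  # -> haarlem-1 (surplus of 50 kW)
```

In a real grid this decision would of course also weigh network distance, node capacity, and the cost of moving the data itself; the point is only that energy availability, not central capacity, drives allocation.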
The motto of OZZO is ‘Data follows energy’. In their HotColdFrozenData™ concept, an intelligent distinction is made between high-frequency-use data and low-frequency-use data. On average, offices and individuals use and change 11% of their data intensively, i.e., every day (hot); 15% is seldom accessed (cold); and 74% is practically never looked at any more (frozen). Special software can classify data streams in real time. After classification and segmentation, data is deduplicated, consolidated, and stored separately on appropriate media. Data can move from hot to cold to frozen, but frozen data can also become hot again at times.
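A minimal sketch of such a classifier, assuming tiering is driven by access recency: data touched within a day is hot, within 90 days cold, and otherwise frozen. The thresholds and the tier-to-media mapping are assumptions for illustration; OZZO's actual classification software is not described in detail here.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical tier thresholds (not OZZO's published values).
HOT_WINDOW = timedelta(days=1)
COLD_WINDOW = timedelta(days=90)

@dataclass
class DataObject:
    name: str
    last_access: datetime

def classify(obj: DataObject, now: datetime) -> str:
    """Assign a storage tier based on how recently the object was used."""
    age = now - obj.last_access
    if age <= HOT_WINDOW:
        return "hot"     # fast media, kept close to the consumer
    if age <= COLD_WINDOW:
        return "cold"    # slower, cheaper media
    return "frozen"      # archival media, possibly powered down

now = datetime(2012, 6, 1)
objects = [
    DataObject("report.docx", datetime(2012, 5, 31)),
    DataObject("q1-archive.zip", datetime(2012, 4, 1)),
    DataObject("photos-2003", datetime(2003, 8, 15)),
]
for obj in objects:
    print(obj.name, "->", classify(obj, now))
```

Because classification is just a function of access history, re-running it periodically lets objects migrate between tiers in both directions, matching the observation that frozen data can become hot again.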
In this way a sustainable Information Smart Grid is built, based on several kinds of nodes. OZZO is not following the evolutionary path described by Nicholas Carr: a traditional scale-up of capacity through centralization and a simple-minded reach for economies of scale, neglecting the trade-offs of growing management complexity at the central node and capacity issues in the networks (the Internet and the power grid). In answering the question ‘How do you build an energy-neutral data center?’, or ‘How do you eat an elephant?’, OZZO opts for a divide-and-conquer strategy. By creating a new architecture with different types of nodes (‘data centers’), they aim to build a sustainable distributed grid and avoid the issues that accompany a centralization approach.