Data centers: reducing energy usage through heat reuse and distribution

In delivering compute power and storage capacity there are apparently two opposite approaches: the cloud computing mega data center ("bigger is better") and the local nano data center ("small is beautiful"). The current "bigger is better" model of cloud computing still leads to enormous capital expenditures (although shifted from customer to supplier), to problems with power usage, cooling and power supply (Critical Areas for Transmission Congestion), and to structural vulnerabilities in the resiliency and availability of the infrastructure. The alternative p2p data center approach raises questions about delivering enough "horse power", network capacity, network supply and the governance of such a distributed system.

A new paper from Microsoft Research, "The Data Furnace: Heating Up with Cloud Computing", adds a new perspective to this discussion. The authors propose that servers, dubbed Data Furnaces (DFs), should be distributed to office buildings and homes, where they would act as a primary heat source.

According to the authors, the problem of heat generation in data centers can be turned into an advantage: computers can be placed directly in buildings to provide low-latency cloud computing for their offices or residents, and the heat they generate can be used to heat the building. By piggy-backing on the energy already used for building heating, the IT industry could keep growing for some time without increasing its carbon footprint or its load on the power grid and generation systems.

The Data Furnaces would be micro data centers on the order of 40 to 400 CPUs, integrated into the heating system of the house or office building. Examining a number of U.S. climate zones and operational scenarios, the researchers estimate that, compared to the US$400 cost per year for a server in a conventional data center, the savings per DF per year range from $280 to $324.
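To make the comparison concrete, here is a back-of-the-envelope reading of those figures, purely as an illustration using the numbers quoted above (the paper's own cost model is more detailed):

```python
# Back-of-the-envelope reading of the quoted figures
# (annual, per Data Furnace; purely illustrative, not taken from the paper itself).
conventional_cost = 400                    # US$/year for a server in a conventional data center
savings_low, savings_high = 280, 324       # estimated savings per DF per year

net_cost_high = conventional_cost - savings_low     # $120/year
net_cost_low = conventional_cost - savings_high     # $76/year
print(f"Effective annual cost per DF: ${net_cost_low} to ${net_cost_high} "
      f"(versus ${conventional_cost} in a conventional data center)")
```

In other words, on these figures the effective yearly cost of a server acting as a heat source drops to roughly a quarter of its conventional data center cost.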

In the paper the researchers focus on homes as an illustrative example, but as they note, a similar approach could be used to heat water tanks, office buildings, apartment complexes, vegetable farms, and large campuses with central facilities. See also the blog entries on nano data centers and the OZZO distributed data center concept. For the combination of greenhouse farming and data centers (symbiotic data centers), have a look at the site of Parthenon, a company in the Netherlands that is working on this concept.

IT infrastructure and energy infrastructure are becoming more and more intertwined.

Datacenters, big is ugly?

In the book "The Big Switch" Nicholas Carr makes a historical analysis to build the idea that data centers and the Internet are following the same developmental path that electric power followed 100 years ago. Back then, companies stopped generating their own power and plugged into the newly built electric grid, fed by huge generic power plants. The big switch is from today's proprietary corporate data centers to what Carr calls the World Wide Computer: basically the cloud, with some huge generic data centers providing web services that will be as ubiquitous, networked and shared as electricity is now. This modern cloud computing infrastructure follows the same structure as the electricity infrastructure: the plant (data center), the transmission network (Internet) and the distribution networks (MAN, (W)LAN) deliver processing power and storage capacity to all kinds of end-devices. A nice metaphor, but is it the right one? Is the current power grid architecture able to accommodate the ever-rising energy demands? And by taking the current power grid architecture as an example for the IT infrastructure architecture, do we really get a sustainable, robust IT infrastructure? Not everybody follows this line of reasoning.

Instead of relying only on centralized data centers there is another solution, another paradigm, much more focused on intelligent, localized delivery of services: the nano data center, as discussed in an earlier blog entry. These two kinds of solutions can even be mixed in a hybrid service model in which a macro, centralized delivery model works together with a localized delivery model, using intelligent two-way digital technology to control the power supply. An example of this hybrid approach is being developed in Amsterdam by the OZZO project. The OZZO Project's mission is to 'Build an energy-neutral data center in the Amsterdam Metropolitan Area before the end of 2015. This data center produces by itself all the energy it needs, with neither CO2 emission nor nuclear power.'

According to OZZO the data center should function within a smart, three-layer grid: one for data, one for electrical energy and one for thermal energy. These are preconditions, as is full encryption of all data involved, for security and privacy reasons. Possible and actual energy generation and reuse at a given point in the grid serve as drivers for data center or node allocation, size, capacity and use. Processing and storage move fluidly over the grid in response to real-time local facility and energy intelligence, always looking for optimum efficiency.
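A minimal sketch of what such energy-driven placement could look like; the node names, figures and the simple "largest local surplus wins" rule are assumptions for illustration, not OZZO's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    energy_surplus_kw: float   # locally generated or reusable energy available right now
    free_capacity: float       # fraction of compute capacity currently idle

def place_workload(nodes, required_capacity):
    """'Data follows energy': run the workload on a node that can fit it
    and currently has the largest local energy surplus."""
    candidates = [n for n in nodes if n.free_capacity >= required_capacity]
    return max(candidates, key=lambda n: n.energy_surplus_kw) if candidates else None

nodes = [
    Node("edge-node-1", energy_surplus_kw=12.0, free_capacity=0.4),
    Node("edge-node-2", energy_surplus_kw=3.5, free_capacity=0.7),
    Node("central-node", energy_surplus_kw=0.0, free_capacity=0.9),
]
print(place_workload(nodes, required_capacity=0.3).name)  # edge-node-1
```

The point of the sketch is the inversion: capacity follows the available energy in the grid, rather than energy being hauled to one fixed central site.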

The motto of OZZO is 'Data follows energy'. In their HotColdFrozenData(™) concept, an intelligent distinction is made between high-frequency-use and low-frequency-use data. On average, offices and individuals use and change 11% of their data intensively, i.e. every day (hot); 15% is seldom accessed (cold); and 74% is practically never looked at any more (frozen). Special software can classify data streams in real time. After classification and segmentation, data is deduplicated, consolidated, and stored separately on appropriate media. Data can move from hot to cold to frozen, but frozen data can also become hot again at times.
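A minimal sketch of such a hot/cold/frozen classification, keyed only on how recently an object was accessed; the thresholds are hypothetical, since the post does not describe OZZO's actual classification software:

```python
from datetime import datetime, timedelta

def classify(last_access, now=None, hot_days=1, cold_days=90):
    """Classify a data object as hot, cold or frozen by how recently it was accessed.
    Thresholds are illustrative: touched within a day -> hot,
    within roughly three months -> cold, otherwise frozen."""
    now = now or datetime.now()
    idle = now - last_access
    if idle <= timedelta(days=hot_days):
        return "hot"      # keep on fast media, close to the user
    if idle <= timedelta(days=cold_days):
        return "cold"     # deduplicate, move to cheaper media
    return "frozen"       # archive; can still be promoted back to hot later

print(classify(datetime.now() - timedelta(hours=3)))   # hot
print(classify(datetime.now() - timedelta(days=400)))  # frozen
```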

In this way a sustainable Information Smart Grid is built, based on several kinds of nodes. OZZO is not following the evolutionary path described by Nicholas Carr: a traditional scale-up of capacity through centralization and a simple-minded reach for economies of scale, neglecting the trade-offs of growing management complexity at the central node and capacity issues in the network (Internet and power grid). In answering the question 'Build an energy-neutral data center', or 'How to eat an elephant', OZZO chooses a divide-and-conquer strategy. By creating a new architecture with different types of nodes ('data centers') it wants to create a sustainable distributed grid and deal with the issues that accompany a centralization approach.

Beyond the clouds: Nano Data Centers

To sell and/or explain the concept of cloud computing, its rise is often compared with the history of industrial power supply.

By the end of the nineteenth century, the (geographically) restrictive and inflexible direct connection of manufacturing machines to local power sources such as waterwheels, windmills and steam engines gave way to electrically powered machinery getting its power through power lines from far-away power plants. The shape and character of factories changed dramatically during the twentieth century, as electrically powered devices could be sited almost anywhere. By centralizing the power supply, benefits in economies of scope and economies of scale were claimed. Nowadays this modern electrical power infrastructure is composed of several standard service blocks: power plants, power transmission networks and power distribution networks that deliver electrical power to all kinds of end-devices.

In the delivery of processing power and storage capacity we see the same kind of development: local, private data centers with proprietary infrastructure solutions are being transformed into huge, centralized, generic data centers that offer a (semi-)public utility service, to fulfill the promise of economies of scale and to transform capital expenditure into operational expenditure. This modern cloud computing infrastructure is composed of several standard service blocks: the data center, the transmission network (Internet) and the distribution networks (MAN, (W)LAN) that deliver processing power and storage capacity to all kinds of end-devices.

But the metaphor doesn’t end here …

Smart grid

"A smart grid delivers electricity from suppliers to consumers using two-way digital technology to control appliances at consumers' homes to save energy, reduce cost and increase reliability and transparency. It overlays the electricity distribution grid with an information and net metering system. Such a modernized electricity network is being promoted by many governments as a way of addressing energy independence, global warming and emergency resilience issues. Smart meters may be part of a smart grid, but alone do not constitute a smart grid. A smart grid includes an intelligent monitoring system that keeps track of all electricity flowing in the system. … When power is least expensive the user can allow the smart grid to turn on selected home appliances such as washing machines or factory processes that can run at arbitrary hours. At peak times it could turn off selected appliances to reduce demand." (source: Wikipedia)
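As a minimal sketch of that last idea, assume the grid exposes day-ahead prices per hour; the price figures and the "run in the cheapest N hours" rule below are illustrative:

```python
def appliance_schedule(hourly_prices, run_hours_needed=4):
    """Run a deferrable appliance (e.g. a washing machine) during the cheapest
    hours of the day and keep it off during peak-price hours."""
    cheapest = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    selected = set(cheapest[:run_hours_needed])
    return ["on" if h in selected else "off" for h in range(len(hourly_prices))]

# Hypothetical day-ahead prices (cents/kWh) for eight hours
prices = [22, 18, 9, 7, 8, 25, 30, 28]
print(appliance_schedule(prices))
# ['off', 'on', 'on', 'on', 'on', 'off', 'off', 'off']
```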

And one step beyond …

µGrid

"A µGrid or micro-grid is a localized grouping of electricity sources and loads that normally operates connected to and synchronous with the traditional centralized grid, but can disconnect and function autonomously as physical and/or economic conditions dictate." (source: Wikipedia)
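A minimal sketch of the connect/disconnect decision such a micro-grid controller makes; the price threshold and signal names are assumptions for illustration:

```python
def microgrid_mode(grid_available, grid_price, local_generation_kw, local_load_kw,
                   max_acceptable_price=50):
    """Decide whether the micro-grid stays connected to the central grid or
    islands itself, based on physical and economic conditions."""
    can_self_supply = local_generation_kw >= local_load_kw
    if not grid_available:
        # Physical condition: the central grid is down, island if the local load can be carried
        return "islanded" if can_self_supply else "islanded (shedding load)"
    if grid_price > max_acceptable_price and can_self_supply:
        # Economic condition: grid power is too expensive right now, run autonomously
        return "islanded"
    return "grid-connected"

print(microgrid_mode(grid_available=True, grid_price=80,
                     local_generation_kw=120, local_load_kw=90))  # islanded
```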

So instead of relying only on centralized power plants there is another solution, another paradigm, much more focused on intelligent, localized delivery of services. These two kinds of solutions can even be mixed in a hybrid service model in which a macro, centralized delivery model works together with a localized delivery model, using intelligent two-way digital technology to control the power supply.

Back to cloud computing

Historically, information processing has relied on a client-server model, with the percentage of work done by the client or the server continuously shifting depending on the technical possibilities of the moment. This model has shaped applications such as the web, e-mail, ERP, CRM, etc. Cloud computing is in that sense nothing more and nothing less than bringing the client-server model to another level or scale: transforming the client-server model into a client-data center model. But if we compare cloud computing with the developments in power supply, aren't we forgetting a solution? What about the intelligent, localized processing power and storage capacity unit, the IT µGrid, or the smart data center grid that brings the macro data center grid and the µ data center grid together?

There are already some very familiar solutions available that head in the direction of a smart grid: grid computing (such as the BOINC initiative) for CPU-bound, number-crunching programs, or peer-to-peer (P2P) networks for content delivery. Of course these involve just PCs, but what if we scale up and replace the word PCs with data centers or data closets?

Although it has received rather little attention so far, there is an initiative for a managed peer-to-peer model that forms a distributed data center infrastructure. This concept is called Nano Data Centers (NaDa). NanoDataCenters is a European Union research program, part of the so-called Seventh Framework Program (FP7). According to the website of the project: "The NADA architecture is a new distributed computing paradigm that relies on small ('nano') sized interconnected data centres spread along the network edges. The architecture aims to address the concerns and the shortcomings of monolithic datacenters that are the present day norm." "NaDa (Nanodatacenters) is the next step in data hosting and in the content distribution paradigm. By enabling a distributed hosting edge infrastructure, NaDa can enable the next generation of interactive services and applications to flourish, complementing existing data centres and reaching a massive number of users in a much more efficient manner. The NaDa objective is to tap into these underutilised resources at the edge and use them as a substitute/aid to expensive monolithic data centers." Several project results are already available and can be downloaded.

In delivering processing power and storage capacity there are apparently two opposite approaches: "bigger is better" and "small is beautiful". The current "bigger is better" model of cloud computing still leads to enormous capital expenditures (although shifted from customer to supplier), to problems with power usage, cooling and power supply (Critical Areas for Transmission Congestion), and to structural vulnerabilities in the resiliency and availability of the infrastructure. The alternative p2p data center approach raises questions about delivering enough "horse power", network capacity, network supply and the governance of such a distributed system.

It looks like the comparison between the power infrastructure and the IT infrastructure remains very interesting, so paying close attention to developments in smart grids and µGrids is worthwhile.
