Thursday, January 21, 2010

The real deal on greening your data center

The computing models and energy-saving practices that will reap the biggest rewards
By Matthew D. Sarrel @eweek.com
A lot of attention these days is being devoted to going green: Save the planet, buy a hybrid, recycle, put lights on timers, don’t waste paper and so on. All of these things will help the environment, but let’s come right out and say it: Going green makes sense when a business saves capital and resources by doing so. A warm feeling at night is not a compelling business reason for going green, but saving millions of dollars on power and HVAC sure is.

Indeed, many businesses have saved significantly by implementing environmentally friendly practices and trimming power consumption. In 2009, organizations including IBM, Sun, the National Security Agency, Microsoft and Google announced that they were building green data centers.
The most recent announcement comes from IBM, regarding what it claims is the world’s greenest data center—a project jointly funded by IBM, New York state and Syracuse University. Announced in May 2009 and constructed in just over six months, the $12.4 million, 12,000-square-foot facility (6,000 square feet of infrastructure space and 6,000 square feet of raised-floor data center space) uses an on-site power generation system for electricity, heating and cooling, and incorporates IBM’s latest energy-efficient servers, computer-cooling technology and system management software.


The press release is filled with all sorts of flowery language about saving the planet and setting an example for others to follow, but about three-fourths of the way through we get to the bottom line: “This is a smart investment … that will provide much needed resources for companies and organizations who are looking to reduce both IT costs and their carbon footprint.”

How can you separate the wheat from the chaff when it comes to designing a green data center? Where does the green-washing end and the true business case begin?
The first thing to do is to understand several key principles of data center design. This keeps the focus on building a facility that serves your organization’s needs both today and in the future. Of course, you don’t know exactly which hardware and software you’ll be running in your data center five years from now, so you need a flexible, modular and scalable design. Simply building a big room full of racks waiting to be populated doesn’t cut it anymore.

Types of equipment—such as storage or application servers—should be grouped together for easier management. In addition, instead of cooling one huge area that is only 25 percent full, divide the facility into isolated zones that get populated and cooled one at a time. Most data centers incorporate a hot aisle/cold aisle configuration, where equipment racks are arranged in alternating rows of hot and cold aisles. This practice allows air from the cold aisle to wash over the equipment; the air is then expelled into the hot aisle. At this point, an exhaust vent pulls the hot air out of the data center.
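To make the zoning idea concrete, here is a minimal sketch of the arithmetic behind cooling only the zones that are actually populated. The zone names, rack counts and per-rack loads are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch: estimate cooling load per populated zone so that
# unpopulated zones can be left uncooled. All figures are assumptions.

ZONES = {
    # zone name: (number of populated racks, average IT load per rack in kW)
    "zone-a": (20, 8.0),   # storage racks
    "zone-b": (12, 15.0),  # virtualization hosts
    "zone-c": (0, 0.0),    # built out but not yet populated
}

def cooling_load_kw(racks: int, kw_per_rack: float) -> float:
    """Roughly, every kW of IT load becomes a kW of heat to remove."""
    return racks * kw_per_rack

for name, (racks, kw_per_rack) in ZONES.items():
    load = cooling_load_kw(racks, kw_per_rack)
    status = "cool this zone" if load > 0 else "leave cooling off for now"
    print(f"{name}: {load:.0f} kW of heat, {status}")
```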

It’s important to measure energy consumption and HVAC. Not only will this help you understand how efficient your data center is (and give you ideas for improving efficiency), but it will also help control costs in an environment of ever-increasing electricity prices and put you in a better position to meet the increased reporting requirements of a carbon reduction policy.
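One common yardstick for this kind of measurement is PUE (power usage effectiveness): total facility power divided by IT power. The article doesn’t name a metric, so treat the following as an illustrative sketch with assumed meter readings and an assumed electricity rate.

```python
# Illustrative sketch: Power Usage Effectiveness (PUE) from metered readings.
# The sample readings and the electricity rate are assumptions.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    return total_facility_kw / it_equipment_kw

total_kw = 1_800.0   # utility meter: servers + cooling + lighting + losses
it_kw = 1_000.0      # PDU/UPS meters: power actually reaching IT equipment

ratio = pue(total_kw, it_kw)
overhead_kw = total_kw - it_kw
rate_per_kwh = 0.12  # assumed electricity price, $/kWh
annual_overhead_cost = overhead_kw * 24 * 365 * rate_per_kwh

print(f"PUE: {ratio:.2f}")
print(f"Non-IT overhead: {overhead_kw:.0f} kW, about ${annual_overhead_cost:,.0f}/year")
```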

Rack density is a very important aspect of modern data center design. Server consolidation and virtualization are leading us toward denser, and fewer, racks. Blades and 1U to 3U servers are the norm. The denser the data center, the more efficient it can be, especially in terms of construction cost, because the same capacity fits in fewer square feet.
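As a rough back-of-the-envelope illustration of that construction-cost point, assume the same total IT load is housed in 5-kW racks versus 20-kW racks. The cost per square foot, space per rack and load figures below are assumptions, not numbers from the article.

```python
# Illustrative arithmetic: construction cost for the same IT capacity at two
# rack densities. Cost per square foot and space per rack are assumptions.

def build_cost(total_it_kw: float, kw_per_rack: float,
               sq_ft_per_rack: float, cost_per_sq_ft: float) -> float:
    racks = total_it_kw / kw_per_rack
    return racks * sq_ft_per_rack * cost_per_sq_ft

TOTAL_IT_KW = 400.0       # same computing capacity in both scenarios
COST_PER_SQ_FT = 1_000.0  # assumed construction cost

sparse = build_cost(TOTAL_IT_KW, kw_per_rack=5.0, sq_ft_per_rack=30.0,
                    cost_per_sq_ft=COST_PER_SQ_FT)
dense = build_cost(TOTAL_IT_KW, kw_per_rack=20.0, sq_ft_per_rack=30.0,
                   cost_per_sq_ft=COST_PER_SQ_FT)

print(f"5 kW racks:  ${sparse:,.0f} to build")
print(f"20 kW racks: ${dense:,.0f} to build")
```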

However, denser racks mean increased power requirements and the generation of more heat. In the past, a rack might consume 5 kW, whereas today’s denser designs consume 20 kW or more. Conventional HVAC solutions could be used to cool a 5-kW rack, but a 20-kW (or even 30- or 40-kW) rack requires a high-density cooling solution, as well. Look to implement rack-level cooling technologies using either water or forced air. The IBM/Syracuse project converts exhaust heat to chilled water that is then run through cooling doors on each rack. A high-density cooling solution such as this removes heat much more efficiently than a conventional system. A study conducted by Emerson in 2009 calculated that roughly 35 percent of the cost of cooling the data center is eliminated by using such a solution.
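To put the Emerson figure in context, here is a rough sketch of the annual savings. Only the 35 percent reduction comes from the study cited above; the rack count, cooling overhead and electricity rate are my own assumptions.

```python
# Rough sketch: annual cooling cost, conventional vs. high-density cooling.
# Only the 35 percent reduction comes from the 2009 Emerson study cited above;
# the rack count, loads, cooling overhead and electricity rate are assumptions.

RACKS = 40
KW_PER_RACK = 20.0
COOLING_KW_PER_IT_KW = 0.6   # assumed conventional cooling overhead
RATE_PER_KWH = 0.12          # assumed electricity price, $/kWh
HOURS_PER_YEAR = 24 * 365

it_load_kw = RACKS * KW_PER_RACK
conventional_cooling_kw = it_load_kw * COOLING_KW_PER_IT_KW
conventional_cost = conventional_cooling_kw * HOURS_PER_YEAR * RATE_PER_KWH

high_density_cost = conventional_cost * (1 - 0.35)  # 35% savings per the study

print(f"Conventional cooling: ${conventional_cost:,.0f}/year")
print(f"High-density cooling: ${high_density_cost:,.0f}/year")
print(f"Savings:              ${conventional_cost - high_density_cost:,.0f}/year")
```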

No more raised floor
Believe it or not, 2010 will sound the death knell for the raised floor. Because hot air rises, cool air ends up below the raised floor, where it isn’t doing much good. In addition, raised floors simply can’t support the weight demands placed on them by high-density racks. A 42U rack populated with 14 3U servers can weigh up to 1,000 pounds.
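A quick floor-loading check makes the point. The 1,000-pound rack weight comes from the paragraph above; the rack footprint and the raised-floor load rating are illustrative assumptions.

```python
# Illustrative check: does a fully populated rack exceed a raised floor's
# load rating? The footprint and floor rating below are assumptions.

RACK_WEIGHT_LB = 1_000.0           # 42U rack with 14 3U servers (from the article)
RACK_FOOTPRINT_SQ_FT = 6.5         # assumed roughly 2 ft x 3.25 ft footprint
FLOOR_RATING_LB_PER_SQ_FT = 125.0  # assumed raised-floor rating

load = RACK_WEIGHT_LB / RACK_FOOTPRINT_SQ_FT
print(f"Floor load under rack: {load:.0f} lb/sq ft "
      f"(rating: {FLOOR_RATING_LB_PER_SQ_FT:.0f} lb/sq ft)")
if load > FLOOR_RATING_LB_PER_SQ_FT:
    print("Exceeds the raised-floor rating; reinforce the floor or go slab.")
```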

Raised floors are simply not efficient operationally. I had the experience many years ago of building a 10,000-square-foot data center in a large city. Several months after it was built, we began to have intermittent network outages. It took many man-hours to locate the problem: Rats were chewing through the insulation on cables run below the raised floor. Rats aside, additions, reconfigurations and troubleshooting of the cable plant are much easier on your staff when cables are in plain sight.

Many organizations have found that keeping the server room at 68 or even 72 degrees Fahrenheit can yield immediate and meaningful cost savings. As much as I like working in a 62-degree room, newer equipment is rated for higher operating temperatures. Check the manufacturer’s specifications on existing equipment before raising the temperature, and monitor performance and availability afterward.
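Before nudging the thermostat up, a simple inventory check helps. The device names, rated maximums and safety margin in this sketch are hypothetical; pull the real numbers from your vendors’ specifications.

```python
# Illustrative sketch: flag equipment whose rated maximum inlet temperature
# is too close to a proposed setpoint. Names and ratings are hypothetical.

PROPOSED_SETPOINT_F = 72.0
SAFETY_MARGIN_F = 15.0  # assumed margin between setpoint and rated maximum

# device name -> manufacturer's rated maximum inlet temperature (F)
RATED_MAX_INLET_F = {
    "old storage array": 80.0,
    "1U web servers": 95.0,
    "core switches": 104.0,
}

for device, rated_max in RATED_MAX_INLET_F.items():
    if PROPOSED_SETPOINT_F + SAFETY_MARGIN_F > rated_max:
        print(f"CHECK: {device} (rated max {rated_max:.0f}F) has too little "
              f"headroom at a {PROPOSED_SETPOINT_F:.0f}F setpoint")
    else:
        print(f"OK:    {device} (rated max {rated_max:.0f}F)")
```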
Finally, consider switching power from AC to DC, and from 110V to 220V. Power typically starts at the utility pad at 16,000 VAC (volts alternating current) and is converted multiple times to get down to the 110 VAC that powers equipment. It is then converted internally to 5 VDC (volts direct current) and 12 VDC. All of this conversion wastes up to 50 percent of the electricity and generates excess heat.
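The waste comes from chaining conversions, each of which loses a slice of the power. Here is a rough sketch of the cumulative effect; the per-stage efficiency figures are assumptions for illustration, not measured values.

```python
# Rough sketch: cumulative efficiency of a chain of power conversions.
# The per-stage efficiency figures are assumptions for illustration.

AC_CHAIN = [
    ("utility transformer (16 kVAC -> 480 VAC)",     0.98),
    ("UPS double conversion (AC -> DC -> AC)",       0.90),
    ("PDU transformer (480 VAC -> 110 VAC)",         0.97),
    ("server power supply (110 VAC -> 12/5 VDC)",    0.75),
]

DC_CHAIN = [
    ("utility transformer (16 kVAC -> 480 VAC)",     0.98),
    ("rectifier to 48 VDC",                          0.94),
    ("server DC-DC converter (48 VDC -> 12/5 VDC)",  0.92),
]

def chain_efficiency(stages):
    eff = 1.0
    for _, stage_eff in stages:
        eff *= stage_eff
    return eff

print(f"AC distribution:     {chain_efficiency(AC_CHAIN):.0%} of power reaches the load")
print(f"48 VDC distribution: {chain_efficiency(DC_CHAIN):.0%} of power reaches the load")
```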

As the use of DC power gains some traction in data centers, many server manufacturers—including HP, IBM, Dell and Sun—are making DC power supplies available on some or all of their server lines, allowing the machines to run on 48 VDC. Look for server chassis that utilize modular power supplies to make the switch from AC to DC easier.

Matthew D. Sarrel is executive director of Sarrel Group, an IT test lab, editorial services and consulting firm in New York.
