
Thursday, February 18, 2010

Databases and power management, not a perfect fit

source: anandtech.com
In our last article, I showed that current power management does not seem to work well with the Windows Scheduler. We got tons of interesting suggestions and superb feedback, as well as several excellent academic papers from two universities in Germany which confirm our findings and offer a lot of new insights. More about that later. The thing that is really haunting me once again is that our follow-up article is long overdue. And it is urgent, because some people feel that the benchmark we used undermines all our findings. We disagree, as we chose the Fritz benchmark not because it was real-world, but because it let us control the amount of CPU load and the number of threads so easily. But the fact remains, of course, that the benchmark is hardly relevant for any server. Pleading guilty as charged.

So how about SQL Server 2008 Enterprise x64 on Windows 2008 x64? That should interest a lot more IT professionals. We used our "Nieuws.be" SQL Server test; you can read about our testing methods here. That is the great thing about the blog: you do not have to spend pages on benchmark configuration details :-). Hardware configuration details: a single Opteron 2435 2.6 GHz running in the server we described here. This test is as real-life as it gets: we test with 25, 50, 100 and so on users which fire off queries at an average rate of one per second. Our vApus stress test makes sure that all those queries are not sent at the same time but within a certain time delta, just like real users. So this is much better than putting the CPU under 100% load and measuring maximum throughput. Remember, in our first article we stated that the real challenge of a server is to offer a certain number of users a decent response time, preferably at the lowest cost. And the lowest cost includes the lowest power consumption, of course.
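To make that load pattern concrete, here is a minimal Python sketch of the same idea: a handful of simulated users each firing roughly one query per second, with their start times spread over a time delta instead of all landing at once. This is not vApus itself; run_query() is a hypothetical stand-in for a real SQL Server call, and the latency bookkeeping is only illustrative.

```python
# Simulated users firing ~1 query/second each, desynchronized by random jitter.
import random
import threading
import time

NUM_USERS = 25          # vary: 25, 50, 100, ...
TEST_SECONDS = 60

latencies = []
lock = threading.Lock()

def run_query():
    # Placeholder for e.g. a pyodbc call against SQL Server 2008.
    time.sleep(random.uniform(0.01, 0.05))

def user(user_id):
    # Spread the first query over a time delta so users are not synchronized.
    time.sleep(random.uniform(0.0, 1.0))
    deadline = time.time() + TEST_SECONDS
    while time.time() < deadline:
        start = time.time()
        run_query()
        with lock:
            latencies.append(time.time() - start)
        # Average one query per second, with jitter around the mean.
        time.sleep(random.uniform(0.5, 1.5))

threads = [threading.Thread(target=user, args=(i,)) for i in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(latencies)} queries, average response time "
      f"{sum(latencies) / len(latencies) * 1000:.1f} ms")
```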
While I am keeping some of the data for the upcoming article, I would like to draw your attention to a few very particular findings when comparing the "balanced" and "performance" power plans of Windows 2008. Remember, the balanced power plan is the one that should be the best one: in theory it adapts the frequency and voltage of your CPU to the demanded performance with only a small performance hit. And when we looked at the throughput or queries-per-second figures, this was absolutely accurate. But throughput is just throughput. Response time is the one we care about.
Let us take a look at the graph below. The response time and power usage of the server when set to "performance" (maximum clock all the time) are set equal to one. The balanced power and response time figures are thus relative to the numbers we saw in "performance" mode. Response time is represented by the columns and the first Y-axis (on the left); power consumption is represented by the line and the second Y-axis (on the right).
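For clarity, this is all the normalization amounts to; the numbers below are made up purely to illustrate the arithmetic, not our actual measurements.

```python
# Each "balanced" result is divided by the corresponding "performance" result.
perf = {"response_ms": 120.0, "power_w": 280.0}      # hypothetical baseline run
balanced = {"response_ms": 240.0, "power_w": 255.0}  # hypothetical balanced run

rel_response = balanced["response_ms"] / perf["response_ms"]  # e.g. 2.0
rel_power = balanced["power_w"] / perf["power_w"]             # e.g. 0.91

print(f"relative response time: {rel_response:.2f}, relative power: {rel_power:.2f}")
```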
The interesting thing is that reducing the frequency and voltage never delivers more than 10% of power savings. One reason is that we are testing with only one six-core CPU. The power savings would obviously be better on a dual or even quad CPU system. Still, as the number of cores per CPU increases, systems with fewer CPUs become more popular. If you have been paying attention to what AMD and Intel are planning in the next month(s), you'll notice that they are adapting to that trend. You'll see even more evidence next month.
What is really remarkable is that our SQL Server 2008 server took twice as much time to respond when the CPU was using DVFS (Dynamic Voltage and Frequency Scaling) as when it was not. It clearly shows that in many cases, heavy queries were scheduled on cores which were running at a low frequency (0.8 - 1.4 GHz).

I am not completely sure whether CPU load measurements are entirely accurate when you use DVFS (PowerNow!), but the CPU load numbers tell the same story.
The CPU load on the "balanced" server is clearly much higher. Only when the CPU load was approaching 90% was the "balanced" server capable of delivering the same kind of performance as when running in "performance" mode. But then, of course, the power savings are insignificant. So while power management makes no difference to the number of users you can serve, the response time they experience might be quite different. Considering that most servers run at CPU loads much lower than 90%, that is an interesting thing to note.
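If you want to see what the "balanced" plan is doing on your own box while a test runs, a quick sketch like the one below, using the psutil library, samples per-core load and clock frequency once per second. This is only an illustration: on some Windows versions psutil reports a single package frequency rather than one per core, and as noted above the load figures under DVFS should be taken with a grain of salt.

```python
# Sample per-core CPU load and current clock frequency once per second.
import psutil

for _ in range(10):
    loads = psutil.cpu_percent(interval=1.0, percpu=True)
    freqs = psutil.cpu_freq(percpu=True) or []   # may be empty/package-wide
    for core, load in enumerate(loads):
        freq = freqs[core].current if core < len(freqs) else float("nan")
        print(f"core {core}: {load:5.1f}% load @ {freq:7.1f} MHz")
    print("-" * 36)
```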


Thursday, January 21, 2010

The real deal on greening your data center

The computing models and energy-saving practices that will reap the biggest rewards
By Matthew D. Sarrel @eweek.com
A lot of attention these days is being devoted to going green: Save the planet, buy a hybrid, recycle, put lights on timers, don’t waste paper and so on. All of these things will help the environment, but let’s come right out and say it: Going green makes sense when a business saves capital and resources by doing so. A warm feeling at night is not a compelling business reason for going green, but saving millions of dollars on power and HVAC sure is.

Indeed, many businesses have saved significantly by implementing environmentally friendly practices and trimming power consumption. In 2009, organizations including IBM, Sun, the National Security Agency, Microsoft and Google announced that they were building green data centers.
The most recent announcement comes from IBM, regarding what it claims is the world’s greenest data center—a project jointly funded by IBM, New York state and Syracuse University. Announced in May 2009 and constructed in just over six months, the $12.4 million, 12,000-square-foot facility (6,000 square feet of infrastructure space and 6,000 square feet of raised-floor data center space) uses an on-site power generation system for electricity, heating and cooling, and incorporates IBM’s latest energy-efficient servers, computer-cooling technology and system management software.


The press release is filled with all sorts of flowery language about saving the planet and setting an example for others to follow, but about three-fourths of the way through we get to the bottom line: “This is a smart investment … that will provide much needed resources for companies and organizations who are looking to reduce both IT costs and their carbon footprint.”

How can you separate the wheat from the chaff when it comes to designing a green data center? Where does the green-washing end and the true business case begin?
The first thing to do is to understand several key principles of data center design. This ensures that you maintain a focus on building a facility that serves your organization’s needs today and tomorrow.

Build for today and for the future. Of course, you don’t know exactly which hardware and software you’ll be running in your data center five years from now. For this reason, you need a flexible, modular and scalable design. Simply building a big room full of racks waiting to be populated doesn’t cut it anymore.

Types of equipment—such as storage or application servers—should be grouped together for easier management. In addition, instead of cooling one huge area that is only 25 percent full, divide the facility into isolated zones that get populated and cooled one at a time. Most data centers incorporate a hot aisle/cold aisle configuration, where equipment racks are arranged in alternating rows of hot and cold aisles. This practice allows air from the cold aisle to wash over the equipment; the air is then expelled into the hot aisle. At this point, an exhaust vent pulls the hot air out of the data center.

It’s important to measure energy consumption and HVAC usage. Not only will this help you understand how efficient your data center is (and give you ideas for improving efficiency), but it will also help control costs in an environment of ever-increasing electricity prices and put you in a better position to meet the increased reporting requirements of a carbon reduction policy.
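As a back-of-the-envelope illustration (all numbers below are hypothetical), dividing total facility power by IT-equipment power gives the efficiency ratio commonly known as PUE, and multiplying the facility load by your electricity rate shows the annual bill you are trying to shrink:

```python
# Illustrative efficiency and cost figures from measured facility power.
it_load_kw = 200.0        # measured at the PDUs (hypothetical)
facility_load_kw = 380.0  # IT load + cooling, lighting, conversion losses
price_per_kwh = 0.12      # local utility rate in $/kWh (hypothetical)

pue = facility_load_kw / it_load_kw
annual_cost = facility_load_kw * 24 * 365 * price_per_kwh

print(f"PUE: {pue:.2f}")
print(f"Annual electricity cost: ${annual_cost:,.0f}")
```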

Rack density is a very important aspect of modern data center design. Server consolidation and virtualization are leading us toward denser, and fewer, racks. Blades and 1U to 3U servers are the norm. The denser the data center, the more efficient it can be, especially if we’re talking in terms of construction costs per square foot.

However, denser racks mean increased power requirements and the generation of more heat. In the past, a rack might consume 5 kW, whereas today’s denser designs consume 20 kW or more. Conventional HVAC solutions could be used to cool a 5-kW rack, but a 20-kW (or even 30- or 40-kW) rack requires a high-density cooling solution as well. Look to implement rack-level cooling technologies using either water or forced air. The IBM/Syracuse project converts exhaust heat to chilled water that is then run through cooling doors on each rack. A high-density cooling solution such as this removes heat much more efficiently than a conventional system. A study conducted by Emerson in 2009 calculated that roughly 35 percent of the cost of cooling the data center is eliminated by using such a solution.
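To put the Emerson figure in perspective, here is a rough, purely illustrative calculation; the total bill and the share of it spent on cooling are assumptions, and only the 35 percent reduction comes from the study:

```python
# Rough arithmetic on the cooling savings cited above (illustrative numbers).
annual_power_bill = 400_000.0   # hypothetical total electricity bill, dollars
cooling_share = 0.40            # assumed share of the bill spent on cooling
rack_level_saving = 0.35        # reduction cited in the Emerson study

saving = annual_power_bill * cooling_share * rack_level_saving
print(f"Estimated annual saving from rack-level cooling: ${saving:,.0f}")
```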

No more raised floor
Believe it or not, 2010 will toll the death knell for the raised floor. As hot air rises, cool air ends up below the raised floor, where it isn’t doing much good. In addition, raised floors simply can’t support the weight demands placed on them by high-density racks. A 42U rack populated with 14 3U servers can weigh up to 1,000 pounds.

Raised floors are simply not efficient operationally. I had the experience many years ago of building a 10,000-square-foot data center in a large city. Several months after it was built, we began to have intermittent network outages. It took many man-hours to locate the problem: Rats were chewing through the insulation on cables run below the raised floor. Rats aside, additions, reconfigurations and troubleshooting of the cable plant are much easier on your staff when cables are in plain sight.

Many organizations have found that keeping the server room at 68 or even 72 degrees can yield immediate and meaningful cost savings. As much as I like working in a 62-degree room, newer equipment is rated for a higher operating temperature. Check the manufacturer’s specifications on existing equipment before raising the temperature and monitor performance and availability afterward.
Finally, consider switching power from AC to DC, and from 110V to 220V. Power typically starts at the utility pad at 16,000 VAC (volts alternating current), and is converted multiple times to get to the 110 VAC that powers equipment. It is then converted internally to 5 VDC (volts direct current) and 12 VDC. All of this conversion wastes up to 50 percent of the electricity and generates excess heat.
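A simple way to see how those losses stack up is to multiply the efficiency of each conversion stage. The stage names and efficiencies below are assumptions for the sake of illustration, not measured values, but they land in the same "up to 50 percent" ballpark:

```python
# Chained conversion losses compound: overall efficiency is the product
# of the per-stage efficiencies (all stage values are assumed).
stages = {
    "utility transformer (16 kVAC -> 480 VAC)": 0.98,
    "UPS double conversion (AC -> DC -> AC)":   0.88,
    "PDU transformer (480 VAC -> 110 VAC)":     0.96,
    "server power supply (110 VAC -> 12 VDC)":  0.80,
    "on-board regulators (12 VDC -> 5 VDC)":    0.85,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"{name:45s} {eff:.0%}  cumulative {overall:.0%}")

print(f"Roughly {1 - overall:.0%} of the power drawn is lost before it does useful work.")
```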

As the use of DC power gains some traction in data centers, many server manufacturers—including HP, IBM, Dell and Sun—are making DC power supplies available on some or all of their server lines, allowing the machines to run on 48 VDC. Look for server chassis that utilize modular power supplies to make the switch from AC to DC easier.

Matthew D. Sarrel is executive director of Sarrel Group, an IT test lab, editorial services and consulting firm in New York.


Tuesday, January 12, 2010

Self-assembling solar arrays as easy as mixing oil and water


source: arstechnica.com
Modern manufacturing techniques generally require high degrees of control and intervention to get materials linked together in precise configurations. But researchers have become interested in the prospect of self-assembling systems, which can simplify existing manufacturing and allow us to produce devices on the nanoscale. Above a certain size, it's possible to use gravity to drive self-organization; on the nanoscale, it's possible to use chemical processes, like the base pairing of DNA, to drive the assembly process. That leaves an awkward range of devices on the micrometer scale in between that aren't heavy enough for gravity to drive assembly, but are too big to be pushed around by substances like DNA. A paper that will appear in PNAS describes how it's possible to use an oil-water interface to drive the self-assembly of 20 micron silicon solar chips into a functional array.


To give some context, this is a problem that goes well beyond academic interest. The authors, Robert Kneusel and Heiko O. Jacobs, note that the majority of silicon in a typical photovoltaic cell isn't active—it's there to provide structural support. And, although silicon isn't expensive compared to many metals, there are certainly cheaper materials out there that could replace it, lowering the cost of devices. It should also be possible to incorporate small photovoltaic chips into flexible and transparent materials, much as was done with LEDs, which could greatly increase the places where solar devices could be deployed.

The photovoltaic devices used in this case are small silicon cubes, 20-60 µm on a side, with a gold contact on one face. The authors referred to these as "chiplets." They are light enough that the force imparted by gravity is quite small, but heavy enough that Brownian motion shouldn't be a major problem. The authors then set about creating a set of conditions where the free energy of the two interfaces (silicon and gold) would dominate the behavior. So they coated the gold surface with an organic acid to make it hydrophilic, and used an organic methoxy-silane reaction to enhance the hydrophobic properties of the silicon.

When placed in an oil-water mixture, these modified chiplets self-organize into tightly packed arrays at the interface between the two liquids, driven by the interfacial free energy. Put in terms of millijoules per square meter, the gold surface-water interaction is favored by roughly -55 mJ/m2. The silicon surface prefers interacting with the oil by another 7 mJ/m2, making the spontaneous organization quite favorable, and far stronger than the forces of gravity or Brownian motion.
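A quick order-of-magnitude check shows why: the interfacial energy of a single 20-micron gold face against water dwarfs the energy gravity can bring to bear on a chiplet that size. The silicon density and the comparison height in the sketch below are my own assumptions; the 55 mJ/m2 figure is the one quoted above.

```python
# Compare interfacial energy of one chiplet face with its gravitational energy.
side = 20e-6                     # chiplet edge length, m
face_area = side ** 2            # area of one face, m^2
gold_water_energy = 55e-3        # |free energy| of gold-water interface, J/m^2

rho_si = 2330.0                  # density of silicon, kg/m^3 (assumed)
g = 9.81                         # m/s^2
mass = rho_si * side ** 3        # chiplet mass, kg
height = side                    # lift the chiplet by its own edge length (assumed)

e_interface = gold_water_energy * face_area
e_gravity = mass * g * height

print(f"interfacial energy ~ {e_interface:.1e} J")
print(f"gravitational energy ~ {e_gravity:.1e} J")
print(f"ratio ~ {e_interface / e_gravity:.0f}x")   # several thousand to one
```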

Of course, it's simply not feasible to leave the chiplets floating in a solution; they need to be hooked up to a conducting surface if the electricity they produce is to be harvested. Here, things were quite easy: provided the mixture was kept at 95°C, it's possible to find solders that will remain molten. The interaction between the gold and the solder is a whopping 400 mJ/m2 more favorable than the water-gold interaction, meaning the chiplets should spontaneously link up with the solder and displace any water between them.

The actual process the authors developed involved creating a polyethylene terephthalate (PET) polymer surface covered in a thin copper sheet. Squares the size of the chiplets were etched into the PET, and the exposed copper was coated with solder simply by dipping the sheet in a solder bath. With the solder in place, the sheet was drawn slowly through the oil/water/chiplet mixture, and the chiplets spontaneously occupied the solder-filled holes. The authors were able to fill about 98 percent of the surface with chiplets by making several passes through the oil-water mix at 30 mm/second. That may not sound particularly fast, but they were able to assemble about 62,000 chiplets in three minutes.
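For what it's worth, the arithmetic behind that last sentence is simple enough to check:

```python
# Assembly rate implied by the figures quoted in the paper.
chiplets = 62_000
minutes = 3

rate = chiplets / (minutes * 60)
print(f"~{rate:.0f} chiplets placed per second")   # roughly 340/s
```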

The self assembly process (left) and the results (right).
Image courtesy of study author Heiko O. Jacobs.

Once assembled, the researchers simply layered some epoxy on top of the chiplets, locking them into place, and added a second conducting electrode layer. The resulting device operated nearly as efficiently as single, isolated chiplets. The devices could also handle bending without a significant drop in performance; the authors attribute the small differences to the fact that bending the device took some of the chiplets out of direct illumination, reducing their power output.

All told, their device reduces the silicon needed in the final product by a factor of 10, largely replacing it with cheap polymers. The authors also demonstrated devices with different spacing, irregular substrates, and triangular chiplets, showing the technique's flexibility. Greater automation of the process, they suggest, could probably improve the yields and speed of manufacture.

It's difficult to tell how this will play out, because the cost-performance ratio in photovoltaics seems to be changing on a weekly basis, with silicon competing with various forms of thin-film metallic materials. Still, it's likely that different technologies will find homes in specialized applications, and a flexible polymer is likely to have advantages in some use cases.

But the approach itself may be as important as this specific result. Some people working in the area have suggested that the problem with photovoltaics won't be manufacturing enough capacity; it will be hooking what we can manufacture into the grid fast enough, a problem that has led to the suggestion that a form of self-assembling solar paint will ultimately be required. This sort of simplified self-assembly may be a step in that direction.