Energy Efficient Cooling for the Server Room
Electronics have grown much hotter over the past decade, making cooling a top priority in data centers.
While today's computer processors running large enterprises are smaller, faster, and far more powerful than those of just a few years ago, they are also much hotter, and they are overwhelming cooling systems and utility budgets. In fact, excessive heat was the No. 1 facilities concern among data center managers in a 2006 survey by IT consultancy Gartner Group of Stamford, Conn.
Data centers housing microprocessor-based servers typically consist of row after row of six-to-seven-foot computer racks, each of which can hold about 40 pizza-box-shaped servers and even more circuit boards. But the servers generate far too much heat to fill a rack to capacity. According to Steve Sams, vice president of site and facilities services for IBM Global Technology Services, a fully loaded rack generates about 32kW of heat. A survey of 19 computer rooms totaling 204,000 sq. ft. by Uptime Institute Inc., a Santa Fe, N.M.-based information technology reliability organization, found that, in practice, companies loaded on average only enough hardware to generate 2.1kW of heat per rack. Even so, most data rooms ran too hot.
Although data rooms use sophisticated cooling systems, Uptime said only 8% of the chilled air actually reaches the computer equipment. Most of it bypasses the servers entirely, either escaping through unsealed cable holes, conduits, and floor vents or being blocked by nests of wires under the floor. Few centers are free of hot spots, areas where temperatures exceed requirements, and servers there typically respond by cutting power consumption, which in turn reduces performance.
In the past, data room managers simply added more cooling or moved servers to larger rooms, but higher power costs and tighter budgets have eliminated some of these options. Although the rate of heat buildup may be leveling off, solutions are still needed.
Just about every provider of server racks and IT services wants a piece of the lucrative data center renovation, expansion, and relocation market, worth tens of billions of dollars. One company tackling the problem is American Power Conversion Corp., Newport, R.I., better known for backup power supplies. Instead of running cold air up from the floor as traditionally done, APC supplies cold air from rack-size towers mounted along each row. Each tower is adjusted individually to achieve optimal cooling in its vicinity and is so efficient that users can boost server rack power to 18kW, nearly nine times the average found by Uptime. Another innovation, which carries hot air away rather than releasing it into the room, adds enough cooling capacity to handle a 30kW bank of servers, APC said.
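The "nearly nine times" claim can be checked against the article's own numbers with a quick back-of-envelope calculation. The per-rack figures below (2.1kW average load, 18kW and 30kW cooling capacities) come from the text; the script itself is only an illustrative sketch:

```python
# Back-of-envelope check of the per-rack figures quoted in the article.
avg_rack_load_kw = 2.1       # Uptime Institute's measured average heat per rack
apc_inline_kw = 18.0         # rack power supported by APC's in-row towers
apc_contained_kw = 30.0      # rack power supported with hot-air containment

ratio = apc_inline_kw / avg_rack_load_kw
print(f"In-row cooling supports {ratio:.1f}x the average measured rack load")
# 18 / 2.1 is roughly 8.6, i.e. "nearly nine times," matching the article

headroom = apc_contained_kw - avg_rack_load_kw
print(f"Hot-air containment adds {headroom:.1f}kW of headroom over the average rack")
```

The point of the arithmetic is simply that measured loads (2.1kW) sat far below both rack capacity and the new cooling limits, so the bottleneck was air delivery, not raw chiller capacity.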
"Ten and 15 years ago, we were generating only 1kW to 3kW of heat in a rack-size space," said Sams. "Today, it's 20kW and maxes out at 32kW. We have to be cognizant of what we put where or we will melt the servers in the rack."
Like APC, IBM also has a racking system capable of handling 20kW to 25kW heat loads. Instead of adding in-line air conditioners, IBM redirects the data center's air conditioning through the rack itself. Like APC, it encloses its racks with a roof, but unlike APC it uses a cold rather than hot center aisle and exhausts the heated air into the data center. "We believe the design is 40% to 50% better than [earlier] racking," Sams said. In addition, IBM provides power management software that enables IT managers to adjust power and heat output and direct it to where it's needed. For IT departments seeking to cut power bills and stretch existing assets, such solutions can't come too soon.
[Adapted from “Too Hot for Comfort,” by Alan S. Brown, Associate Editor, Mechanical Engineering, December 2006.]