In September, The New York Times ran a provocative front-page story about the data centers that power the Internet. “A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness,” the story said.
In fact, the data centers that store our YouTube videos, run our Google searches, and process our eBay bids use about 2 percent of all electricity in the nation.
The Times reported that, in some data centers, up to 90 percent of this electricity is simply wasted.
The story was no doubt an eye-opener for the 51 percent of Americans who were under the assumption that “the cloud” has something to do with the weather. Far from some meteorological phenomenon, the cloud is in fact a massive collection of warehouses jammed with rows and rows of power-sucking machines.
But once you’ve gotten past the fundamental realization that the cloud is a hulking, polluting, physical thing, there’s another story to tell. It’s the one about how some of the more forward-thinking Internet companies are coming up with wildly creative ways to cut down on all that waste. Facebook is building its latest data center at the edge of the Arctic Circle. An industry consortium is sponsoring a “server roundup” and handing out rodeo belt buckles to the Internet company that can take the largest number of energy-leeching comatose servers offline. And Google has saved huge amounts of energy by allowing its data center workers to wear shorts and T-shirts.
Why shorts and T-shirts? Well, let’s back up. There are two main ways that server farms hog power. One is running the servers. The other is keeping them cool so they don’t overheat and crash. Amazingly, many data centers expend as much energy on cooling as on computation, or more. The Uptime Institute, a private consortium that tracks data-center industry trends, estimates that a decade ago companies spent an average of 1.5 times as much energy cooling their servers as running them. These days cooling typically takes closer to 80 or 90 percent as much energy as the computing itself: a big improvement, though still ugly. But these statistics don’t reflect only the big Internet companies whose servers make up what we usually think of as the cloud. Rather, they’re weighed down by smaller and less Internet-focused firms that run their own data centers, often using antiquated equipment and discredited practices.
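Those cooling ratios map onto the industry’s standard efficiency yardstick, power usage effectiveness (PUE): total facility energy divided by the energy that reaches the computing gear. A quick sketch of the arithmetic, using the article’s figures (the helper function here is my own illustration; real PUE also counts overhead beyond cooling, such as power distribution losses):

```python
def pue(it_energy_kwh, cooling_energy_kwh, other_overhead_kwh=0.0):
    """Power usage effectiveness: total facility energy / IT energy.

    A PUE of 1.0 would mean every watt goes to computation;
    real facilities are always higher.
    """
    total = it_energy_kwh + cooling_energy_kwh + other_overhead_kwh
    return total / it_energy_kwh

# A decade ago: cooling took ~1.5x the energy of computation.
print(round(pue(100, 150), 2))  # 2.5
# A typical figure today: cooling at ~85 percent of computation.
print(round(pue(100, 85), 2))   # 1.85
# Google's reported average: cooling at ~12 percent of computation.
print(round(pue(100, 12), 2))   # 1.12
```

The gap between 2.5 and 1.12 is the whole story of this article in one number: for every kilowatt-hour of actual computing, the older facility burned an extra 1.5 on cooling, while Google burns an extra 0.12.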
By contrast, Google’s state-of-the-art data centers use, on average, just 12 percent as much energy to cool their servers as they do to power them. How does Google do it? I spoke with Joe Kava, the company’s vice president of data centers, to find out. He says the company has improved the layout of its data centers by using precise testing to figure out the exact times and locations at which energy is being lost. The fundamental principle is to keep hot air separate from cold air. The more they mix, the more energy you waste. Data centers typically do this by creating “hot aisles” behind the servers and “cold aisles” in front. Kava says Google quickly realized that a lot of heat was escaping the hot aisles when technicians had to go into them to work on the machines. So it began customizing its servers to put all the plugs on the front. Now you can fix them all from the cold aisle.
At Google, though, the term “cold aisle” is something of a misnomer.
“There’s a fallacy that data centers have to be like meat lockers — they have to be cold when you walk in,” Kava says. “And it’s absolutely not true. The servers all run perfectly fine at much warmer temperatures.” Instead of keeping its data centers at the traditional 60 to 65 degrees Fahrenheit, Google’s often run at a balmy 80 degrees. That’s why its technicians now work in shorts and T-shirts. Keeping the temperature a little higher has also allowed the company to switch from using giant, power-hungry chillers to an evaporative “free cooling” system that relies on outside air. On the few days when it’s too hot for free cooling, the company can switch to chillers temporarily — or simply shift its computing load to a different data center. Google’s data center in Belgium was among the first anywhere built without any chillers at all.
Other companies have begun to experiment with data centers in frigid climates. Facebook, for instance, is building its newest facility in Lulea, Sweden, just 25 miles south of the Arctic Circle. The temperature there has not risen above 86 degrees Fahrenheit for more than 24 hours since 1961, according to the Telegraph. There are limitations to that strategy — data centers are most effective when they’re located near large population centers — and cold air brings its own challenges for server maintenance. But the location, on the site of an old paper mill, has the environmental advantage of being on a river that produces plenty of relatively clean hydroelectric power.
Google’s center in chilly Hamina, Finland, also draws on hydroelectricity, and it’s cooled with water from the adjacent Baltic Sea.
Big companies like Google and Facebook have an advantage when it comes to data-center efficiency, thanks to deep pockets, economies of scale and the fact that they often have thousands of servers doing essentially the same thing. Forbes’ Dan Woods argued persuasively that the Times piece’s biggest error was to conflate these big Internet companies’ server farms with the much smaller (and necessarily less efficient) data centers run by the IT departments of non-Internet companies. Much of what the Times called “waste” is actually essential redundancy for companies that can’t afford a total server crash.
Still, experts say less-cutting-edge firms can also clean up their acts significantly. When Google first began reporting its eye-popping power usage effectiveness numbers in 2008, Kava recalls, “people said, ‘Wow, that’s amazing, only Google could do that.’ So then we started publishing a whole series of white papers on how you could apply some very simple and cost-effective techniques to any data center.”