Since the days of Seymour Cray, liquid cooling has been known as the coolest way to cool a computer – think of a pleasant stroll in 70F weather compared to a freezing swim in 70F water. Why is water so much cooler? Because water carries heat away about 24 times more efficiently than air does.
Computer hobbyists have been using liquid cooling for some time, so why don’t all servers use liquid cooling? Possibly the largest barrier to adoption of liquid cooled servers is the lack of reliable designs and the challenges of delivering liquid cooling to standard racks in a datacenter. The standard industry thinking assumes that the only option is to deploy inherently inefficient air-cooled servers.
Today, due to the difficulty of transferring heat via air exchange, we have a legacy of inefficient and costly server room designs. The server room uses computer room air conditioner (CRAC) units that cool the air and push it out under a raised floor, through perforated tiles, and into the aisles of server racks. Finally, air is drawn into the servers themselves by five to ten high-speed internal fans that push it over the computer components. Not only is the air-cooled server room very noisy, it also requires substantial upfront cost and expends a great deal of energy chilling water and pushing air. And what if an internal server fan fails? A failed fan raises the internal server temperature, which directly correlates with increased failure rates of other server components such as disk drives and memory modules. Air-cooled servers are sufficient for today’s needs, but we can do much better.
With LiquidMips, liquid cooling for servers is coming into its own. LiquidMips servers use the best practices of both liquid-immersion and cold-plate cooling to transport heat from computer components into a fluid that carries heat out of the computer enclosure. LiquidMips servers have no moving cooling components inside the case. Heat is transferred from the computer components to an externally cooled fluid via a fluid-to-fluid heat exchanger that is chilled by a standard chiller system, a cooling tower, or geothermal cooling loops. In a typical server facility today, the entering water temperature of a server room air conditioner must be 50F (10C) to achieve an outlet air temperature of 60F (16C), which in turn yields a server inlet temperature of 77F (25C). Liquid cooling will maintain those same internal server temperatures with only an 86F (30C) water inlet temperature. Thus, liquid cooling allows a 36F (20C) increase in chilled inlet water temperature with no corresponding increase in server component temperature. Keeping the heat in a liquid medium all the way to the ultimate point of heat rejection dramatically increases cooling efficiency and, more importantly, allows much more latitude in both the cooling source and the ultimate server component temperatures, so datacenter operations can be optimized for their designed loads.
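The temperature figures above can be sanity-checked with a few lines of arithmetic. This is just a unit-conversion sketch using the numbers from the paragraph above, not LiquidMips software:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Air-cooled path: the CRAC needs 50F (10C) chilled water to hold a
# 77F (25C) server inlet temperature. Liquid cooling holds the same
# internal server temperatures with 86F (30C) inlet water.
air_cooled_water_inlet_f = 50
liquid_cooled_water_inlet_f = 86

headroom_f = liquid_cooled_water_inlet_f - air_cooled_water_inlet_f
headroom_c = f_to_c(liquid_cooled_water_inlet_f) - f_to_c(air_cooled_water_inlet_f)

print(headroom_f)         # 36 degrees F of extra headroom
print(round(headroom_c))  # 20 degrees C
```

That 20C of headroom is what lets a liquid-cooled facility lean on warmer, cheaper cooling sources such as cooling towers or geothermal loops.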
To accommodate transitional datacenter models, options will be available that allow groups of LiquidMips servers to use an end-of-row cabinet fluid-to-air heat exchanger, for use in traditional office and server room settings as well as in newer ambient-air-cooled datacenters.
Cool. Silent. Reliable.
Built to Virtualize.
Virtualized server systems fit hand-in-glove with LiquidMips sealed servers. LiquidMips servers are optimized as hosts for virtualized computing and the software-defined datacenter. In plain language, virtualized computing creates a “software computer” that runs on any available physical computer hardware. These virtual software computers may be started, stopped, duplicated, or moved to another physical computer at the press of a button. Virtualization significantly improves the quality and availability of service to users. When physical computer hardware fails, the virtual software computers are simply transitioned to another available physical computer. With virtualization, fail-in-place becomes a practical strategy for dealing with hardware failures, reducing the operational resources and costs required for reliable datacenter operation.
Cool to Last.
Excessively high temperatures are excessively unfriendly to your servers. In fact, cooler servers last longer. To reduce server room cooling costs, the industry is working to build components that are more tolerant of heat, so that the hot aisle of a server room can run at 122+F (50+C). The gotcha for datacenters that take the high-temperature route is ensuring that lower cooling operating costs do not create server failures that require additional capital expenditures. It would be a shame to save money on cooling and spend all that savings and more on new servers to replace the ones that suffered heat death. While studies have shown that “your results may vary,” it has long been established that electronics last longer at cooler operating temperatures.
Liquid cooling does a much better job of controlling the amount and consistency of heat experienced by servers. In traditional air-cooled designs, a disk drive at the front of the server may run at 80F (27C) while a disk drive at the back runs at 99F (37C). Servers lower in the rack or closer to cooling units will be cooler than servers at the top of the rack or in racks at the center of the datacenter. In a liquid-cooled server, temperatures vary very little within the case, the rack, or the datacenter. This allows much finer control over efficiency: server temperatures can be raised without causing hardware component failures.
Interestingly, various types of servers, or even components within servers, can be cooled to different temperatures to optimize performance, energy usage, and component life. Servers that are primarily compute have a much higher tolerance for heat than disk drives do. LiquidMips servers allow you to organize servers into separate cooling loops with different temperature set points. Compute servers might be allowed to run at 122F (50C) while storage servers are held to 86F (30C), creating cooling efficiencies as well as extending the lifetime of spinning storage. Try that in an air-cooled datacenter.
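To make the separate-loop idea concrete, here is a minimal, hypothetical sketch of how servers might be grouped into cooling loops with different set points. The loop names, server names, and lookup function are illustrative assumptions, not a real LiquidMips API:

```python
# Hypothetical illustration of per-loop temperature set points:
# compute loops run hot, storage loops run cool to protect disks.
COOLING_LOOPS = {
    "compute": {"set_point_f": 122, "servers": ["hpc-01", "hpc-02"]},
    "storage": {"set_point_f": 86, "servers": ["nas-01", "nas-02"]},
}

def loop_for_server(name):
    """Return the cooling loop name and set point a server belongs to."""
    for loop_name, loop in COOLING_LOOPS.items():
        if name in loop["servers"]:
            return loop_name, loop["set_point_f"]
    raise KeyError(f"{name} is not assigned to a cooling loop")
```

For example, `loop_for_server("hpc-01")` would report the compute loop and its 122F set point, while `loop_for_server("nas-01")` would report the cooler storage loop.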