Monday, October 15, 2018

LiquidMips servers started out as an idea, or really a question: “How do we create reliable data storage for 50+ years?” We tried today’s servers and found that they don’t work well for long-term data storage and secure computing. Today’s servers require almost as much electricity to cool as they do to run. They are vulnerable to environmental threats such as hurricanes and fires, as well as hands-on maintenance errors and human mischief. They require expensive buildings, controlled environments, human access, tall fences, and uniformed guard staff to keep them safe and operational. That just doesn’t work for long-term data storage and secure computing.


Our answer?



LiquidMips servers are a new class of server designed for software-defined data centers: liquid-cooled, sealed computers built to be installed in standard server rooms as well as completely non-standard office, warehouse, or remote locations. LiquidMips servers are the most reliable, secure, and cost-effective servers available, and they can be cooled with liquid-to-air, liquid-to-liquid, or even geothermal heat exchangers.

This is all new.  We understand your skepticism, so let us explain …


Liquid cooled servers are really cool.

Since the days of Seymour Cray, liquid cooling has been known as the coolest cooling process. Think about a nice stroll in 70F weather compared to a freezing swim in 70F water. Why does the water feel so much colder? Because water carries heat away from your body about 24 times more efficiently than air does.
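
For the curious, here is a back-of-the-envelope sketch of that factor in Python. The conductivities are textbook values; the 3 mm boundary film is purely an illustrative assumption.

K_AIR, K_WATER = 0.026, 0.60     # W/(m*K), textbook thermal conductivities
SKIN_C, AMBIENT_C = 34.0, 21.0   # skin surface vs. 70 F (21 C) ambient
FILM_M = 0.003                   # assumed 3 mm stagnant boundary layer

def heat_flux(k):
    """Conductive heat flux (W/m^2) through the boundary film."""
    return k * (SKIN_C - AMBIENT_C) / FILM_M

print(f"air:   {heat_flux(K_AIR):5.0f} W/m^2")
print(f"water: {heat_flux(K_WATER):5.0f} W/m^2")
print(f"ratio: {K_WATER / K_AIR:.0f}x")   # ~23x, the "24 times" figure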


Computer hobbyists have been using liquid cooling for some time, so why don’t all servers use it?  Possibly the largest barrier to adoption of liquid-cooled servers is the lack of reliable designs and the challenge of delivering liquid cooling to standard racks in a datacenter.  Standard industry thinking assumes that the only option is to deploy inherently inefficient air-cooled servers.


Today, due to the difficulty of transferring heat via air exchange, we have a legacy of inefficient and costly server room designs.  The server room uses computer room air conditioner (CRAC) units that cool the air and push it out under a raised floor, through perforated tiles, and into the aisles of server racks.  Finally, air is sucked into the actual servers by five to ten high-speed internal fans that push it over the computer components.  Not only is the air-cooled server room very noisy, it also requires a lot of upfront cost and expends a lot of energy chilling water and pushing air.  And what if an internal server fan fails?  A failed internal fan raises the internal server temperature, which directly increases the failure rate of other server components such as disk drives and memory modules.  Air-cooled servers are sufficient for today’s needs, but we can do much better.

With LiquidMips, liquid cooling for servers is coming into its own.  LiquidMips servers use the best practices of both liquid-immersion and cold-plate cooling to transport heat from computer components into a fluid that carries the heat out of the computer enclosure.  LiquidMips servers have no moving cooling components inside the case.  Heat is transferred from the computer components to an externally cooled fluid via a fluid-to-fluid heat exchanger that is chilled by a standard chiller system, a cooling tower, or geothermal cooling loops.  In a typical server facility today, the entering water temperature of a server room air conditioner must be 50F (10C) to get an outlet air temperature of 60F (16C), which in turn yields a server inlet temperature of 77F (25C).  Liquid cooling will maintain those same internal server temperatures with an inlet water temperature of only 86F (30C).  Thus, liquid cooling allows a 36F (20C) increase in chilled inlet water temperature with no corresponding increase in server component temperature.  Keeping the heat in a liquid medium all the way to the ultimate point of heat rejection dramatically increases cooling efficiency and, more importantly, allows much more latitude in both the cooling source and the ultimate server component temperatures, so datacenter operations can be optimized for their designed loads.
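
Here is that arithmetic as a quick Python sanity check, using only the temperatures quoted above:

def f_to_c(f):
    """Convert a Fahrenheit temperature to Celsius."""
    return (f - 32) * 5 / 9

AIR_CHILLED_WATER_F = 50   # CRAC entering water temperature
LIQUID_INLET_F = 86        # direct liquid cooling inlet temperature

delta_f = LIQUID_INLET_F - AIR_CHILLED_WATER_F
delta_c = f_to_c(LIQUID_INLET_F) - f_to_c(AIR_CHILLED_WATER_F)
print(f"extra inlet-water headroom: {delta_f} F ({delta_c:.0f} C)")
# -> 36 F (20 C), matching the figure quoted above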


To accommodate transitional datacenter models, options will be available that allow groups of LiquidMips servers to use an end-of-row cabinet fluid-to-air heat exchanger, for use in traditional office and server room settings as well as in newer ambient-air-cooled datacenters.

Cool.  Silent.  Reliable.



Built to Virtualize.


Virtualized server systems fit hand-in-glove with LiquidMips sealed servers.  LiquidMips servers are optimized as hosts for virtualized computing and the software-defined datacenter.  In plain language, virtualized computing creates a “software computer” that runs on any available physical computer hardware.  These virtual software computers may be started, stopped, duplicated, or moved to another physical computer at the press of a button.  Virtualization significantly improves the quality and availability of service to users.  When physical computer hardware fails, the virtual software computers are simply transitioned to another available physical computer.  With virtualization, fail-in-place becomes a practical strategy for dealing with hardware failures, and it reduces the operational resources and costs required for reliable data center operation.
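
As a toy illustration of the fail-in-place idea (the host names and the least-loaded placement policy here are ours, for illustration only, not a hypervisor API):

hosts = {"host-a": ["vm1", "vm2"], "host-b": ["vm3"], "host-c": []}

def fail_in_place(failed, hosts):
    """Evacuate VMs from a failed host onto the least-loaded survivors."""
    orphans = hosts.pop(failed)
    for vm in orphans:
        target = min(hosts, key=lambda h: len(hosts[h]))
        hosts[target].append(vm)
        print(f"{vm}: restarted on {target}")

fail_in_place("host-a", hosts)
print(hosts)   # vm1 and vm2 now run on the surviving hosts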



Cool to Last.


Excessive heat is unfriendly to your servers; cooler servers simply last longer.  To reduce server room cooling costs, the industry is working to build components that are more tolerant of heat, so that the hot aisle of a server room can run at 122+F (50+C).  The gotcha for datacenters that take the high-temperature route is ensuring that lower cooling operating costs do not create server failures that require additional capital expenditures.  It would be a shame to save money on cooling and then spend all that savings, and more, on new servers to replace the ones that suffered heat death.  While studies have shown that “your results may vary,” it has long been established that electronics last longer at cooler operating temperatures.

Liquid cooling does a much better job of controlling the amount and consistency of heat experienced by servers.  In traditional air-cooled designs, a disk drive at the front of the server may run at 80F (27C) while a drive at the back runs at 99F (37C).  Servers lower in the rack or closer to cooling units will be cooler than servers at the top of the rack or in racks at the center of the datacenter.  In a liquid-cooled server, temperatures vary very little within the case, the rack, or the datacenter.  That consistency allows server temperatures to be raised for efficiency without pushing individual hardware components into failure.

Interestingly, various types of servers, or even components within servers, can be cooled to different temperatures to optimize performance, energy usage, and component life.  Servers that are primarily compute have a much higher tolerance for heat than disk drives do.  LiquidMips servers let you organize servers into separate cooling loops with different temperature set points.  Compute servers might be allowed to run at 140F (60C) while storage servers are held to 86F (30C), creating cooling efficiencies while extending the lifetime of spinning storage.  Try that in an air-cooled datacenter.
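
A minimal sketch of what per-loop set points might look like, with the loop names and margin chosen purely for illustration:

LOOP_SETPOINTS_C = {
    "compute": 60,   # 140 F: compute nodes tolerate more heat
    "storage": 30,   # 86 F: cooler loop extends spinning-disk life
}

def loop_ok(loop, measured_c, margin_c=2.0):
    """True if a loop's measured temperature is within its set point."""
    return measured_c <= LOOP_SETPOINTS_C[loop] + margin_c

print(loop_ok("compute", 58.0))  # True: within the compute set point
print(loop_ok("storage", 37.0))  # False: storage loop running too warm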


Ground Connected. Even Better.

What if we could find a better way to move heat away from servers than big, noisy fans and hot air?

So we have warm process water that has been heated by the servers.  What do we do with the heat?  In a standard datacenter configuration, a water chiller removes heat from the process water via a refrigeration process and rejects that heat through an air-cooled or cooling-tower condenser.  In a typical scenario, the chilled water for the air handlers is approximately 40F (22C) cooler than the computers it is cooling.  This refrigeration process alone uses 30-35% of the total energy required to power a server.
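
To put that percentage in perspective, here is a quick Python estimate, assuming an illustrative 100 kW of server load:

IT_LOAD_KW = 100.0        # assumed server (IT) power draw
CHILLER_FRACTION = 0.30   # low end of the 30-35% figure above

chiller_kw = IT_LOAD_KW * CHILLER_FRACTION
print(f"chiller draw: {chiller_kw:.0f} kW on top of {IT_LOAD_KW:.0f} kW IT")
print(f"annual chiller energy: {chiller_kw * 24 * 365 / 1000:.0f} MWh")
# -> 30 kW running continuously, roughly 263 MWh per year for cooling alone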

Computers are not nearly as picky about being hot as we humans are.  Studies have shown that disk drives are reliable and stable at 95F (35C), and motherboard electronics are stable and reliable from 113F (45C) to as high as 140F (60C).  So let’s save energy by cooling computers only to their optimal “happy” temperature.

What if we could find a cooling source that would dissipate excess server heat and not rise above 95F (35C)?  What if that cooling source didn’t require an expensive refrigeration cycle in order to dissipate the heat?  What if that cooling source is available everywhere – in abundance?

Look down.  You are standing on it.

Geothermal cooling is a process in which water heated by the servers is cooled by circulating it through a series of closed-loop pipes installed in the ground or in a body of water.  As the warm water travels through the pipes, its heat is transferred to the ground or water, and the cooled water returns to pick up more excess heat from the servers.  The geothermal loop is typically made of high-density polyethylene, a tough plastic that passes heat efficiently and is extraordinarily durable, with a useful life span of over 200 years.  Geothermal fields are widely considered one of the most environmentally friendly means of cooling.  Ground temperatures differ by location: the average ground temperature in south Texas is about 72F, whereas the ground temperature in North Dakota is about 42F.  Ground temperature, lot size, geology, and topography all affect the length of geothermal loop required for cooling.  In most locations, water-side economizers can take advantage of cold seasonal or daily temperature swings to supplement geothermal cooling.
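
For a rough sense of scale, here is a loop-length estimate in Python.  The watts-per-meter figure is a commonly cited ballpark that varies widely with soil and climate, so treat this as illustration, not design guidance:

SERVER_HEAT_KW = 50.0     # assumed heat load to reject
W_PER_M_OF_LOOP = 50.0    # assumed loop capacity; depends on soil/climate

loop_m = SERVER_HEAT_KW * 1000 / W_PER_M_OF_LOOP
print(f"approximate loop length: {loop_m:.0f} m ({loop_m * 3.28:.0f} ft)")
# Colder ground (North Dakota's ~42 F) rejects more watts per meter than
# warm ground (south Texas's ~72 F), shortening the required loop.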

The geothermal jewel in the rough is this: when a server is directly liquid cooled, it can be cooled effectively without a refrigeration cycle at all.  The result is a 75-90% reduction in cooling costs, in a system that is more environmentally friendly, less expensive, less complicated, and more reliable than standard chiller or free-air datacenter cooling.  Cool.


LiquidMips Solutions

• Increase server density to 3x-4x that of current “high-density” designs

• Increase physical as well as EMI/RFI security

• Reduce cooling costs by 25% to 95% 

• Reduce labor costs associated with computer service 

• Allow for non-traditional, very-low-cost data center designs

• Make geothermal cooling of servers practical 

• Allow for lights-out data centers located outside traditional urban areas
