How to Choose and Design Connectors for Liquid Cooling Systems


The rapid development of information technology has made efficient heat dissipation in data centers increasingly urgent, and liquid cooling has gradually become the solution of choice. Liquid cooling system design is a core part of this shift, directly affecting system safety and reliability. When retrofitting server rooms and servers, it is often necessary to minimize the extent of the retrofit and maintain compatibility with existing air-cooling systems. Cold plate liquid cooling has become the mainstream liquid cooling method thanks to its high reliability, easy maintenance and low retrofit cost.

In cold plate liquid cooling solutions for data centers, whole-cabinet delivery and decoupled delivery are two different business models and engineering approaches. Data centers must weigh differences in demand, cost, operational efficiency, ecosystem maturity, long-term planning and other factors when choosing between the two.

Whole-cabinet delivery refers to the equipment supplier integrating servers, storage devices, network devices, power supply systems, and the necessary piping and connections into a single cabinet delivered as a unit to the client. Whole-cabinet delivery enables rapid deployment, reduces on-site installation and configuration effort, and accelerates data center construction.

All components of a whole-cabinet system are usually supplied by the same manufacturer to ensure compatibility and reliability. However, because the liquid cooling system is tightly coupled with the IT equipment, replacing or upgrading individual components is more complicated than with decoupled delivery. Whole-cabinet products also place high demands on the R&D stage: manufacturers need strong integration and design capabilities, system validation cycles are longer, R&D costs are higher, and technical barriers easily arise between manufacturers.

Decoupled delivery refers to providing and installing liquid-cooled cabinets and IT equipment separately: the liquid-cooled cabinets are deployed first, and IT loads are then added gradually as demand grows. This gives the user side greater flexibility to rack servers in phases and to match IT equipment from different vendors to a specific liquid-cooled cabinet, with customization and optimization driven by business needs.

Compared with whole-cabinet delivery, decoupled delivery can reduce product development costs by establishing unified design standards, lowering the technical requirements on manufacturers and easing supply chain pressure.

After decoupling, the whole system is more white-boxed, which promotes healthy development of the liquid cooling ecosystem and saves costs for users. Decoupled delivery therefore helps push the entire liquid cooling ecosystem toward standardization, economy and scale.

With the growing scale deployment of cold plate cooling technology, data center servers have become deeply coupled with their cabinets. However, the variety of product forms, the low degree of standardization, and the absence of a unified interface specification have greatly hindered the sustainable development of the liquid cooling industry. How to decouple liquid-cooled cabinets from server nodes and promote further scale deployment of liquid cooling has become a central industry concern.

Large-scale deployment of cold plate liquid cooling also depends on high reliability, and liquid leakage is a safety hazard that cannot be ignored. To address this risk, the industry has developed a number of measures, such as deploying leak detection systems, designing redundant structures and emergency response mechanisms, and strengthening personnel training and regular maintenance. Even more important, however, is properly sealing the key components of the liquid cooling system.
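The leak detection systems mentioned above can be sketched as a simple polling loop. The sensor names, reading scale and alarm threshold below are illustrative assumptions, not part of any specific product:

```python
# Minimal sketch of a rack-level leak detection loop.
# Sensor IDs, readings and the 0.5 threshold are illustrative assumptions.

def check_leak(sensor_readings, threshold=0.5):
    """Return the IDs of sensors whose conductivity reading exceeds
    the alarm threshold (a wet leak-rope sensor reads high)."""
    return [sid for sid, value in sensor_readings.items() if value > threshold]

def respond_to_leak(leaking_sensors):
    """Emergency response: a real system would close solenoid valves
    on the affected branch and notify the building management system."""
    for sid in leaking_sensors:
        print(f"ALARM: leak detected at {sid}; isolating branch")

readings = {"rack01-cdu": 0.1, "rack02-manifold": 0.8}
leaks = check_leak(readings)
if leaks:
    respond_to_leak(leaks)
```

In practice the response path (valve isolation, BMS alarm) matters more than the detection itself, which is why the article stresses redundancy and emergency response mechanisms alongside detection.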

Solving potential liquid leakage problems and decoupling the components of cold plate liquid cooling systems are key to scaling up this technology.

Cold plate liquid cooling systems contain a large number of quick disconnects, which are also frequent sites of liquid leakage; standardization work on them is of great significance to the long-term stable operation of these systems.


A quick disconnect (QD) is a tool-free connection component that can be repeatedly connected and disconnected. Fast, easy and safe, it enables fluid transfer and shutoff, and is the core component connecting servers to cabinets.

Decoupling cold plate liquid cooling relies on multiple initiatives: standardized product definitions for fluid connectors, a mature third-party testing system, and validation of products at scale.

The fluid connector carries a liquid or gas medium and has a two-way self-sealing function, with no leakage during insertion or disconnection, playing a vital role in the agile fluid-transport requirements of the liquid cooling system.

Its simple installation and quick operation greatly improve the maintainability of electronic equipment. For cold plate liquid cooling decoupling, the fluid connector's key specifications (type, material, tolerances and so on) must be clarified to ensure its safety and reliability, and to prevent the equipment damage, business interruption and even safety incidents that coolant leakage can cause.

Based on the mode of operation, liquid connectors fall into two main types: UQD (Universal Quick Disconnect) and QDC (Quick Detachable Connector). As shown in Figures 1 and 2, UQD and QDC are two key liquid cooling connection technologies, mainly used for fast, sealed docking between server liquid cooling circuits and the manifold (liquid cooling distributor).

Figure 1: UQD for liquid cooling system
Figure 2: QDC for liquid cooling system

As data center power density increases, traditional air cooling is gradually being replaced by liquid cooling, and efficient, reliable connections between hot and cold circuits are central to cooling efficiency and system stability.

Now, let's look at the advantages and disadvantages of these two connection methods, and how to choose between them.

QDC and UQD are identical in internal seal design and spool construction, differing only in the locking mechanism and floating structure.

QDC connectors require manual insertion and removal of the male and female ends. When inserted by hand, the connector is held by its own locking mechanism, achieving quick connection and locking while ensuring a reliable seal.

A QDC usually has a fixed male end and a movable female end, used together with a hose to provide flexible compensation.

| Item | UQD | QDC |
| --- | --- | --- |
| Density | High-density cabinets, modular data centers | Small to medium deployments or customized scenarios |
| Automation | Automated insertion and removal (e.g. robotic operation and maintenance) | Manual operation with high flexibility |
| Maintenance frequency | Low (less manual intervention) | High (requires periodic inspection or replacement) |
| Space | Suits space-constrained racks with high insertion fault tolerance | Needs ample operating space for manual adjustment |
| Standardization | OCP certification, global specifications, cross-vendor operation | No uniform standards |
| Assembly efficiency | Supports online plugging and unplugging, significantly reducing O&M time | Requires precise calibration, time-consuming, prone to leakage from mishandling |
| Cost | Low | High |
| Reliability | Anti-leakage, anti-pollution design, OCP verification | Depends on field workmanship; stability affected by human factors |

Data centers are gradually moving toward automation and intelligence. The defining features of the universal quick disconnect (UQD), plug-and-play operation without precise alignment, fit the automated operation and maintenance scenarios of future data centers and match the trend toward efficient, intelligent, high-density facilities. They also save manual operating space, meet the stringent requirements that high-computing-power environments place on high-density cabinets, and reduce operation and maintenance complexity.

Here is a comprehensive analysis of the UQD from multiple dimensions, including key technical requirements, selection criteria and industry practices.

As a component in direct contact with the coolant, UQD has extremely strict requirements for sealing performance.

On the one hand, the UQD must maintain the connected seal of the liquid cooling system: even when the coolant flow path (or equivalent flow diameter) does not match the working flow, the seal should be designed with a degree of redundancy so that it is not washed out, and so that impurities in the coolant do not scratch or adhere to the seal and cause leakage.

On the other hand, it must also seal during plugging and unplugging: the UQD should support "dry disconnection", sealing automatically during connection and separation so that no liquid leaks.

Based on this verification, a load of less than 0.1 kg per 45 sq. ft. can be designed for, to ensure tunnel furnace brazing is feasible; otherwise, it is necessary to consider whether the current product suits the tunnel brazing process. These empirical values are offered for reference when selecting a process.

In addition, the UQD is plugged and unplugged frequently during operation and maintenance, so its shell material and coating must be highly abrasion-resistant to extend its service life and reduce the frequency of replacement and maintenance, lowering O&M time and cost. UQDs are therefore generally made of high-strength materials such as stainless steel or copper.

Floatability is an important factor in achieving precise mating of UQD.

In cold plate liquid cooling systems, UQDs are locked in place by mating with external structures. Because mechanical components inevitably carry dimensional tolerances from manufacturing, tolerance accumulation during docking can make it difficult to position the rack exactly in the cabinet, and the UQD may fail to align for connection.

Therefore, UQDs often require additional guides or floating structures between the male and female ends for alignment and error compensation, ensuring that the quick disconnects between server racks and cabinets can be plugged in smoothly.
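The tolerance-accumulation check behind this float requirement can be illustrated with a standard stack-up calculation. The per-component tolerance values and the 1 mm float budget below are illustrative assumptions, not vendor figures:

```python
import math

# Worst-case vs. RSS (root-sum-square) stack-up of positional tolerances
# between a server rack and a cabinet manifold. All values are
# illustrative assumptions for the sake of the calculation.

tolerances_mm = [0.5, 0.3, 0.4, 0.2]  # per-component positional tolerances

worst_case = sum(tolerances_mm)                    # simple worst-case sum
rss = math.sqrt(sum(t**2 for t in tolerances_mm))  # statistical (RSS) estimate

float_budget_mm = 1.0  # assumed radial float of the connector

print(f"worst-case misalignment: {worst_case:.2f} mm")
print(f"RSS misalignment:        {rss:.2f} mm")
print("float sufficient" if rss <= float_budget_mm
      else "needs an extra floating module")
```

Here the RSS estimate (about 0.73 mm) fits within the 1 mm float budget even though the worst-case sum (1.4 mm) does not, which is why some designs add a separate floating module rather than relying on the connector's own float alone.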

The flow capacity of the UQD affects the overall cooling efficiency of the liquid cooling system. Flow capacity is defined as the maximum flow rate of the medium through the connector at a fixed pressure difference across the cold plate's two connectors, and can be expressed by the flow coefficient (Cv or Kv).

The larger the flow coefficient, the stronger the UQD's flow capacity, and accordingly the smaller the local resistance the medium must overcome when flowing through the connector. Existing cold plate liquid cooling systems tightly couple servers with specific cabinets, coolant, piping and other components, and the total flow resistance on the secondary side must be matched to the pump head. Optimizing the flow resistance of the UQD therefore reduces the system's local resistance, leaving less resistance for the pump to overcome and improving cooling efficiency. Flow capacity can be improved in many ways, including internal structure optimization and material optimization.
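The relationship between flow coefficient, flow rate and pressure drop described above follows the standard valve-sizing convention Q = Kv * sqrt(dP / SG) (metric: Q in m^3/h, dP in bar, SG = specific gravity relative to water). The Kv and flow values below are illustrative assumptions, not datasheet figures:

```python
import math

# Flow coefficient (Kv) relations for a quick disconnect, metric convention.
# Kv and the required flow below are illustrative assumptions.

def flow_rate(kv, dp_bar, sg=1.0):
    """Q = Kv * sqrt(dP / SG): flow through the connector (m^3/h)."""
    return kv * math.sqrt(dp_bar / sg)

def pressure_drop(kv, q_m3h, sg=1.0):
    """dP = SG * (Q / Kv)^2: local resistance the pump must overcome (bar)."""
    return sg * (q_m3h / kv) ** 2

kv = 1.5   # assumed flow coefficient of the connector
q = 1.2    # required coolant flow, m^3/h

print(f"pressure drop at {q} m^3/h: {pressure_drop(kv, q):.3f} bar")
# Doubling Kv cuts the pressure drop by 4x (quadratic relationship):
print(f"with Kv doubled:           {pressure_drop(2 * kv, q):.3f} bar")
```

The quadratic dependence is why even a modest improvement in connector flow coefficient noticeably reduces the resistance the secondary-side pump must overcome.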

UQD design usually also requires consideration of mounting and misalignment tolerances and the design of reliable UQD mating mechanisms (e.g. guides).


Radial floating: manufacturers adopt different solutions; for example, the Huawei CQDB achieves ±2.5 mm compensation through a floating module, while the Stäubli CGD relies on internal structural design to achieve ±1 mm compensation.

Axial floating: the UQDB supports only 0 to +1 mm of axial tolerance, and some users need to install an additional floating module to compensate for the error.

International standard: the UQDB standard published by OCP (Open Compute Project) in 2020 defines a common interface specification and is supported by Google, Microsoft, IBM and others. The standard specifies interface dimensions, float tolerances (±1 mm radial float) and sealing requirements, but does not cover internal structural details, so vendors' products vary.


In data center liquid cooling systems, connectors are core components for efficient, safe operation. Selection should focus on sealing, pressure resistance, flow matching and ease of operation and maintenance, combined with the differences between cold plate and immersion scenarios, to choose the right solution. As liquid cooling becomes widespread, connectors are evolving toward intelligence and standardization, becoming a key support for data centers' low-carbon, high-density transformation.

Intelligent connectors embed IoT chips to self-diagnose connection status (e.g. mating-cycle counts, seal wear warnings) and link with the BMS.
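The self-diagnosis logic such an intelligent connector might expose can be sketched as follows; the rated cycle life and warning ratio are illustrative assumptions, not values from any product or standard:

```python
from dataclasses import dataclass

# Sketch of intelligent-connector self-diagnosis: count mating cycles and
# warn before the seal reaches the end of its rated life. The rated life
# (10,000 cycles) and 80% warning threshold are illustrative assumptions.

@dataclass
class ConnectorHealth:
    mating_cycles: int = 0
    rated_cycles: int = 10_000   # assumed rated insertion/removal life
    warn_ratio: float = 0.8      # warn once 80% of rated life is consumed

    def record_mating(self) -> None:
        """Increment the cycle counter on each insertion/removal event."""
        self.mating_cycles += 1

    def seal_wear_warning(self) -> bool:
        """True once accumulated cycles approach the rated seal life,
        so the BMS can schedule replacement before a leak develops."""
        return self.mating_cycles >= self.rated_cycles * self.warn_ratio

health = ConnectorHealth(mating_cycles=7_999)
health.record_mating()
print(health.seal_wear_warning())  # 8,000 cycles reaches the 80% threshold
```

A real device would report this state over its IoT link to the BMS rather than printing it, but the threshold logic is the same.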

Interface dimensions (e.g. DN8/DN12) and electrical signals are being unified to support plug-and-play (e.g. the OCP liquid cooling standard).

Lightweight materials (e.g. carbon fiber reinforced plastic) reduce connector weight by about 30%, helping data center PUE approach 1.05.
