
ASME Conference Presenter Attendance Policy and Archival Proceedings

2011;():i. doi:10.1115/IPACK2011-NS2.

This online compilation of papers from the ASME 2011 Pacific Rim Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Systems (InterPACK2011) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in the ASME Digital Library and may not be cited as a published paper.


Thermal Management

2011;():1-6. doi:10.1115/IPACK2011-52011.

Modeling fan failures in networking chassis is a challenging task; there are not enough data or literature available to model them accurately. This paper presents a study consisting of both modeling and experimental cases to investigate how to accurately model fan failures. The study includes CFD simulations with different ways of modeling fan failures as well as real-life experimental measurements to verify the simulation concepts. Recommendations are then made about accurate ways of modeling fan failures. Fans have normally been modeled as two-dimensional entities, with the vendor-measured fan curve applied to the fan during modeling. The problem with this kind of fan modeling, especially during fan failures, is that the three-dimensional effect of the rotor and stator blades is not taken into account. In reality, the fan blades present a large obstruction to the flow reversal that occurs due to pressure imbalance during fan failures. In this paper, we start by modeling a single fan in an AMCA wind tunnel. The complete rotor and stator geometry of the fan is modeled. We run an MRF (Multiple Reference Frame) model to generate the fan curve and compare it with the experimental fan curve. After validating the fan curve in the AMCA model for a single fan, the paper discusses three different sets of temperature and flow data: (i) temperature and flow data in a real system with four fans modeled as two-dimensional fans; (ii) temperature and flow data in a real system with four fans modeled as MRF fans (full three-dimensional rotor and stator blade geometry); and (iii) experimental comparisons with the simulated data. Conclusions are drawn from this modeling and experimental data about accurate ways of modeling fans during fan failures in real systems.

Topics: Modeling, Failure
2011;():7-11. doi:10.1115/IPACK2011-52014.

Digital signage systems are large-format displays typically installed in public areas for advertisement and informative publications. This emerging technology is considered a major category of the large-format display market. In general, a digital signage system consists of a flat panel display comprising a high-brightness screen and operating circuits, along with a high-performance embedded computing system in a very small form factor. Such products, however, are accompanied by high heat dissipation from the internal components and are usually exposed to very harsh environments in order to gain more frequent exposure to customers. The installation schemes of the products also vary for different objectives, so a robust thermal design is required to guarantee system reliability, considering corner scenarios within the design space. The objective of the present study is to investigate the effect of the installation environment on the thermal performance of a display assembly resembling a digital signage system. Design criteria for a proper thermal management scheme are proposed. The thermal characteristics of a digital signage system are presented for various operating conditions, and each thermal design parameter is discussed thoroughly to ensure that the reliability requirements of the digital signage system are met.

2011;():13-17. doi:10.1115/IPACK2011-52015.

Power density requirements continue to increase, and the performance of thermal interface materials has not kept pace. Increasing effective thermal conductivity and reducing bondline thickness both reduce thermal resistance. High thermal conductivity materials, such as solders, have been used as thermal interface materials; however, there is a limit to how far the bondline thickness can be reduced because thinner bondlines increase fatigue stress. A compliant thermal interface material is proposed that allows thin solder bondlines, using a compliant structure within the bondline to achieve a thermal resistance below 0.01 cm²·°C/W. The structure uses an array of nanosprings sandwiched between two plates of materials chosen to match the thermal expansion of their respective interface materials (e.g., silicon and copper). The thin solder bondlines between these mating surfaces and the high thermal conductivity of the nanospring layer result in a thermal resistance of 0.01 cm²·°C/W. The nanospring layer is two orders of magnitude more compliant than the solder layers, so the thermal stresses are carried by the nanosprings rather than the solder. The fabrication process and the performance testing performed on the material are presented.

2011;():19-27. doi:10.1115/IPACK2011-52017.

The loop heat pipe (LHP) is a highly efficient cooling device. It has gained great attention in the electronics cooling industry due to its superior heat transport capability, that is, its ability to carry heat over long distances. For this article, a miniature flat loop heat pipe (MFLHP) with a rectangular evaporator was developed, in which the evaporator was combined with the compensation chamber. MFLHPs with different diameters and lengths of the connecting pipeline were selected for a series of experimental studies of their heat transfer characteristics. In these experiments, pure water was used as the working fluid. The studies showed that the heat transport capability of an MFLHP with a 4 mm diameter was better than that of an MFLHP with a 3 mm diameter. At a low thermal resistance of 0.04 °C/W (at 200 W), an optimal length of the connecting pipeline for a particular MFLHP with a 4 mm diameter was identified. Finally, a heat sink incorporating an MFLHP was developed for cooling a graphics processing unit (GPU) with a thermal design power (TDP) of 200 W. The results showed that the GPU heat sink with the MFLHP performed well and satisfied the GPU cooling requirements. Compared to conventional heat pipe solutions, a single MFLHP was able to cope with the high power dissipation, offering the potential for a lighter heat sink.

2011;():29-35. doi:10.1115/IPACK2011-52018.

The heat transfer characteristics of aluminum plate pulsating heat pipes (PHPs) were investigated experimentally. Designs with parallel, square channels as well as different cross-sections and numbers of turns were considered. Acetone was used as the working fluid. Characterization was carried out for various heating orientations, cooling conditions, and internal structures via flow visualization and thermal performance tests. The flow visualization showed that the aluminum plate PHPs maintain the liquid and vapor slug heat transfer characteristics of conventional tubular PHPs. The flow pattern changed from intermittent oscillation to unidirectional circulation. It was also observed that the PHPs' thermal performance improved as the heating power increased. Gravity greatly influenced the thermal performance of the plate PHPs. Increasing the cooling temperature decreased the thermal resistance of the plate PHPs. Increasing the number of turns and the channel cross-sectional area improved the heat transport capability of the plate PHPs for some specific scenarios. A heat sink with a plate PHP was developed for comparison with pure-metal and conventional heat pipe solutions. The results showed that the plate PHP solution performed well and has the potential to replace previous solutions in some cases.

2011;():37-45. doi:10.1115/IPACK2011-52019.

Robust precision temperature control of photonics components is achieved by mounting them on thermoelectric modules (TEMs) which are in turn mounted on heat sinks. However, the power consumption of TEMs is high because high currents are driven through Bi2Te3-based semiconducting materials with high electrical resistivity and finite thermal conductivity. This problem is exacerbated when the ambient temperature surrounding a TEM varies in the usual configuration where the air-cooled heat sink a TEM is mounted to is of specified thermal resistance. Indeed, heat sinks of negligible and relatively high thermal resistances minimize TEM power consumption for sufficiently high and low ambient temperatures, respectively. Optimized TEM-heat sink assemblies reduce the severity of this problem. In the problem considered, total footprint of thermoelectric material in a TEM, thermoelectric material properties, heat load, component operating temperature, relevant component-side thermal resistances and ambient temperature range are prescribed. Provided is an algorithm to compute the unique combination of the height of the pellets in a TEM and the thermal resistance of the heat sink attached to it which minimizes the maximum power consumption of the TEM over the specified ambient temperature range. This optimization maximizes the fraction of the power budget in an optoelectronics circuit pack available for other uses. Implementation of the algorithm is demonstrated through an example for a typical set of conditions.
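
The abstract above describes a minimax sizing problem: choose the pellet height and heat-sink thermal resistance that minimize the worst-case TEM input power over an ambient range. As a rough illustration only, the sketch below applies that minimax idea to the standard lumped thermoelectric-module model with a brute-force grid search; it is not the authors' algorithm, and every property, load, and range value in it is an invented assumption.

```python
# Minimal sketch, NOT the paper's algorithm: brute-force minimax sizing of a
# TEM using the standard lumped thermoelectric-module model. Every value below
# (properties, load, temperatures, search grids) is an illustrative assumption.
import numpy as np

alpha = 0.02      # module Seebeck coefficient, V/K (assumed)
rho_e = 1.0e-5    # pellet electrical resistivity, ohm*m (assumed)
k_te  = 1.5       # pellet thermal conductivity, W/(m*K) (assumed)
A_tot = 1.0e-4    # prescribed total pellet footprint, m^2 (assumed)
N     = 100       # number of pellets (assumed)
Q_c   = 5.0       # component heat load absorbed at the cold side, W (assumed)
T_c   = 300.0     # controlled cold-side temperature, K (assumed)

def tem_power(L, theta_hs, T_amb):
    """Module input power for pellet height L (m), heat-sink resistance
    theta_hs (K/W), and ambient temperature T_amb (K)."""
    R = N**2 * rho_e * L / A_tot          # module electrical resistance
    K = k_te * A_tot / L                  # module thermal conductance
    T_h = T_amb + 1.0                     # initial guess for hot-side temperature
    P = np.inf
    for _ in range(200):                  # damped fixed-point iteration on T_h
        dT = T_h - T_c
        disc = (alpha * T_c) ** 2 - 2.0 * R * (Q_c + K * dT)
        if disc < 0.0:
            return np.inf                 # cannot pump Q_c at this condition
        I = (alpha * T_c - np.sqrt(disc)) / R   # lower-current root
        P = alpha * I * dT + I**2 * R           # electrical input power
        T_h_new = T_amb + theta_hs * (Q_c + P)  # heat sink rejects Q_c + P
        if abs(T_h_new - T_h) < 1e-6:
            break
        T_h = 0.5 * (T_h + T_h_new)
    return P

heights  = np.linspace(0.3e-3, 3.0e-3, 28)   # candidate pellet heights, m
thetas   = np.linspace(0.05, 2.0, 40)        # candidate heat-sink resistances, K/W
ambients = np.linspace(273.0, 338.0, 14)     # prescribed ambient range, K

# Minimize the worst-case (maximum-over-ambient) power over the design grid
best = min(((max(tem_power(L, th, Ta) for Ta in ambients), L, th)
            for L in heights for th in thetas))
print("worst-case power %.2f W at L = %.2f mm, theta_hs = %.2f K/W"
      % (best[0], best[1] * 1e3, best[2]))
```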

2011;():47-57. doi:10.1115/IPACK2011-52020.

Plate fin heat sinks (PFHSs) are widely used to remove heat from microelectronic devices. In the present study, a new type of compound heat sink, named the plate-pin fin heat sink (PPFHS), is employed to improve air cooling performance. Using a CFD numerical method, PPFHSs with five pin cross-section profiles (square, circular, elliptic, NACA 0050, and dropform) and a PFHS were simulated. Two different length scales were adopted to evaluate the performance of the six types of heat sinks, including the PFHS. One is the length scale commonly used by many investigators, which is two times the channel spacing. The other is the length scale suggested by volume averaging theory (VAT), which is four times the average porosity divided by the specific interface area. The influence of pin fin cross-section profile on the flow and heat transfer characteristics is presented in terms of Nusselt number, pressure drop, and overall efficiency. It is found that the Nu number of a PPFHS is at least 35% higher than that of the PFHS used to construct the PPFHS at the same Reynolds number, no matter which length scale is used. It is also revealed that the heat transfer enhancement of the square PPFHS is offset by its excessively high pressure drop, which makes it less efficient than the other types of PPFHS. The circular PPFHS performs similarly to the streamline-shaped PPFHSs when the Reynolds number is not too high; however, with increasing Re the advantage of the circular cross-section diminishes. Using streamline-shaped pins not only decreases the pressure drop of the compound heat sinks considerably but also further enhances heat transfer. However, when the performance of the heat sinks is evaluated with the commonly used length scale, the benefit of the streamline-shaped PPFHSs is somewhat overstated. The VAT-suggested length scale is more reasonable for comparing the performance of different heat sinks, especially when it is difficult to provide a fair and physically meaningful basis for the comparison. In short, the present numerical simulation provides original information on the influence of different pin-fin cross-section profiles on the thermal and hydraulic performance of the new type of compound heat sink and emphasizes the importance of choosing a proper length scale when evaluating heat transfer enhancement, which is helpful in the design of heat sinks.
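
The two length-scale definitions quoted above translate directly into how the Reynolds and Nusselt numbers are formed. A minimal sketch, assuming illustrative values for the channel spacing, porosity, and specific interface area (none taken from the paper):

```python
# Minimal sketch of the two length-scale definitions quoted above; the
# spacing, porosity, and specific interface area are assumed values.
spacing  = 0.003      # plate-fin channel spacing, m (assumed)
porosity = 0.85       # average porosity of the finned volume (assumed)
s_int    = 900.0      # specific interface area (wetted area per volume), 1/m (assumed)

L_common = 2.0 * spacing              # conventional scale: twice the channel spacing
L_vat    = 4.0 * porosity / s_int     # VAT scale: 4 * porosity / specific interface area

def reynolds(u, L, nu_air=1.6e-5):
    return u * L / nu_air

def nusselt(h, L, k_air=0.026):
    return h * L / k_air

# The same velocity and heat transfer coefficient give different Re and Nu
# depending on which scale is used, which is why the comparison shifts.
print(L_common, L_vat, reynolds(5.0, L_vat), nusselt(120.0, L_vat))
```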

Topics: Cooling, Heat sinks
2011;():59-67. doi:10.1115/IPACK2011-52023.

This article is focused on experimental investigation of the single-phase thermal-fluid performance of a manifold microchannel cold plate with integral hierarchical branching channels. The use of a multiphysics topology optimization technique for the development of the studied microchannel structure is briefly reviewed. An experimental test setup is then described followed by measured temperature and pressure results. Specifically, unit thermal resistance and pressure drop values for the hierarchical microchannel jet impingement cold plate are compared with corresponding results for the jet impingement of a flat plate. These experiments confirm that the hierarchical microchannel system provides increased heat transfer with a negligible change in pressure drop when compared with the standard flat plate system.

2011;():69-80. doi:10.1115/IPACK2011-52039.

Passive phase-change thermal spreaders such as vapor chambers have been widely employed to spread the heat from small-scale high-flux heat sources to larger areas. In the present work, a numerical model for ultra-thin vapor chambers has been developed which is suitable for reliable predictions of the operation at high heat fluxes and small scales. The effects of boiling in the wick structure on the thermal performance are modeled and the model predictions are compared with experiments on custom-fabricated devices. The model predictions agree reasonably well with experimental measurements and reveal the input parameters to which thermal resistance and vapor chamber capillary limit are most sensitive.

2011;():81-94. doi:10.1115/IPACK2011-52042.

Submerged jet impingement boiling has the potential to enhance pool boiling heat transfer rates. In most practical situations, the surface may consist of multiple heat sources that dissipate heat at different rates, resulting in a non-uniform surface heat flux. This paper discusses the effect of submerged jet impingement on the wall temperature characteristics and heat transfer for a non-uniform heat flux. A mini-jet impinges on a polished silicon surface from a nozzle with an inner diameter of 1.16 mm. A 25.4 mm diameter thin-film circular serpentine heater, deposited on the bottom of the silicon wafer, is used to heat the surface. Deionized, degassed water is used as the working fluid, and the jet and pool are subcooled by 20°C. Voltage drops between sensor leads drawn from the serpentine heater are used to identify boiling events. Heater surface temperatures are determined using infrared thermography. High-speed movies of the boiling front are recorded and used to interpret the surface temperature contours. Local heat transfer coefficients indicate significant enhancement up to radial locations of 2.6 jet diameters for a Reynolds number of 2580 and up to 6 jet diameters for a Reynolds number of 5161.

Topics: Polishing, Boiling, Silicon
2011;():95-106. doi:10.1115/IPACK2011-52043.

Experimental data for critical heat flux (CHF) during submerged jet impingement boiling of saturated water at sub-atmospheric conditions is presented. Experiments are performed at three sub-atmospheric pressures of 0.176 bar, 0.276 bar, and 0.477 bar with corresponding fluid saturation temperatures of about 57.3 °C, 67.2 °C, and 80.2 °C. Jet exit Reynolds numbers ranging from 0 to 14,000 are considered for two different heater surface finishes at a fixed nozzle to surface spacing of six nozzle diameters. CHF correlations from literature on jet impingement boiling are compared against the experimental data and found to poorly predict CHF under the conditions considered. A CHF correlation that captures the entire experimental data set within an average error of ±3 percent and a maximum error of ±13 percent is developed to serve as a predictive tool for the range of conditions examined.

2011;():107-111. doi:10.1115/IPACK2011-52045.

A technical perspective on a hybrid cooling methodology, comprising pumped water cooling combined with convective air cooling, for densely packaged high-performance computing systems is presented in this paper. The technology has been implemented in the Fujitsu high-end server GS8900 (released in 1999) and is proposed, with technical innovations, for future large-scale computing systems. Design strategies for the cooling systems, including water cooling modules, water flow and distribution units, and air convection architectures, are reviewed, and the advantages and performance of the technology are analyzed. Low-temperature chilled water is utilized in these systems, where each system board is assembled with a liquid cooling unit for cooling the main processor and supporting chips. Memory modules, other devices and components on the system board, as well as power supply units in the rack, are cooled by ambient air convection. Besides the high cooling capability, attention is given especially to the advantages of the technology in terms of greatly lowered power consumption, enhanced system reliability, and increased packaging density; improvements in cooling efficiency and ease of serviceability are emphasized as well.

Topics: Cooling
2011;():113-121. doi:10.1115/IPACK2011-52053.

Synthetic jets are generated by an equal inflow and outflow of fluid into a system. Even though such a jet creates no net mass flux, net positive momentum can be produced because the outflow momentum during the first half of the cycle is contained primarily in a vigorous vortex pair created at the orifice edges, whereas in the backstroke the backflow momentum is weaker, even though mass is conserved. As a consequence, the approach can potentially be utilized for the impingement of a cooling fluid over a heated surface. In the present study, a canonical geometry is presented in order to study the flow and heat transfer of a purely oscillatory jet that is not influenced by the manner in which it is produced. The unsteady Navier-Stokes equations and the convection-diffusion equation were solved using a fully unsteady, two-dimensional finite volume approach in order to capture the complex time-dependent flow field. A detailed analysis was performed on the correlation between the complex velocity field and the observed wall heat transfer. A fundamental frequency, in addition to the jet forcing frequency, was found and attributed to the coalescence of consecutive vortex pairs. In some instances, this vortex pairing can lead to zones of low heat transfer. Two-point correlations showed that the Nusselt number Nu correlates more strongly with the vertical velocity v, although the spatial-temporal dependencies are not yet fully understood. It was found that the Reynolds number and the Strouhal number are sufficient to successfully scale the problem at larger dimensions, and this is presently being exploited in order to design validation experiments using jets large enough to allow careful local measurements.
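
Since the abstract states that the Reynolds and Strouhal numbers are sufficient to scale the problem, here is a minimal sketch of those two groups; the slot width, characteristic velocity, and forcing frequency are assumptions, not values from the paper.

```python
# Minimal sketch of the two governing groups named above; the slot width,
# characteristic velocity, and forcing frequency are assumed values.
W  = 0.01     # jet slot width, m (assumed)
U0 = 2.0      # characteristic jet velocity, m/s (assumed)
f  = 50.0     # forcing frequency, Hz (assumed)
nu = 1.6e-5   # kinematic viscosity of air, m^2/s

Re = U0 * W / nu     # Reynolds number based on slot width
St = f * W / U0      # Strouhal number of the forcing

# Matching Re and St in a geometrically larger rig reproduces the same flow,
# which is the scaling argument behind the planned validation experiments.
print(f"Re = {Re:.0f}, St = {St:.2f}")
```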

2011;():123-131. doi:10.1115/IPACK2011-52056.

This paper describes a novel concept for lateral motion of bubbles in pool boiling, which has the potential to be translated into a liquid propulsion system when used in a closed loop. The lateral motion of the bubbles is achieved through nucleation from cavities on an asymmetric sawtooth profile created on a silicon surface. The surface modification involves etching a 3D sawtooth structure with a nominal angle of approximately 24° using gray-scale lithography. The downstream slope of each sawtooth structure features re-entrant cavity structures that act as controlled nucleation sites. The angle of the surface thus obtained causes a net imbalance of the forces acting in concert on the flow field around the bubbles departing from the surface. The first part of the paper discusses the steps involved in fabricating such a heat sink with a sawtooth structure augmented by re-entrant cavities. This is followed by a description of the experimental facility used for studying the feasibility of the concept. High-speed photography in conjunction with bubble tracking is used to determine the bubble velocities. Results for a subcooled condition show substantial axial bubble velocities of up to 68.5 cm/s near the cavities and a far-field velocity of up to 4 cm/s.

Topics: Motion, Bubbles, Cavities
2011;():133-142. doi:10.1115/IPACK2011-52079.

Army programs have focused on increasing the use of power-dense electronic components to improve system weight, fuel usage, design flexibility, and overall functionality, thus stressing the thermal management requirements. Recent cooling designs have focused on flowing 80–100 °C engine coolant through single-phase microchannel cold plates, but concerns over pumping power, heat dissipation, cold plate temperature inconsistency, and contaminant clogging have prompted interest in two-phase flow in a minichannel cold plate. In the course of this study, both single- and two-phase experiments were conducted with a 6.8 × 2.7 × 0.9 cm offset fin minichannel cold plate using 25 °C, 80 °C, and 99 °C de-mineralized water, with flowrates ranging from 0.33 cm3/s to 45 cm3/s. Heat dissipation from solder-attached chip resistors was incrementally increased from 0 W to more than 1000 W while simultaneously measuring cold plate pressure drop, chip surface temperature, inlet and outlet fluid temperatures, and flowrate. Preliminary results indicate that utilizing a minichannel cold plate with two-phase heat transfer offers the ability to significantly reduce clogging potential, flowrate, and the associated pumping power, while improving thermal resistivity by more than a factor of 4 and temperature consistency by more than a factor of 10. Single- and two-phase correlations were used to compare performance with theoretical values.

Topics: Vehicles, Army, Electronics
2011;():143-151. doi:10.1115/IPACK2011-52089.

Experiments were conducted to determine the influence of local vapor quality on local heat transfer coefficient in flow boiling in an array of microchannels. Additionally, the variation of local heat transfer coefficient along the length and width of the microchannel heat sink for given operating conditions was investigated over a range of flow parameters. Each test piece includes a silicon parallel microchannel heat sink with 25 integrated heaters and 25 temperature sensors arranged in a 5×5 grid, allowing for uniform heat dissipation and local temperature measurements. Channel dimensions ranged from 100 μm to 400 μm in depth and 100 μm to 5850 μm in width; the working fluid for all cases was the perfluorinated dielectric liquid, FC-77. The heat transfer coefficient is found to increase with increasing vapor quality, reach a peak, and then decrease rapidly due to partial dryout on the channel walls. The vapor quality at which the peak is observed shows a strong dependence on mass flux, occurring at lower vapor qualities with increasing mass flux for fixed channel dimensions. Variations in local heat transfer coefficient across the test piece were examined both along the flow direction and in a direction transverse to it; observed trends included variations due to entrance region effects, two-phase transition, non-uniform flow distribution, and channel wall dryout.
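
As context for how local quantities like these are typically reduced, the following minimal sketch computes a thermodynamic-equilibrium local vapor quality from an energy balance and a local heat transfer coefficient from the wall superheat. It is a generic illustration, not the paper's data reduction, and every number in it (flow rate, enthalpies, heat flux, temperatures) is an assumption.

```python
# Generic illustration (not the paper's data reduction): equilibrium local
# vapor quality from an energy balance, and a local heat transfer coefficient
# from the wall superheat. All numbers are assumptions.
m_dot = 5.0e-4     # channel mass flow rate, kg/s (assumed)
q_up  = 20.0       # heat added upstream of the location of interest, W (assumed)
h_in  = 90.0e3     # inlet liquid enthalpy, J/kg (assumed)
h_f   = 105.0e3    # saturated-liquid enthalpy at the local pressure, J/kg (assumed)
h_fg  = 85.0e3     # latent heat of FC-77 at the local pressure, J/kg (assumed)

h_local = h_in + q_up / m_dot          # local bulk enthalpy
x_local = (h_local - h_f) / h_fg       # local vapor quality (negative = subcooled)

q_wall = 5.0e4                         # local wall heat flux, W/m^2 (assumed)
T_wall, T_sat = 372.0, 365.0           # local wall and saturation temperatures, K (assumed)
h_tp = q_wall / (T_wall - T_sat)       # local two-phase heat transfer coefficient

print(f"x = {x_local:.2f}, h = {h_tp:.0f} W/m^2-K")
```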

2011;():153-163. doi:10.1115/IPACK2011-52094.

The transition from single-core to multicore technology has brought daunting challenges for the thermal management of microprocessor chips. The issue of power dissipation in next-generation chips will be far more critical as a further transition from multicore to many-core processors is expected soon. It is very important to obtain a uniform on-chip thermal profile with a low peak temperature for improved performance and reliability of many-core processors. In this paper, a proactive thermal management technique called 'power multiplexing' is explored for many-core processors. Power multiplexing involves redistributing the locations of power-dissipating cores at regular time intervals to obtain a uniform thermal profile with a low peak temperature. Three different migration policies, namely random, cyclic, and global coolest replace, have been employed for power multiplexing, and their efficacy in reducing the peak temperature and thermal gradient on the chip is investigated. A comparative study of these policies has been performed, outlining their limits and advantages from the thermal and implementation perspectives while considering important parameters such as migration frequency. For a given migration frequency, the global coolest replace policy is found to be the most effective of the three policies considered, as it leads to a 10 °C reduction in peak temperature and a 20 °C reduction in maximum spatial temperature difference on a 256-core chip. The proximity of active cores, or the power configuration on the chip, is characterized by a 'proximity index', which emerges as an important parameter representing the spatial power distribution on a chip. The global coolest replace policy optimizes the power map on the chip, taking care of not only the proximity of active cores but also the finite-size effect of the chip and the 3D structure of the electronic package, leading to an almost uniform thermal profile on the chip with a lower average temperature.
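
To make the "global coolest replace" idea concrete, here is a minimal sketch of one migration interval: the hottest active cores hand their workload to the globally coolest idle cores. This is a paraphrase of the policy as described in the abstract, not the authors' implementation, and the temperature map is randomly generated for illustration.

```python
# Sketch of a "global coolest replace" style migration step, paraphrasing the
# policy described above (not the authors' implementation): the hottest active
# cores hand their workload to the coolest idle cores at each interval.
import numpy as np

def global_coolest_replace(temps, active):
    """temps: per-core temperatures; active: boolean map of powered cores."""
    active = active.copy()
    hot_active = sorted(np.flatnonzero(active), key=lambda i: temps[i], reverse=True)
    cool_idle  = sorted(np.flatnonzero(~active), key=lambda i: temps[i])
    for src, dst in zip(hot_active, cool_idle):
        if temps[dst] < temps[src]:       # only migrate to a cooler location
            active[src], active[dst] = False, True
    return active

# Example: 256-core chip with 64 active cores and a random temperature map
rng    = np.random.default_rng(0)
temps  = rng.uniform(50.0, 90.0, 256)
active = np.zeros(256, dtype=bool)
active[rng.choice(256, 64, replace=False)] = True
active = global_coolest_replace(temps, active)
```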

2011;():165-172. doi:10.1115/IPACK2011-52095.

This paper presents results on the performance of a cooling model using a loop heat pipe (LHP) system. In recent years, the ever-increasing demand for high-performance CPUs has led to a rapid increase in the amount of heat dissipated. Consequently, the thermal design of electronic devices needs to consider suitable approaches to achieve high cooling performance in a limited space. The heat pipe concept is expected to serve as an effective cooling system for laptop PCs; however, it suffers from the following problems. The heat transport capability of a conventional heat pipe decreases with a reduction in its diameter or an increase in its length. Therefore, in order to use it as a cooling system for future electronic devices, the above-mentioned limitations need to be removed. Because of its operating principle, the LHP system is capable of transferring a larger amount of heat than conventional heat pipes. However, most LHP systems suffer from problems such as the need to install check valves and reservoirs to avoid the occurrence of counterflow. Therefore, we developed a simple LHP system that can be installed in electronic devices. Under the present experimental conditions (the working fluid was water), with the inside diameter of the liquid and vapor lines kept equal to 2 mm and the distance between the evaporator and condenser equal to 200 mm, it was possible to transport more than 85 W of thermal energy. The thickness of the evaporator was about 5 mm, although it included an internal structure to control the vapor flow direction. Successful operation of this system in inclined positions and its restart capability were confirmed experimentally. In order to make the internal water location visible, the LHP system was reconstructed using transparent material. In addition, to estimate the limit of the heat transport capability of the LHP system with this thin evaporator, the air cooling of the condensing device was replaced by liquid cooling. This transparent LHP system could then transport more than 100 W of thermal energy. However, the experimentally observed growth of bubbles in the reserve area with increasing heat load led to the understanding that, in order to achieve stable operation of the LHP system under high heat load conditions, it is essential to keep enough water in the reserve area and to avoid blocking the inlet with bubbles.

2011;():173-178. doi:10.1115/IPACK2011-52097.

The present study deals with the transient thermal management of electro-optical equipment using phase-change materials (PCMs). These materials can absorb large amounts of heat without a significant rise in their temperature during the melting process. This effect is attractive for use in the passive thermal management of portable electro-optical systems, particularly those where the device is intended to operate in a periodic regime, or where relatively short stages of high power dissipation are followed by long stand-by periods without considerable power release. In the present work, a so-called hybrid heat sink is developed. The heat sink is made of aluminum. The heat is dissipated at the heat sink base and then transferred by thermal conduction to the PCM and to a standard forced-convection air heat sink cooled by an attached fan. The whole system may initially be at some constant temperature below the PCM melting temperature. Then, power dissipation at the heat sink base is turned on. As heat propagates within the heat sink, some of it is absorbed by the PCM, causing a delay in the temperature rise at the heat sink base. Alternatively, the steady-state conditions may be such that the base temperature is below the PCM melting temperature, meaning that all the heat generated at the heat sink base is transferred to the cooling air. Then, the fan is turned off, reducing the heat transfer to the ambient air, and the heat is absorbed into the PCM, resulting in its melting. In both cases, the time it takes the heat sink base to approach some specified maximum allowed temperature is expected to be longer than without the PCM.
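
A rough energy-balance sketch of the delay described above: the extra hold time is roughly the sensible plus latent heat the PCM can absorb divided by the net heat flowing into it. All of the numbers below are invented assumptions, not values from the paper.

```python
# Back-of-envelope estimate of the extra hold time the PCM provides after the
# fan is turned off. All property and power values are assumptions.
m_pcm  = 0.10     # PCM mass, kg (assumed)
c_pcm  = 2.0e3    # PCM specific heat, J/(kg*K) (assumed)
h_fus  = 200e3    # PCM latent heat of fusion, J/kg (assumed, paraffin-like)
dT_sub = 10.0     # rise from initial temperature to the melting point, K (assumed)
Q_in   = 30.0     # heat dissipated at the heat sink base, W (assumed)
Q_loss = 5.0      # residual loss to ambient with the fan off, W (assumed)

E_buffer = m_pcm * (c_pcm * dT_sub + h_fus)   # sensible heating plus full melting, J
t_hold   = E_buffer / (Q_in - Q_loss)         # s
print(f"approximate hold time: {t_hold / 60:.1f} min")
```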

Topics: Heat sinks
2011;():179-184. doi:10.1115/IPACK2011-52099.

A heat spreader is one solution for the thermal management of electronic and photonic systems. By placing the heat spreader between a small heat source and a large heat sink, the heat flux is spread from the former to the latter, resulting in a lower thermal spreading resistance between them. Many types of heat spreaders are known today, with different heat transfer modes, shapes, and sizes. This paper describes a theoretical study that presents fundamental data for the rational use and thermal design of heat spreaders. A two-dimensional, disk-shaped mathematical model of the heat spreader is constructed, and a dimensionless numerical analysis is performed to investigate the thermal spreading characteristics of heat spreaders. From the numerical results, the temperature distribution and the heat flow inside the heat spreaders are visualized, and the effects of the design parameters are clarified. The discharge characteristics of the heat spreaders are also discussed. Moreover, a simple equation is proposed for evaluating heat spreaders.

Topics: Flat heat pipes
2011;():185-191. doi:10.1115/IPACK2011-52106.

This paper describes a measurement method for the in-plane thermal conductivity of printed circuit boards (PCBs). We designed two types of PCBs with several wiring patterns on their surfaces, meaning that the amount of copper on the PCBs differs. We measured their effective thermal conductivity in the thickness direction to investigate the effects of the wiring patterns on the in-plane thermal conductivity of the PCBs. One type is normal-sized PCBs and the other is PCBs about 18 times larger than the normal ones. The experimental results showed that the thermal conductivity of the normal PCBs did not depend on the wiring patterns. On the other hand, the thermal conductivity of the larger PCBs increased with an increasing amount of copper wiring, due to in-plane heat diffusion through the copper wires. We concluded that the effect of the wiring patterns on the in-plane thermal conductivity can be observed with our measurement method. We also performed computational fluid dynamics (CFD) analyses and clarified the correlation between the amount of copper wiring and the in-plane thermal conductivity of the PCBs.
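
The strong in-plane versus through-thickness contrast that makes such a measurement meaningful can be illustrated with simple rule-of-mixtures bounds. The sketch below is a generic illustration with an assumed copper volume fraction and layer conductivities; it is not the boards or the analysis used in the paper.

```python
# Generic rule-of-mixtures illustration (not the paper's boards or analysis):
# copper layers act roughly in parallel in-plane and in series through the
# thickness, which produces the strong anisotropy discussed above.
k_cu, k_fr4 = 390.0, 0.3   # W/(m*K): copper and glass-epoxy laminate
phi_cu = 0.08              # copper volume fraction of the board (assumed)

k_inplane = phi_cu * k_cu + (1.0 - phi_cu) * k_fr4            # parallel bound
k_thick   = 1.0 / (phi_cu / k_cu + (1.0 - phi_cu) / k_fr4)    # series bound (no vias)

print(f"k_in-plane ~ {k_inplane:.1f} W/m-K, k_through-thickness ~ {k_thick:.2f} W/m-K")
```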

2011;():193-202. doi:10.1115/IPACK2011-52116.

Electrical impedance of a two-phase mixture is a function of void fraction and phase distribution. The difference in the electrical conductance and permittivity of the two phases can be exploited to measure electrical impedance for obtaining void fraction and flow regime characteristics. An electrical impedance meter is constructed for the measurement of void fraction in microchannel two-phase flow. The experiments are conducted in air-water two-phase flow under adiabatic conditions. A transparent acrylic test section of hydraulic diameter 780 micrometer is used in the experimental investigation. The impedance void meter is calibrated against the void fraction measured using analysis of images obtained with a high-speed camera. Based on these measurements, a methodology utilizing the statistical characteristics of the void fraction signals is employed for identification of microchannel flow regimes.

2011;():203-212. doi:10.1115/IPACK2011-52121.

Improved performance of semiconductor devices in recent years has resulted in a consequent increase in power dissipation. Hence, thermal characterization of components becomes important from the overall thermal design perspective of the system. This study looks at a high-performance non-isolated point-of-load power module (a DC-to-DC converter) intended for advanced computing and server applications. The thermal characteristics of the module were experimentally analyzed by placing the power module on a bare test board (with no insulation) inside a wind tunnel with thermocouples attached to it. Three devices on this module dissipate power: two FETs (field effect transistors) and an inductor, which can be considered the heat sources. The consolidated power dissipation from the module was calculated by measuring the input voltage and input current while keeping the output voltage and current constant. Temperatures at various points on the module and the test card were recorded for different air flow velocities and overall power dissipation levels. Subsequently, this setup was numerically analyzed using a commercially available computational fluid dynamics (CFD) code with the objective of comparing the results with the experimental data previously obtained.

2011;():213-220. doi:10.1115/IPACK2011-52122.

Spreading of high-flux electronics heat is a critical part of any packaging design. This need is particularly profound in advanced devices where the dissipated heat fluxes have been driven well over 100 W/cm². To address this challenge, researchers at Raytheon, Thermacore and Purdue are engaged in the development and characterization of a low-resistance, coefficient of thermal expansion (CTE)-matched multi-chip vapor chamber heat spreader, which utilizes capillary-driven two-phase heat transport. The vapor chamber technology under development overcomes the limitations of state-of-the-art approaches by combining scaled-down sintered Cu powder and nanostructured materials in the vapor chamber wick to achieve low thermal resistance. Cu-coated vertically aligned carbon nanotubes are the nanostructure of choice in this development. Unique design and construction techniques are employed to achieve CTE-matching with a variety of device and packaging materials in a low-profile form factor. This paper describes the materials, design, construction and characterization of these vapor chambers. Results from experiments conducted using a unique high-heat-flux-capable 1DSS test facility are presented, exploring the effects of various microscopic wick configurations, CNT functionalizations and fluid charges on thermal performance. The impacts of evaporator wick patterning, CNT evaporator functionalization and CNT condenser functionalization on performance are assessed and compared to monolithic Cu wick configurations. Thermal performance is explained as a function of applied heat flux and temperature through the identification of dominant component thermal resistances and heat transfer mechanisms. Finally, the thermal performance results are compared to an equivalent solid conductor heat spreader, demonstrating a >40% reduction in thermal resistance. These results indicate great promise for the use of such novel vapor chamber technology in thickness-constrained, high-heat-flux device packaging applications.

2011;():221-229. doi:10.1115/IPACK2011-52130.

In 2008, IBM reintroduced water cooling technology into its high performance computing platform, the Power 575 supercomputing node/system. Water-cooled cold plates were used to cool the processor modules, which represented about half of the total system (rack) heat load. An air-to-liquid heat exchanger was also mounted in the rear door of the rack to remove a significant fraction of the other half of the rack heat load, the heat load to air. The next generation of this platform, the Power 775 supercomputing node/system, is a monumental leap forward in computing performance and energy efficiency. The compute node and system were designed from the start with water cooling in mind. The result is a system with greater than 96% of its heat load conducted directly to water; a system that, together with a rear door heat exchanger, removes 100% of its heat load to water with no requirement for room air conditioning. In addition to the processor, the memory, power conversion, and I/O electronics conduct their heat to water. Included within the framework of the system is a disk storage unit (disc enclosure) containing an interboard air-to-water heat exchanger. This paper overviews the water cooling system, featuring the water conditioning unit and rack manifolds. Advances in technology over this system's predecessor are highlighted. An overview of the cooling assemblies within the server drawer (i.e., the central electronics complex), the disc enclosure, and the centralized (bulk) power conversion system is also given. Further, techniques to enhance performance and energy efficiency are described.

2011;():231-240. doi:10.1115/IPACK2011-52148.

The next generation of thermal interface materials (TIMs) is currently being developed to meet the increasing demands of high-powered semiconductor devices. In particular, a variety of nanostructured materials, such as carbon nanotubes (CNTs), are interesting due to their ability to provide low-resistance heat transport from device to spreader and compliance between materials with dissimilar coefficients of thermal expansion (CTEs). As a result, nano-thermal interface materials (nTIMs) have been conceived and studied in recent years, but few application-ready configurations have been produced and tested. Over the past year, we have undertaken major efforts to develop functional nTIMs based on short, vertically aligned CNTs grown on both sides of a thin interposer foil and interfaced with the substrate materials via metallic bonding. A high-precision 1-D steady-state test facility has been utilized to measure the performance of nTIM samples and, more importantly, to correlate performance with the controllable parameters. Nearly 200 samples have been tested utilizing myriad permutations of such parameters, contributing to a deeper understanding and optimization of CNT growth characteristics and application processing conditions. In addition, we have catalogued thermal resistance results from a variety of commercially available, high-performance thermal pads and greases. In this paper, we describe our material structures and the parameters that have been investigated in their design. We report the nTIM thermal performance results, which include a best-to-date thermal interface resistance measurement of 3.5 mm²·K/W, independent of applied pressure. This value is significantly better than all commercial materials we tested and compares favorably with the best results reported for CNT-based nTIMs in an application-representative setting.
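
For context on what a 1-D steady-state test facility measures, the sketch below shows the generic reference-bar data reduction (in the spirit of ASTM D5470): linear fits to the metered-bar thermocouples are extrapolated to the joint, and the temperature jump is divided by the heat flux. The bar material, sensor positions, and readings are invented for illustration and are not the facility or data described in the paper.

```python
# Generic reference-bar (1-D steady-state) data reduction in the spirit of
# ASTM D5470; sensor positions and temperatures are invented for illustration
# and are not the facility or data described in the paper.
import numpy as np

k_bar = 390.0    # metered-bar thermal conductivity, W/(m*K) (assumed copper)

# Thermocouple positions (m, measured from the joint on each side) and readings (degC)
z_hot,  T_hot  = np.array([0.015, 0.010, 0.005]), np.array([61.0, 58.9, 56.8])
z_cold, T_cold = np.array([0.005, 0.010, 0.015]), np.array([49.7, 47.6, 45.5])

fit_hot  = np.polyfit(z_hot,  T_hot,  1)     # linear profile in the hot bar
fit_cold = np.polyfit(z_cold, T_cold, 1)     # linear profile in the cold bar

q_flux = k_bar * 0.5 * (abs(fit_hot[0]) + abs(fit_cold[0]))   # average heat flux, W/m^2
T_joint_hot  = np.polyval(fit_hot,  0.0)     # bar temperatures extrapolated to the joint
T_joint_cold = np.polyval(fit_cold, 0.0)

R_interface = (T_joint_hot - T_joint_cold) / q_flux           # m^2*K/W
print(f"R'' = {R_interface * 1e6:.1f} mm^2-K/W")
```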

2011;():241-249. doi:10.1115/IPACK2011-52161.

In current and next-generation semiconductor electronic devices, sub-continuum heat transfer effects and non-uniform power distribution across the die surface lead to large temperature gradients and localized hot spots on the die. These hot spots can adversely affect device performance and reliability. In this work, we propose an enhanced method for thermal map prediction that considers sub-continuum thermal transport effects and show their impact in floor plan optimization. Sub-continuum effects are expressed in terms of an effective thermal conductivity. We introduce and calibrate a 2D thermal model of the die for fast simulation of thermal effects under non-uniform power generation scenarios. The calibrated 2D model is then used to study the impact of the effective thermal conductivity on the thermal map prediction and floor plan optimization. Results show that sub-continuum effects radically change both the predicted thermal performance and the optimal floor plan configurations.
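
As a rough illustration of the kind of 2D thermal model of the die mentioned above, the sketch below relaxes a coarse finite-difference grid with an effective (sub-continuum-corrected) conductivity, a non-uniform power map, and a lumped resistance from each cell to ambient. The grid size, material properties, power map, and boundary treatment are all assumptions and are not the calibrated model from the paper.

```python
# Rough sketch of a coarse 2-D die thermal model: effective conductivity,
# non-uniform power map, and a lumped resistance to ambient at every cell.
# Grid, properties, power map, and boundaries are assumptions, not the
# calibrated model from the paper.
import numpy as np

N, L, t = 32, 0.01, 5.0e-4        # cells per side, die edge length (m), die thickness (m)
dx      = L / N
k_eff   = 80.0                    # effective (sub-continuum-reduced) conductivity, W/(m*K)
R_amb   = 2.0e-4                  # cell-to-ambient area resistance, m^2*K/W (assumed)
T_amb   = 45.0                    # ambient/reference temperature, degC

power = np.zeros((N, N))          # per-cell power map, W (assumed floor plan)
power[4:8, 4:8]    = 0.05         # a small high-power block
power[20:28, 8:24] = 0.01         # a larger moderate-power block

g_lat = k_eff * t                 # lateral conductance to each neighbor, W/K
g_amb = dx * dx / R_amb           # conductance from a cell to ambient, W/K

T = np.full((N, N), T_amb)
for _ in range(5000):             # Jacobi relaxation of the nodal energy balance
    Tn = np.pad(T, 1, mode="edge")        # adiabatic lateral edges
    neigh = Tn[:-2, 1:-1] + Tn[2:, 1:-1] + Tn[1:-1, :-2] + Tn[1:-1, 2:]
    T = (g_lat * neigh + g_amb * T_amb + power) / (4.0 * g_lat + g_amb)

print(f"predicted peak temperature ~ {T.max():.1f} degC")
```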

2011;():251-258. doi:10.1115/IPACK2011-52168.

Thermal vias are widely used to reduce the thermal resistance of a printed circuit board (PCB). However, the fine via structure becomes an obstacle to computational fluid dynamics (CFD) simulation because it requires a huge number of mesh cells; therefore, an efficient method for modeling thermal via structures is needed to reduce computational time. In this paper, the effect of thermal vias on the reduction of thermal resistance was experimentally and numerically investigated to gather fundamental data for the thermal management of electronics. We used printed circuit board models with several arrangements of thermal vias. Board materials and copper heat-dissipating pad patterns were explored as experimental parameters. Copper pipes (unfilled vias) or rods (filled vias) with diameters of 1.5, 3.0 and 5.0 mm were used as thermal vias. Three board materials with different thermal conductivities (glass epoxy, stainless steel, and polycarbonate) were used. The experimental results showed that the area of the heat-dissipating copper pad patterns and the board material have a strong effect on the temperature rise of the heat source. On the other hand, the number of thermal vias and the via shapes had no effect on the heat source temperature. We then performed a thermal network analysis to evaluate the experimental results. From the results of the thermal network analysis, it was confirmed that the effect of thermal vias saturates at a certain ratio of via area.
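
A minimal sketch of the kind of thermal network referred to above: the via field and the surrounding laminate are treated as parallel conduction paths through the board. The dimensions and properties are assumptions, not the tested boards; once the via resistance is much smaller than the other resistances in series (spreading, convection), adding further vias yields little benefit, which is consistent with the saturation noted in the abstract.

```python
# Minimal parallel-path thermal network for a via field under a heat source.
# Dimensions and properties are assumptions, not the tested boards.
import math

t_board = 1.6e-3              # board thickness, m (assumed)
k_cu, k_board = 390.0, 0.3    # W/(m*K): copper and glass-epoxy laminate
d_via, n_via  = 1.5e-3, 9     # via diameter and count (assumed, filled vias)
A_pad  = 0.02 * 0.02          # copper pad area under the heat source, m^2 (assumed)

A_via   = n_via * math.pi * (d_via / 2.0) ** 2
R_vias  = t_board / (k_cu * A_via)                 # all vias in parallel, K/W
R_board = t_board / (k_board * (A_pad - A_via))    # remaining laminate, K/W
R_total = 1.0 / (1.0 / R_vias + 1.0 / R_board)     # parallel combination

print(f"R_vias = {R_vias:.2f} K/W, R_board = {R_board:.1f} K/W, "
      f"combined = {R_total:.2f} K/W")
```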

2011;():259-267. doi:10.1115/IPACK2011-52172.

Forced convection air-cooled electronic systems utilize fans to sustain air flow through the enclosure. These fans are typically axial flow fans, radial impellers, or centrifugal blowers. When computing flow fields in electronic enclosures, axial fans have traditionally been abstracted as lumped fan models, which may or may not capture the necessary details. Under certain conditions, such lumped models may also capture some flow characteristics of impellers and centrifugal blowers. These lumped models comprise a significantly simplified fan geometry, usually a planar (2-D) rectangular or circular surface with or without an inner (hub) concentric no-flow region for an axial fan, or a rectangular prism/cylinder with a planar inlet for blowers/impellers, together with a "pressure head-flow rate" (P-Q) curve, which may be supplied by the fan vendor or experimentally derived by the thermal designer. Irrespective of the source, the P-Q curve is obtained from laboratory experiments that conform to the test codes published by societies such as ASME and AMCA. The convenience and accuracy of lumped fan models depend on the specific application, the cooling method, and the acceptable error margin. The acceptable error margin of thermal designs has shrunk significantly in the last decade. This has created interest in more accurate and robust fan modeling techniques such as the Multiple Reference Frame (MRF) model, which has already been commonly and successfully used in many other industries for some time. In this paper, an attempt is made to validate MRF fan modeling applied to different types of fans. A computational fluid dynamics (CFD) model of an AMCA standard wind tunnel was used for each of the fans investigated. The P-Q curve obtained from the MRF model is benchmarked against the corresponding experimentally derived P-Q curve. Benefits and limitations of the MRF model are also discussed.
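
For readers unfamiliar with lumped fan models, the sketch below shows what the P-Q abstraction amounts to: the fan is reduced to an interpolated pressure-rise-versus-flow curve, and the operating point is where that curve intersects the system impedance curve. The sample fan curve and system coefficient are invented for illustration; this is not data from the paper.

```python
# Illustration of the lumped (P-Q curve) fan abstraction: the fan is reduced to
# an interpolated pressure-rise-versus-flow curve, and the operating point is
# the intersection with the system impedance curve. The fan data and system
# coefficient below are invented, not taken from the paper.
import numpy as np

q_pts = np.array([0.000, 0.005, 0.010, 0.015, 0.020, 0.025])   # flow, m^3/s (assumed)
p_pts = np.array([60.0,  52.0,  40.0,  25.0,  10.0,   0.0])    # pressure rise, Pa (assumed)

def fan_dp(q):
    """Pressure rise delivered by the lumped fan model at flow rate q."""
    return np.interp(q, q_pts, p_pts)

def system_dp(q, k_sys=8.0e4):
    """Quadratic system impedance curve, dp = k_sys * q^2 (assumed)."""
    return k_sys * q * q

# Bisection for the operating point, where the two curves cross
lo, hi = 0.0, q_pts[-1]
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fan_dp(mid) > system_dp(mid) else (lo, mid)

print(f"operating point: q = {lo * 1000:.1f} L/s, dp = {fan_dp(lo):.1f} Pa")
```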

2011;():269-276. doi:10.1115/IPACK2011-52174.

Passive heat transfer from enclosures with rectangular fins is studied both experimentally and theoretically. Several sample enclosures with various lengths are prepared and tested. To calibrate the thermal measurements and the analyses, enclosures without fins (“bare” enclosures) are also prepared and tested. Surface temperature distribution is determined for various enclosure lengths and heat generation rates. Existing relationships for natural convection and radiation heat transfer are used to calculate the heat transfer rate of the tested samples. The theoretical results successfully predict the trends observed in the experimental data. It is observed that the contribution of the radiation heat transfer is on the order of 50% of the total heat transfer for the tested enclosures. As such, a new correlation is reported for calculating optimum fin spacing in uniformly finned surfaces, with rectangular straight fins, that takes into account both natural convection and radiation.
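
The roughly 50% radiative share reported above can be made plausible with a simple estimate comparing the radiative and natural-convection heat flows from the same surface. The emissivity, area, temperatures, and convective coefficient below are assumptions chosen only to illustrate the comparison.

```python
# Simple comparison of radiative and natural-convection heat flows from the
# same surface; emissivity, area, temperatures, and the convective coefficient
# are assumed values chosen only to illustrate the ~50% radiative share.
sigma = 5.67e-8        # Stefan-Boltzmann constant, W/(m^2*K^4)
eps   = 0.85           # surface emissivity (assumed, e.g. anodized/painted)
A     = 0.05           # effective surface area, m^2 (assumed)
T_s, T_inf = 330.0, 300.0   # surface and ambient temperatures, K (assumed)
h_nc  = 5.0            # natural-convection coefficient, W/(m^2*K) (assumed)

q_rad  = eps * sigma * A * (T_s**4 - T_inf**4)
q_conv = h_nc * A * (T_s - T_inf)
share  = 100.0 * q_rad / (q_rad + q_conv)
print(f"radiation {q_rad:.1f} W vs convection {q_conv:.1f} W ({share:.0f}% radiative)")
```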

Topics: Fins
2011;():277-283. doi:10.1115/IPACK2011-52179.

The continued demand for high-performance electronic products and the simultaneous trend of miniaturization have raised dissipated power and power densities to unprecedented levels in electronic systems. Thermal management is becoming increasingly critical to the electronics industry in order to satisfy the increasing market demand for faster, smaller, lighter and more cost-effective products. Utilization of waste heat for the purpose of chip cooling is a promising approach for enhancing the thermal management and net energy efficiency of the system. This paper focuses on the development of a tubular microgrooved evaporator and its performance characterization based on heat transfer coefficient and pressure drop measurements. A microgrooved structure with channels of 3:1 aspect ratio (channel width 100 μm, channel height 300 μm) was used in the evaporator. The system has been tested with R134a as the refrigerant over a refrigerant flow rate range of 0.005–0.02 kg/s and a water flow rate range of 0.25–0.65 kg/s. Very promising results have been obtained in this preliminary investigation. A heat transfer coefficient as high as 13,500 W/m²·K has been obtained, which is almost five times higher than comparable state-of-the-art technologies. The associated pressure drop is quite modest and much lower than that of state-of-the-art conventional evaporators.

2011;():285-292. doi:10.1115/IPACK2011-52188.

RthJA (junction-to-ambient thermal resistance) for power device packages was measured and modeled in order to correlate the results from our experiments and simulations. The packages studied, including TO (Transistor Outline), DFN (Dual Flat Non-Leaded), SOP (Small Outline Package) and DPAK with sizes from 3×3 mm to 15×10 mm, were tested in a natural convection environment. An important observation from our testing is the significant influence of the external wires connecting the test coupon to the power supply on the thermal resistance value derived from the test data. The increase in RthJA based on the test is more than 50% for TO packages and 19% for smaller packages once the external wires are changed from gauge 18 to gauge 30. A simple yet effective simulation approach was then introduced to predict RthJA, incorporating the critical influence of the external wire size variation. With the validated finite element model, the effects of package factors such as package outline, die size, die attach, encapsulation and interconnection on the thermal resistances of power semiconductor packages were studied by simulation to provide further insights and guidance for new package developments.
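
As a reminder of the underlying definition, RthJA is simply the junction-to-ambient temperature rise divided by the dissipated power. The minimal sketch below uses invented numbers; the closing comment notes why heavier-gauge test wires lower the measured value, consistent with the observation above.

```python
# Definition behind the quantity discussed above: junction-to-ambient
# temperature rise divided by dissipated power. The numbers are illustrative.
T_junction = 92.0    # junction temperature from a temperature-sensitive parameter, degC (assumed)
T_ambient  = 25.0    # still-air ambient temperature, degC (assumed)
P_diss     = 1.2     # power dissipated in the device under test, W (assumed)

Rth_JA = (T_junction - T_ambient) / P_diss   # K/W
print(f"RthJA = {Rth_JA:.1f} K/W")
# Heavier-gauge external test wires add a parallel conduction path from the
# leads, which is why the measured RthJA drops as the wire gauge decreases.
```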

2011;():293-296. doi:10.1115/IPACK2011-52202.

An innovative temperature sensor based on the Faraday effect is presented. The Faraday effect, or Faraday rotation, is a magneto-optical phenomenon: an interaction between an electromagnetic wave and a magnetic field in a medium. Optical sensors based on the Faraday effect have the advantages of simplicity, high electrical insulation and immunity to electromagnetic interference. We make use of an optical fiber and a permanent magnet as the sensing elements. The magnet is the element that responds to the change in temperature, and the fiber optic cable senses the change in magnetic field intensity corresponding to that change in temperature.
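
A minimal sketch of the Faraday-rotation relation behind this sensing principle: the polarization rotation scales with the Verdet constant, the field along the fiber, and the interaction length, and the magnet's field falls with temperature, turning the rotation angle into a temperature signal. The Verdet constant, field, length, and temperature coefficient below are all assumed values, not parameters from the paper.

```python
# Sketch of the Faraday-rotation relation underlying the sensor concept: the
# polarization rotation scales with the Verdet constant, the field along the
# fiber, and the interaction length; the magnet's field falls with temperature,
# turning the rotation angle into a temperature signal. All values are assumed.
import math

V_verdet = 4.0        # Verdet constant of the fiber, rad/(T*m) (assumed, silica-like)
L_int    = 0.05       # interaction length along the field, m (assumed)
B_20     = 0.40       # magnet flux density at 20 degC, T (assumed)
alpha_B  = -0.0012    # reversible temperature coefficient of B, 1/K (assumed, NdFeB-like)

def rotation_deg(T_celsius):
    B = B_20 * (1.0 + alpha_B * (T_celsius - 20.0))   # field seen by the fiber
    return math.degrees(V_verdet * B * L_int)         # Faraday rotation angle

print(f"{rotation_deg(20.0):.2f} deg at 20 C, {rotation_deg(80.0):.2f} deg at 80 C")
```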

2011;():297-304. doi:10.1115/IPACK2011-52204.

A 3D IC integration system-in-package (SiP) with low cost (using bare chips), high cooling capability, and very low pressure drop is designed and described. The system consists of a silicon interposer with through-silicon vias (TSVs) and embedded fluidic microchannels, which carries all the Moore's law chips and optical devices on its top and bottom surfaces. TSVs in the Moore's law chips are optional but should be avoided. This novel structural design offers potential solutions for high-power, high-performance, high pin-count, ultra-fine-pitch, small real-estate, and low-cost 3D IC SiP applications.

Topics: Microchannels
2011;():305-309. doi:10.1115/IPACK2011-52218.

Today, the quality of LCD (liquid crystal display) TVs has improved along with the quality of the installed LSI (large-scale integration) devices; thus, the cooling system needs to have high performance. However, an LCD TV requires a large-area but thin cooling system, so the thermal interface material (TIM) used in an LCD TV must be very soft. We have therefore developed high-performance yet soft thermally conductive sheets in which the carbon fibers are oriented in the thickness direction. The thermal conductivity of the 2-mm-thick sheets is more than 23 W/m·K, and the compressibility is more than 10%. The thermal conductivity was measured in accordance with ASTM D5470. The compressibility is the ratio of the difference between the initial thickness and the thickness under load to the initial thickness. The carbon fibers are more than 100 μm long and about 10 μm in diameter, and the sheet also contains alumina and aluminum nitride particles. The manufacturing process for the sheet is as follows. Step 1: the mixing process. Step 2: the resin containing the carbon fibers and the particles is pressed into a long rectangular mold. Step 3: the resin is heated to harden it. Step 4: the resin is sliced into sheets. In Step 1, because the carbon fibers are long, they are readily affected by shear stress and therefore become aligned in the lengthwise direction. In Step 4, we used an ultrasonic cutter to achieve ideal slicing, thereby reducing the thermal contact resistance. These processes produced high-performance yet soft thermally conductive sheets in which the carbon fibers are aligned in the thickness direction, as confirmed by SEM observation. Moreover, we found that, by slicing at angles inclined to the fiber orientation, only the softness improved, without any deterioration in thermal conductivity.

Topics: Carbon fibers
2011;():311-318. doi:10.1115/IPACK2011-52225.

Increasing thermal design power (TDP) trends combined with shrinking form factor requirements create the need for advanced cooling technology development. This investigation proposes multiple innovative water cooler technologies to achieve higher-thermal-performance liquid-cooling (LC) solutions that address the limitations of air cooling (AC). The high-performance water cooler design options also meet the miniaturization trends of the computing market by providing scalable solutions for smaller board real estate. This investigation offers several advantages: 1) it introduces four water cooler technologies employing different fin base plate designs (diamond fins, micro-fins, skived micro-fins, and twisted diamond fins), along with an optimized flow distribution path design accompanying each cooler; 2) it provides scalable thermal solutions; 3) it addresses particle clogging via fin base plate and flow distribution path optimization; 4) it addresses galvanic corrosion by eliminating the use of two dissimilar metals and introducing an acrylic housing; and 5) it uses the acrylic housing for weight management. Results show that the twisted diamond fin, micro-fin and skived micro-fin coolers provide up to a 5 °C performance improvement with a lower pressure drop across the water cooler compared to the diamond fin cooler, and about a 37 °C improvement compared to an air-cooled active heat sink solution.

Topics: Cooling
2011;():319-326. doi:10.1115/IPACK2011-52229.

Driven by shrinking feature sizes, microprocessor hot spots have emerged as the primary driver for on-chip cooling of today’s IC technologies. Current thermal management technologies offer few choices for such on-chip hot spot remediation. A solid state germanium self-cooling layer, fabricated on top of the silicon chip, is proposed and demonstrated to have great promise for reducing the severity of on-chip hot spots. 3D thermo-electrical coupled simulations are used to investigate the effectiveness of a bi-layer device containing a germanium self-cooling layer above an electrically insulated silicon layer. The parametric variables of applied current, cooler size, silicon percentage, and total die thickness are sequentially optimized for the lowest hot spot temperature compared to a non-self-cooled silicon chip. Results suggest that the localized self-cooling of the germanium layer coupled with the higher thermal conductivity of the silicon chip can significantly reduce the temperature rise resulting from a micro-scaled hot spot.

Topics: Cooling, Germanium, Silicon
2011;():327-334. doi:10.1115/IPACK2011-52238.

IR thermography of the heated wall for the two-phase flow of FC-72 in microgap channels provides explicit evidence of the quality-driven M-shaped variations in the two-phase microgap heat transfer coefficients. Data obtained from a 210 μm microgap channel, operated with FC-72 mass fluxes of 195 and 780 kg/m²·s and asymmetric heat fluxes of 28 W/cm² to 35 W/cm², are presented and discussed.

2011;():335-341. doi:10.1115/IPACK2011-52240.

On-chip power is highly temperature dependent in deep sub-micron VLSI. With increasing power density in modern 3D ICs and SiPs, thermally induced reliability and performance issues such as leakage power and electromigration must be taken into consideration in system-level design. This paper presents a new methodology, and its applications, to accurately and efficiently predict the power and temperature distribution of 3D ICs.

2011;():343-355. doi:10.1115/IPACK2011-52256.

As a promising candidate for advanced heat transfer fluids, nanofluids have been studied extensively during the past decade. In contrast to the early reports of dramatic heat transfer enhancement even at extremely low particle concentrations, the most recent studies suggest the laminar convective heat transfer of nanofluids is only mildly augmented and can be predicted by the conventional Navier-Stokes equations. The majority of the past studies were limited to water-based nanofluids synthesized from spherical nanoparticles. No systematic information is yet available for the convective heat transfer of nanofluids containing non-spherical particles, especially those formulated with a base fluid other than water. An experimental study was conducted in this work to investigate the thermophysical properties and convective heat transfer characteristics of Al2O3-polyalphaolefin (PAO) nanofluids containing both spherical and rod-like nanoparticles. The effective viscosity and thermal conductivity were measured and compared to predictions from the effective medium theory. The friction factor and local Nusselt number were also measured for the laminar flow regime. It was found that established theoretical correlations can satisfactorily predict the experimental data for nanofluids containing spherical nanoparticles; however, they are less successful for nanofluids with nanorods. The possible reasons may be attributed to the shear-induced alignment of non-spherical nanoparticles and its subsequent influence on the development of the thermal boundary layer. The results suggest that the hydrodynamic interactions between the non-spherical nanoparticles and the surrounding fluid medium have a significant impact on the thermophysical properties as well as on the thermal transport characteristics of nanofluids.
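
The effective medium theory mentioned above, for spherical particles, is most commonly the classical Maxwell model. A minimal sketch of that prediction, with typical assumed property values for PAO and Al2O3 used only for illustration:

```python
# Classical Maxwell (effective-medium) prediction for spherical particles,
# with typical assumed property values for PAO and Al2O3 used only to
# illustrate the comparison mentioned above.
def k_maxwell(k_f, k_p, phi):
    """Effective conductivity of a dilute suspension of spheres (Maxwell model)."""
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_pao   = 0.14    # base-fluid (PAO) thermal conductivity, W/(m*K) (assumed)
k_al2o3 = 36.0    # Al2O3 particle thermal conductivity, W/(m*K) (assumed)
for phi in (0.01, 0.03, 0.05):
    print(f"phi = {phi:.2f}: k_eff/k_f = {k_maxwell(k_pao, k_al2o3, phi) / k_pao:.3f}")
```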

Commentary by Dr. Valentin Fuster
2011;():357-365. doi:10.1115/IPACK2011-52262.

The unsteady laminar flow and heat transfer characteristics for a pair of angled confined impinging air jets centered in a channel were studied numerically. The time-averaged heat transfer coefficient for a pair of heat sources centered in the channel was determined, as well as the oscillating jet frequency for the unsteady cases. The present study is a continuation of the authors’ previous investigations, identifying the similarities and differences arising from the expansion to the third dimension. It examines the interaction between the angled jets and the associated impact on the cooling of the heat sources placed on the board at jet Reynolds numbers of 100 and 600. Maintaining the inlet jet width, W, at 1 cm, as in the previously studied cases, the interaction between the 45° angled jets leads to the formation of unsteady symmetrical jets that impinge on the two heat sources placed on the board at a Reynolds number of 100. A second case investigates the hydrodynamic interaction between the 45° angled jets at a Reynolds number of 600. In this case the jets interact and form a region of unsteady shear, causing the jets to sweep the target board and the heated components placed on it. The nature of this unsteadiness depends on the proximity of the jet inlets, the channel dimensions, and the jet Reynolds number. The jet unsteadiness causes the stagnation point locations to sweep back and forth over the impingement region, causing the jets to “wash” a larger surface area on the target wall. The relevant trends for the 2D and 3D jet hydrodynamic and thermal fields are further documented by comparing the field plots and the Nusselt numbers on the target walls for the cases under evaluation. Although similar in nature, the unsteady 3D opposite jets produce results that deviate from the 2D unsteady opposite jets. The complex vortex patterns resulting from the jet interaction at various jet inlet locations, as well as the velocity, vorticity and temperature fields for both 2D and 3D cases, are thoroughly evaluated.

Commentary by Dr. Valentin Fuster
2011;():367-375. doi:10.1115/IPACK2011-52267.

High-density, slim packaging technology for notebook and handheld PCs has been developed as the importance of portability has grown in recent years. The heat generated in small-sized electronic units must be dissipated effectively to ensure operational stability over the system lifetime. Considering the technical trend toward miniaturized packaging of components, which leaves very limited space inside the system, the development and application of micro-cooling devices is inevitable. In the present study, a very thin cooling device that operates on the basis of piezoelectricity is introduced. First, the operating principle of the piezo fan is explained and the new type of piezo fan is introduced. Performance test results for a thin laptop thermal solution module combined with this new piezo fan are then investigated. The original thermal solution module was composed of a thin heat spreader (1 mm thick) and a thin heat pipe (less than 2 mm) with a finned heat sink at the condenser, cooled by a mini cooling fan (brushless mini fan). The mini fan is replaced by the much thinner piezo fan, its performance is studied, and the results are compared with those of the thin cooling module cooled by the mini cooling fan. In addition, this work covers various developments conducted to improve the performance of the piezo fan, including enhancement of cooling performance and reduction of acoustic noise.

Topics: Heat pipes
Commentary by Dr. Valentin Fuster
2011;():377-384. doi:10.1115/IPACK2011-52288.

It is a common practice in electronic packaging to deploy onboard temperature sensing ICs for thermal health monitoring and control. The IT equipment industry has seen exponential growth in power and power density on devices and PCBs. In turn, more and more IC temperature sensors are used in highly complex algorithms and are expected to be highly accurate in predicting the local thermal conditions. In many cases they are even used to correlate to air temperature. However, care must be taken in understanding the different factors that influence the temperature readings of these devices. Factors that have a direct impact on the quality of the temperature reading include parasitic heating due to adjacent components and placement location, airflow conditions, the circuit design connecting these devices to the board, and the accuracy and tolerance of the devices themselves. In addition, because of the increase in component power density, the difference between the device temperature (for example, the junction temperature) and the board sensor temperature can be large, and its range can vary widely as well. In this paper, thermal numerical modeling, as well as empirical work at the system and board levels, was performed to understand the implications of the temperature readings from these devices. Several commercially available onboard temperature sensing ICs are compared as well. It is the intention of this work to point out these areas so that thermal and system design practitioners can use these devices appropriately. Also, a high-level environmental monitoring and control system (EMCS) policy is illustrated for highly configurable multi-board equipment.

Topics: Temperature
Commentary by Dr. Valentin Fuster

Data Centers and Energy Efficient Electronic Systems

2011;():385-394. doi:10.1115/IPACK2011-52003.

The work presented in this paper describes a simplified thermodynamic model that can be used for exploring optimization possibilities in air-cooled data centers. The model is used to parametrically evaluate the total energy consumption of the data center cooling infrastructure for data centers that utilize aisle containment. The analysis highlights the importance of reducing the total power required for moving the air within the CRACs, the plenum, and the servers, rather than focusing primarily or exclusively on reducing the refrigeration system’s power consumption, and shows the benefits of bypass recirculation in enclosed aisle configurations. The analysis shows a potential for as much as a 57% savings in cooling infrastructure energy consumption by utilizing an optimized enclosed aisle configuration with bypass recirculation, instead of a traditional enclosed aisle, where all the data center exhaust is forced to flow through the computer room air conditioners (CRACs), for racks with a modest temperature rise (∼10°C). However, for racks with a larger temperature rise (> ∼20°C), the savings are less than 5%. Furthermore, for servers whose fan speed (flow rate) varies as a function of inlet temperature, the analysis shows that the optimum operating regime for enclosed aisle data centers falls within a very narrow band and that power reductions are possible by lowering the uniform server inlet temperature in the enclosed aisle from 27°C to 22°C. However, the optimum CRAC exit temperature over the 22-to-27°C range of enclosed cold aisle temperature falls between ∼16 and 20°C because a significant reduction in power consumption is possible through the use of bypass recirculation. Without bypass recirculation, the power consumption for the 22°C server inlet temperature enclosed aisle case with a server temperature rise of 10°C would be 43% higher than with bypass recirculation. It is worth noting that, without bypass recirculation, maintaining the enclosed cold aisle at 22°C instead of 27°C would reduce power consumption by 48%. It is also shown that enclosing the aisles together with bypass recirculation (when beneficial) reduces the dependence of the optimum cooling power on server temperature rise.

Commentary by Dr. Valentin Fuster
2011;():395-404. doi:10.1115/IPACK2011-52004.

The work presented in this paper describes a simplified thermodynamic model that can be used for exploring optimization possibilities in air-cooled data centers. The model has been used to identify optimal, energy-efficient designs, operating scenarios, and operating parameters such as flow rates and air supply temperature. The model is used to parametrically evaluate the total energy consumption of the data center cooling infrastructure, by considering changes in the server temperature rise. The results of this parametric analysis highlight the important features that need to be considered when optimizing the operation of air-cooled data centers, especially the trade-off between low air supply temperature and increased air flow rate. The analysis is used to elucidate the deleterious effect of temperature non-uniformity at the inlet of the racks on the data center cooling infrastructure power consumption. A recirculation non-uniformity metric, θ, is introduced, which is the ratio of the maximum recirculation of any server to the average recirculation of all servers. The analysis of open-aisle data centers shows that as the recirculation non-uniformity at the inlet of the racks increases, optimal operation tends toward lower recirculation and higher power consumption, stressing the importance of providing conditions to the racks that are as uniform as possible. Cooling infrastructure energy savings greater than 40% are possible for a data center with uniform recirculation (θ = 0) compared to a data center with a typical recirculation non-uniformity (θ = 4). It is also revealed that servers with a modest temperature rise (∼10°C) have a wider latitude for cooling optimization than those with a high temperature rise (≥20°C).
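
For readers who want to evaluate the recirculation non-uniformity metric from their own measured or simulated data, a minimal Python sketch is given below. It follows the verbal definition above (maximum recirculation over average recirculation); the variable names and the sample recirculation fractions are illustrative placeholders rather than values from the paper, and the handling of the zero-recirculation limit is an assumption.

    # Minimal sketch of the recirculation non-uniformity metric (theta), assuming
    # theta = (maximum recirculation of any server) / (average recirculation of all servers).
    # The per-server recirculation fractions below are hypothetical placeholders.
    recirculation = [0.05, 0.10, 0.02, 0.20, 0.08]

    avg = sum(recirculation) / len(recirculation)
    theta = max(recirculation) / avg if avg > 0 else 0.0  # assumption: theta = 0 with no recirculation
    print(f"theta = {theta:.2f}")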

Commentary by Dr. Valentin Fuster
2011;():405-414. doi:10.1115/IPACK2011-52005.

The work presented in this paper is an extension of the companion work by the authors on a simplified thermodynamic model for data center optimization, in which a recirculation non-uniformity metric, θ, was introduced and used in a parametric analysis to highlight the deleterious effect of recirculation non-uniformity at the inlet of racks on the data center cooling infrastructure power consumption. In this work, several studies are done using a commercial computational fluid dynamics (CFD) package to verify many of the assumptions necessary in the development of the simplified model and to understand the degree of recirculation non-uniformity present in typical data center configurations. A number of CFD simulations are used to quantify the ability of the simple model to predict θ. The results show that the simple model provides a fairly accurate estimate of θ, with a standard deviation in the prediction error of ∼10–15%. The CFD analyses are also used to understand the effect of row length and server temperature rise (ΔTs) on temperature non-uniformity. The simulations show that reasonable values of θ range from 2–6 for open aisle data centers, depending on operating strategy and data center layout. As a means of understanding the effect of buoyancy, a data center Archimedes number (Ar), the ratio of buoyancy to inertia forces, is introduced as a function of tile flow rate and server temperature rise. For servers with a modest temperature rise (∼10.0°C), Ar is ∼0.1; however, for racks with a large temperature rise (∼20°C), Ar > 1.0, meaning buoyancy must be considered. Through CFD analysis, the significant effect buoyancy has on the inlet rack temperature patterns is highlighted. The Capture Index (ψ), the ratio of cold air ingested by the racks to the required rack flow, is used to investigate its relationship to the ratio of server flow to tile flow (Y), as the inlet rack temperature patterns are changed by increasing Ar. The results show that although the rack inlet temperature patterns are extremely different, ψ does not change significantly as a function of Ar. Lastly, the effect of buoyancy on the assumption of linearity of the temperature field is considered for a range of Ar. The results show the emergence of a stratified temperature pattern at the inlet of the racks as Ar increases and buoyancy becomes more important. It is concluded that under these conditions, a δT change in tile temperature does not produce a δT change in temperature everywhere in the field.
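
As a rough illustration of how a data center Archimedes number can be evaluated, the short Python sketch below uses the generic form Ar = g·β·ΔT·L / V², with the characteristic velocity taken from a perforated tile flow rate. The exact definition and the numerical values used by the authors are not reproduced here, so the constants below are assumptions chosen only for illustration.

    # Generic Archimedes number sketch: ratio of buoyancy to inertia forces.
    g = 9.81               # m/s^2, gravitational acceleration
    beta = 1.0 / 300.0     # 1/K, thermal expansion coefficient of air near room temperature
    delta_t = 10.0         # K, server temperature rise (illustrative)
    height = 2.0           # m, characteristic rack height (illustrative)
    tile_flow = 0.4        # m^3/s, perforated tile flow rate (illustrative)
    tile_open_area = 0.36  # m^2, effective tile open area (illustrative)

    velocity = tile_flow / tile_open_area           # m/s, characteristic tile exit velocity
    ar = g * beta * delta_t * height / velocity**2
    print(f"Ar = {ar:.2f}")  # Ar of order 1 or larger suggests buoyancy matters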

Commentary by Dr. Valentin Fuster
2011;():415-422. doi:10.1115/IPACK2011-52016.

Conducting experiments on real high-density computer servers can be expensive and risky because unintended inlet temperatures may exceed a server’s red-line temperature limit. Presented herein is the development of simulated chassis that mimic real computer servers. Briefly, twelve high-power simulated chassis were designed and built to accurately reproduce the operating conditions of a real computer chassis in a data center. Each simulated chassis is designed to have approximately 300 Pa of pressure drop at a flow rate of 600 cfm to represent a real IBM server chassis. Additionally, the simulated chassis are designed to match the thermal mass of a real server. Eight of the simulated chassis were designed with constant-speed fans and variable heating power, while the remaining four were designed with variable-speed fans and variable heating power. A substantial part of this paper is devoted to the design phase of the simulated chassis. Highlighting the challenges and safety issues associated with high-power chassis, guidelines for designing and constructing a chassis that simulates the real environment of a typical data center are presented.

Commentary by Dr. Valentin Fuster
2011;():423-432. doi:10.1115/IPACK2011-52029.

We developed a Proper Orthogonal Decomposition (POD) based dynamic reduced order model that can predict the transient temperature field in an air-cooled data center. A typical data center is modeled as a turbulent convective thermal system with multiple length scales. A representative case study is presented to validate the developed methodology. The model is observed to predict the transient air temperature field accurately and rapidly. Compared with the computational fluid dynamics/heat transfer (CFD/HT) based model, our model is about 100x faster without compromising solution accuracy. The developed modeling framework is potentially useful for designing a control system that can regulate flow parameters in a transient data center.
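
For background, the essence of a POD-based reduced order model is the extraction of a small set of dominant spatial modes from field snapshots, typically via a singular value decomposition. The Python sketch below illustrates that step on a synthetic snapshot matrix; it is not the authors’ implementation, and the data and mode count are placeholders.

    import numpy as np

    # POD sketch: columns of T are temperature-field snapshots in time (synthetic data here).
    rng = np.random.default_rng(0)
    n_points, n_snapshots = 500, 40
    T = rng.normal(25.0, 1.0, (n_points, n_snapshots))

    T_mean = T.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(T - T_mean, full_matrices=False)

    k = 5                                  # number of retained POD modes
    Phi = U[:, :k]                         # spatial modes
    coeffs = Phi.T @ (T - T_mean)          # temporal coefficients
    T_reduced = T_mean + Phi @ coeffs      # rank-k reconstruction of the field

    captured = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"variance captured by {k} modes: {captured:.3f}")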

Commentary by Dr. Valentin Fuster
2011;():433-442. doi:10.1115/IPACK2011-52030.

The power consumption of the chip package is known to vary with operating temperature, independently of the workload processing power. This variation is commonly known as chip leakage power, typically accounting for ∼10% of total chip power consumption. The influence of operating temperature on leakage power consumption is a major design-optimization concern for the IT industry: IT system power densities are steadily increasing, and leakage power is expected to account for up to ∼50% of chip power in the near future as package sizes shrink. Much attention has been placed on developing models of chip leakage power as a function of package temperature, ranging from simple linear models to complex super-linear models. This knowledge is crucial for IT system designers to improve chip-level energy efficiency and minimize heat dissipation. However, this work has been focused on the component level, with little thought given to the impact of chip leakage power on entire data center efficiency. Studies on data center power consumption quote IT system heat dissipation as a constant value without accounting for the variance of chip power with operating temperature due to leakage power. Previous modeling techniques have also omitted this temperature-dependent relationship. In this paper we discuss the need for chip leakage power to be included in the analysis of holistic data center performance. A chip leakage power model is defined and its implementation into an existing multi-scale data center energy model is discussed. Parametric studies are conducted over a range of system and environment operating conditions to evaluate the impact of varying degrees of chip leakage power. Possible strategies for mitigating the impact of leakage power are also illustrated in this study. This work illustrates that when chip leakage power is included in the data center model, a compromise exists between increasing operating temperatures to improve cooling infrastructure efficiency and the increase in heat load at higher operating temperatures due to leakage power.
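
For context, leakage power is often approximated as a super-linear (roughly exponential) function of junction temperature. The sketch below shows one such generic model; the functional form and coefficients are assumptions chosen for illustration and are not the model defined in this paper.

    import math

    # Generic temperature-dependent leakage sketch: total chip power = dynamic + leakage(T).
    def chip_power_w(dynamic_w, t_junction_c, leak_ref_w=10.0, t_ref_c=50.0, k=0.02):
        leakage_w = leak_ref_w * math.exp(k * (t_junction_c - t_ref_c))  # grows with temperature
        return dynamic_w + leakage_w

    for t in (50.0, 70.0, 90.0):
        print(f"{t:.0f} C -> {chip_power_w(100.0, t):.1f} W")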

Commentary by Dr. Valentin Fuster
2011;():443-449. doi:10.1115/IPACK2011-52066.

Energy consumption in data centers has been increasing drastically in recent years. In a typical data center, server racks are cooled indirectly by air conditioning the whole room. This air cooling method is inefficient and causes hotspot problems in which the IT equipment is not cooled sufficiently while the room as a whole is overcooled. At the same time, dealing with the heat generated by the IT equipment is itself a major issue. We therefore propose new liquid cooling systems in which the IT equipment itself is cooled and the exhaust heat is not released into the server room. For our liquid cooling systems, three kinds of cooling methods have been developed simultaneously. Two of them are direct cooling methods in which a cooling jacket is attached directly to the heat source, in this case the CPU; either a single-phase heat exchanger or a two-phase heat exchanger is used as the cooling jacket. The third is an indirect cooling method in which the heat generated by the CPU is transported to the outside of the chassis through flat heat pipes, and the condensation sections of the heat pipes are cooled by liquid. Verification tests have been conducted on real server racks equipped with these cooling techniques while advancing the five R&D subjects that constitute our liquid cooling system: the single-phase heat exchanger, the two-phase heat exchanger, high-performance flat heat pipes, nanofluid technology, and the plug-in connector. As a result, an energy saving of 50∼60% compared with a conventional air cooling system was achieved by the direct cooling technique with the single-phase heat exchanger.

Commentary by Dr. Valentin Fuster
2011;():451-459. doi:10.1115/IPACK2011-52067.

An effective method of reducing the temperature gradient in data centers (or “hot” and “cold” spots) is to minimize the maximum temperature at the rack exit plane. For a given rack heat load, one way to achieve this is to vary the chassis flow rates so that the rack exit temperature gradient is small. In this paper, a simple algorithm based on the energy equation is developed to set the chassis flow rates such that the rack exit temperature is uniform. When compared to a constant chassis flow rate algorithm, the proposed method is demonstrated through CFD simulations to achieve a significant reduction in the maximum chassis inlet temperature for the same CRAH flow rate.
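
The underlying energy balance can be written as q = ṁ·c_p·(T_exit − T_inlet), so a chassis flow rate that yields a chosen uniform exit temperature follows directly from the chassis heat load and inlet temperature. The Python helper below is a minimal sketch of that balance with illustrative air properties; it is not the authors’ algorithm.

    RHO_AIR = 1.16    # kg/m^3, air density near 30 C
    CP_AIR = 1007.0   # J/(kg K), specific heat of air

    def chassis_flow_m3_per_s(q_watts, t_inlet_c, t_exit_target_c):
        """Volumetric flow so the chassis exhausts at the target (uniform) exit temperature."""
        dt = t_exit_target_c - t_inlet_c
        if dt <= 0:
            raise ValueError("target exit temperature must exceed inlet temperature")
        m_dot = q_watts / (CP_AIR * dt)   # kg/s, from q = m_dot * cp * dT
        return m_dot / RHO_AIR

    # Example: 12 kW chassis, 22 C inlet, 37 C target exit temperature
    print(f"{chassis_flow_m3_per_s(12000.0, 22.0, 37.0):.3f} m^3/s")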

Commentary by Dr. Valentin Fuster
2011;():461-470. doi:10.1115/IPACK2011-52080.

Energy efficiency in data center operation depends on many factors, including power distribution, thermal load and consequent cooling costs, and IT management in terms of how and where IT load is placed and moved under changing request loads. Current methods provided by vendors consolidate IT loads onto the smallest number of machines needed to meet application requirements. This paper’s goal is to gain further improvements in energy efficiency by also making such methods ‘spatially aware’, so that load is placed onto machines in ways that respect the efficiency of both cooling and power usage, across and within racks. To help implement spatially aware load placement, we propose a model-based reinforcement learning method to learn and then predict the thermal distribution of different placements for incoming workloads. The method is trained with actual data captured in a fully instrumented data center facility. Experimental results showing notable differences in total power consumption for representative application loads indicate the utility of a two-level spatially-aware workload management (SpAWM) technique in which (i) load is distributed across racks in ways that recognize differences in cooling efficiencies and (ii) within racks, load is distributed so as to take into account cooling effectiveness due to local air flow. The technique is being implemented using online methods that continuously monitor current power and resource usage within and across racks, sense BladeCenter-level inlet temperatures, and understand and manage IT load according to the environment’s thermal map. Specifically, at the data center level, monitoring informs SpAWM about power usage and thermal distribution across racks. At the rack level, SpAWM workload distribution is based on power caps derived from maximum inlet temperatures determined by CRAC speeds and supply air temperature. SpAWM can be realized as a set of management methods running in VMWare’s ESXServer virtualization infrastructure. Its use has the potential of attaining up to a 32% improvement in the CRAC supply temperature requirement compared to non-spatially-aware techniques, which can lower the inlet temperature by 2∼3 °C; that is to say, the CRAC supply temperature can be increased by 2∼3 °C to save nearly 13%–18% of cooling energy.

Commentary by Dr. Valentin Fuster
2011;():471-477. doi:10.1115/IPACK2011-52088.

A software tool was developed to predict the transient cooling performance of data centers and to explore various alternatives in real time for data center design and management purposes. Cooling performance can be affected by factors such as room architecture, rack population and layout, connections between cooler fans and UPSs, chilled water pumps and UPSs, the size of the chilled water storage tank, etc. The available transient cooling runtime is mainly dictated by the system’s stored cooling capacity and the total load in the data center. This paper discusses the transient response of data centers to different design and failure scenarios and details a comprehensive and efficient approach for simulating this performance.

Commentary by Dr. Valentin Fuster
2011;():479-488. doi:10.1115/IPACK2011-52090.

Data center efficiency is critical to the successful operation of today’s large IT installations. Reducing infrastructure energy use allows an increase in IT carrying capacity and/or a reduction in operating costs. The cooling portion of the data center Power Usage Effectiveness (PUE) can represent a significant cost and energy burden to the data center. The use of containment hardware (hot aisle, chimney, or cold aisle containment) is a good step in reducing the data center PUE; however, the specifics of the implementation remain a challenge, and some legacy control strategies limit the efficacy of their use. The most typical control scheme in today’s data center uses the return airflow temperature to modulate a linked supply temperature and airflow. This control scheme is unsuitable for an advanced data center and limits the efficiency that can be gained with a containment strategy. But the optimal control scheme for a containment strategy remains a matter of discussion and debate. This paper reports on testing performed at our collaborative data center test lab facility in Munich, Germany, where we explored three different control designs for a containment strategy. The primary goal for energy savings in a containment strategy is to provide just enough cool air to the servers that the server fans are satisfied without causing any recirculation from the hot side of the containment. We investigated control based on temperature, pressure, and velocity measurements. The specifics of each are discussed, as are recommendations for choosing the appropriate controls. Practical considerations and system implementation recommendations are also shared. Each strategy can be made to work, but the pressure control scenario provided the best level of control.

Commentary by Dr. Valentin Fuster
2011;():489-496. doi:10.1115/IPACK2011-52114.

This paper discusses the preliminary design of a controller for computer room air conditioning (CRAC) unit fan speed based on server loads. The CRAC is paired with a rear door heat exchanger. The main purpose of implementing this controller is to reduce energy consumption and to gain finer control over the provision of cooling within a data center. Tools essential for developing this controller are CRACs with variable frequency drives (VFDs), BCMS power monitoring, data archiving software (PI from OSISOFT), and MATLAB. For this experiment, the zone of interest in the data center consists of 10 racks of IBM Blade Centers with a maximum electrical load of approximately 24 kW each and a total of 3360 nodes. Results are compared with the conventional cooling method to quantify energy savings and the corresponding chiller coefficient of performance (COP).

Commentary by Dr. Valentin Fuster
2011;():497-504. doi:10.1115/IPACK2011-52117.

In this paper we experimentally investigate the airflow distribution at the inlets of two opposing racks in a cold aisle. The racks have non-uniform heat loads and airflow requirements, creating a heterogeneous data center environment. The Computer Room Air Conditioning (CRAC) unit fan speed is set to meet the air requirement of the high-heat-density rack. The effect of perforated tile air velocity on the air distribution to both racks is studied using the particle image velocimetry (PIV) technique. PIV images are recorded at various rack heights corresponding to the locations of the servers in the rack. The images recorded at these locations are stitched together to construct a complete rack inlet airflow map. Three cases of rack airflow distribution are studied by varying the server workload and perforated tile flow rate. A significant change in the air distribution pattern is observed for the three cases investigated.

Topics: Data centers
Commentary by Dr. Valentin Fuster
2011;():505-510. doi:10.1115/IPACK2011-52127.

Typical data center architectures utilize a raised floor; cooling airflow is pumped into an under-floor plenum and exits through perforated floor tiles located in front of IT equipment racks. The under-floor space is also a convenient place to locate critical building infrastructure, such as chilled-water piping and power and network cabling. Unfortunately, the presence of such objects can disrupt the distribution of cooling airflow. While the effects of other design parameters, such as room layout, plenum depth, perforated tile type, and leakage paths, have been systematically studied and corresponding best practices outlined, there is no specific advice in the literature with regard to the effect of under-floor infrastructure on airflow distribution. This paper studies the effects of such obstructions primarily through CFD analyses of several layouts based on actual facilities. Additionally, corresponding scenarios are analyzed using a Potential Flow Model (PFM), which includes a recently proposed obstruction-modeling technique. It is found that under-floor obstructions significantly affect airflow distribution only when they are located very near perforated tiles and cooling units and occupy a substantial fraction of the total plenum depth.

Commentary by Dr. Valentin Fuster
2011;():511-517. doi:10.1115/IPACK2011-52128.

We present Avatar, a data center environmental advisory system for raised floor data centers. Using limited information, such as inlet and reference temperatures for IT equipment and basic floor plan geometry, Avatar produces recommendations for adjusting the operation of computer room air conditioners (CRACs) and the configuration of vent tiles in a data center so as to reduce excess provisioning of cooling and remove hot spots. Avatar reduces operating expenses by cooling the same load with less energy. It reduces capital expenses by recovering stranded cooling capacity that would otherwise have to be replaced.

Topics: Data centers
Commentary by Dr. Valentin Fuster
2011;():519-525. doi:10.1115/IPACK2011-52132.

Airside economizers that introduce outside air directly into the cold aisles or at the CRAC level have recently been considered for data centers to reduce the overall energy required to cool IT equipment. However, such designs limit the operational envelope of free cooling based on the required supply air temperature to the IT equipment. More studies are required to optimize airside economizer layouts to increase the operation time and hence the energy savings. This paper presents a case study of different outside air delivery configurations, including outside air introduced in the cold aisles, in the plenum close to the CRAC units’ supply side, at the return side of the CRAC units, and in the hot aisles. The temperature and flow fields are studied numerically and compared to each other. Mixing of the cooler outside air with the hot air is studied to determine the optimal local distribution of the outside air in a non-homogeneous data center to maximize natural cooling. The paper also quantifies the annual average performance of the outside air infrastructure to include the effects of seasonal variations in the ambient temperature.

Commentary by Dr. Valentin Fuster
2011;():527-534. doi:10.1115/IPACK2011-52136.

Potential-flow-based airflow and heat transfer models have been proposed as a computationally efficient alternative to the Navier-Stokes Equations for predicting the three-dimensional flow field in data center applications. These models are simple, solve quickly, and capture much of the fluid flow physics, but ignore buoyancy and frictional effects, i.e., rotationality, turbulence, and wall friction. However, a comprehensive comparison of the efficiency and accuracy of these methods versus more sophisticated tools, like CFD, is lacking. The main contribution of this paper is a study of the performance of potential-flow methods compared to CFD in eight layouts inspired by actual data center configurations. We demonstrate that potential-flow methods can be helpful in data center design and management applications.
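
For readers unfamiliar with the approach, a potential flow model obtains the velocity field from a scalar potential that satisfies Laplace’s equation, which is far cheaper to solve than the Navier-Stokes equations. The sketch below is a generic two-dimensional Jacobi iteration on a rectangular grid with fixed boundary potentials; it is illustrative only and is not the specific formulation benchmarked in the paper.

    import numpy as np

    # Potential-flow sketch: solve Laplace's equation for the potential phi, then
    # take the velocity components from its gradient (sign convention aside).
    nx, ny, h = 50, 30, 0.1
    phi = np.zeros((ny, nx))
    phi[:, 0] = 1.0    # illustrative "supply" boundary
    phi[:, -1] = 0.0   # illustrative "return" boundary

    for _ in range(5000):  # Jacobi iterations on interior points
        phi[1:-1, 1:-1] = 0.25 * (phi[1:-1, :-2] + phi[1:-1, 2:] +
                                  phi[:-2, 1:-1] + phi[2:, 1:-1])

    vy, vx = np.gradient(phi, h)
    print(f"max |vx| = {np.abs(vx).max():.3f}")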

Commentary by Dr. Valentin Fuster
2011;():535-539. doi:10.1115/IPACK2011-52138.

Redundancy is an important measure of an operation’s ability to withstand planned or unplanned system failures. While this concept is commonly used in power systems, redundancy can be extended to data center cooling systems, as well. We propose a rack-based redundancy metric for cooling performance that is similar in nomenclature to metrics for power systems, but also captures the local nature of data center cooling. This paper will explain how to compute this metric for general data center layouts and show how cooling redundancy can influence design choices when used in combination with typical measures of cooling coverage: inlet temperature and Capture Index.

Commentary by Dr. Valentin Fuster
2011;():541-551. doi:10.1115/IPACK2011-52140.

Infrastructure efficiency metrics, such as Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE), have gained much traction within the industry for evaluating data center energy efficiency. Gradually, however, as the lines between traditional IT systems in the data center and the facility infrastructure supporting the data center get blurred, adaptations to the usage of infrastructure efficiency metrics will be required. This paper presents three cases where holistic data center energy efficiency does not necessarily track infrastructure efficiency, implicitly emphasizing the need for new metrics that address the IT-facility infrastructure in holistic fashion.
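
For reference, the two metrics discussed above are defined as PUE = total facility power divided by IT equipment power, and DCiE = 1/PUE (often quoted as a percentage). A trivial sketch with illustrative numbers:

    def pue(total_facility_kw, it_kw):
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_kw

    def dcie(total_facility_kw, it_kw):
        """Data Center Infrastructure Efficiency: the reciprocal of PUE."""
        return it_kw / total_facility_kw

    print(pue(1500.0, 1000.0), round(dcie(1500.0, 1000.0), 3))  # 1.5 and ~0.667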

Commentary by Dr. Valentin Fuster
2011;():553-563. doi:10.1115/IPACK2011-52141.

As the energy footprint of data centers continues to increase, models that allow for “what-if” simulations of different data center design and management paradigms will be important. Prior work by the authors has described a multi-scale energy efficiency model that allows for evaluating the coefficient of performance of the data center ensemble (COPGrand ), and demonstrated the utility of such a model for purposes of choosing operational set-points and evaluating design trade-offs. However, experimental validation of these models poses a challenge because of the complexity involved with tailoring such a model for implementation to legacy data centers, with shared infrastructure and limited control over IT workload. Further, test facilities with dummy heat loads or artificial racks in lieu of IT equipment generally have limited utility in validating end-to-end models owing to the inability of such loads to mimic phenomena such as fan scalability, etc. In this work, we describe the experimental analysis conducted in a special test chamber and data center facility. The chamber, focusing on system level effects, is loaded with an actual IT rack, and a compressor delivers chilled air to the chamber at a preset temperature. By varying the load in the IT rack as well as the air delivery parameters — such as flow rate, supply temperature, etc. — a setup which simulates the system level of a data center is created. Experimental tests within a live data center facility are also conducted where the operating conditions of the cooling infrastructure are monitored — such as fluid temperatures, flow rates, etc. — and can be analyzed to determine effects such as air flow recirculation, heat exchanger performance, etc. Using the experimental data a multi-scale model configuration emulating the data center can be defined. We compare the results from such experimental analysis to a multi-scale energy efficiency model of the data center, and discuss the accuracies as well as inaccuracies within such a model. Difficulties encountered in the experimental work are discussed. The paper concludes by discussing areas for improvement in such modeling and experimental evaluation. Further validation of the complete multi-scale data center energy model is planned.

Commentary by Dr. Valentin Fuster
2011;():565-576. doi:10.1115/IPACK2011-52159.

In this paper, we present the Daffy data model and messaging framework for data centers. The model is a hybrid model, combining physical, structural, geometrical, and logical modeling techniques. The messaging scheme has excellent performance and allows for the loose coupling of the various framework components. The framework bridges the gap between facilities and IT domains and enables the holistic, cross-domain management of data centers. The framework supports rich visualization, cross-domain queries and sophisticated cross-domain autonomic control systems.

Topics: Data centers
Commentary by Dr. Valentin Fuster
2011;():577-583. doi:10.1115/IPACK2011-52165.

The inevitable increase in the heat dissipation of data center facilities is requiring more efficient approaches to data center operation. Dynamic cooling has been proposed as an approach for enhancing energy efficiency. Dynamic cooling involves close monitoring of the data center environment over time, using sensors, and taking real-time decisions on allocating cooling resources based on the location of hotspots and the concentration of workloads. For this approach, knowing the time it takes a facility to reach steady state after any variation is crucial for ensuring safe operation of the electronic equipment at all times, and that time is a function of thermal mass. The thermal mass of an object is a measure of its capacity to store heat, and the time it takes to dissipate that heat into the environment is a function of the material properties. In this study, we use a typical 2U server and explain a procedure for obtaining its thermal mass. The server is operated at different controlled power levels while measurements of fan speed, component temperatures, and inlet and outlet temperatures are taken over time. For the first set of experiments the server is kept inside a chamber, and for the second set it is kept in open space. Ultimately, the experimental measurements will be used to derive a compact model that approximates the thermal mass of different servers.
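
One common way to reduce such measurements to a compact model is a first-order lumped-capacitance fit, T(t) = T∞ + (T0 − T∞)·exp(−t/τ), where the time constant τ lumps the server’s thermal mass and its heat transfer to the air. The Python sketch below fits τ to synthetic data and is only an assumed form of such a compact model, not the authors’ procedure.

    import numpy as np

    # Lumped-capacitance sketch: estimate the thermal time constant tau from a warm-up curve.
    t = np.linspace(0.0, 1200.0, 200)          # s
    T_inf, T0, tau_true = 45.0, 25.0, 400.0    # illustrative steady and initial temps, and tau
    T_meas = T_inf + (T0 - T_inf) * np.exp(-t / tau_true)
    T_meas = T_meas + np.random.default_rng(1).normal(0.0, 0.05, t.size)  # synthetic noise

    # Linearize: log((T_inf - T) / (T_inf - T0)) = -t / tau, then fit the slope.
    y = np.log((T_inf - T_meas) / (T_inf - T0))
    tau_est = -1.0 / np.polyfit(t, y, 1)[0]
    print(f"estimated tau = {tau_est:.0f} s")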

Topics: Measurement
Commentary by Dr. Valentin Fuster
2011;():585-593. doi:10.1115/IPACK2011-52166.

The increase in the computing performance of electronic equipment is causing higher power consumption and, in turn, higher heat dissipation. On the larger scale, data centers that house hundreds of these individual pieces of electronic equipment foresee an inevitable increase in heat dissipation at the facility level. The cooling cost of these rooms is becoming a major challenge, where a specified inlet temperature must not be exceeded in order to ensure the safe and reliable operation of the electronic equipment. Most of the data centers in use today adopt the hot aisle-cold aisle configuration, where air is supplied through a plenum. The major problem with this configuration is the mixing of cold supply air from the tiles with hot exhaust air from the servers. This affects the efficiency of the cooling infrastructure and, in turn, the cost of operation. This study concentrates on the idea of sealing the cold aisle. Completely sealing the cold aisle is not practical due to pressure and noise problems, and therefore a perforated ceiling is proposed. A parametric study is conducted to examine the effects of perforated ceiling resistance and different CRAC (Computer Room Air Conditioning) airflow supply percentages on the inlet temperature of the racks. A metric is proposed to quantify the variation of inlet temperature across the height of a rack and for a given cold aisle.

Topics: Cooling
Commentary by Dr. Valentin Fuster
2011;():595-604. doi:10.1115/IPACK2011-52167.

Thermal Management optimization for data centers, including prediction of airflow and temperature distributions, is generally an extremely time-consuming process using full-scale CFD analysis. Reduced order models are necessary in order to provide real-time assessment of cooling requirements for data centers. The use of a simulation-based Artificial Neural Network (ANN) is being investigated as a predictive tool. A model for a basic hot aisle/cold aisle data center configuration was built and analyzed using the commercial software FloTHERM. The flow field and temperature distributions were obtained for 100 representative sets of operating conditions using the CFD package. The Latin Hypercube Sampling technique was employed to select values for three design variables: plenum height, percentage open area of the perforated tiles and air leakage fraction. The FloTHERM results were used to generate a database for the ANN training. The CFD results from 85 cases were used for training and 16 cases were used for validation. A multivariate mapping between the input design variables and output variables (individual tile flow rates and maximum rack temperatures) was obtained. Good agreement (0.5% average relative error) was obtained between the ANN model predictions and the CFD results. These preliminary results are promising and show that an ANN based model may yield an effective real-time thermal management design tool for data centers.
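
The general workflow described above (sample the design space, build a CFD database, train a network mapping design variables to thermal responses) can be sketched as follows. Random numbers stand in for the FloTHERM results, scikit-learn’s MLPRegressor stands in for the authors’ ANN, and the variable ranges are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Surrogate-model sketch. Inputs: plenum height (m), tile open-area fraction, leakage fraction.
    # Output: maximum rack temperature (C). Synthetic data replaces the CFD database.
    rng = np.random.default_rng(0)
    X = rng.uniform([0.3, 0.10, 0.0], [1.0, 0.50, 0.3], size=(100, 3))
    y = 30.0 + 10.0 * X[:, 2] - 5.0 * X[:, 1] + rng.normal(0.0, 0.2, 100)

    X_train, y_train, X_val, y_val = X[:85], y[:85], X[85:], y[85:]
    ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    ann.fit(X_train, y_train)

    rel_err = np.abs(ann.predict(X_val) - y_val) / y_val
    print(f"average relative error on held-out cases: {rel_err.mean():.2%}")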

Commentary by Dr. Valentin Fuster
2011;():605-610. doi:10.1115/IPACK2011-52192.

Although in most buildings the spatial allocation of cooling resources can be managed using multiple air handling units and an air ducting system, it can be challenging for an operator to leverage this capability, partly because of the complex interdependencies between the different control options. This is particularly important for data centers, where cooling is a major cost while the sufficient allocation of cooling resources has to ensure the reliable operation of mission-critical information processing equipment. It has been shown that thermal zones can provide valuable decision support for optimizing cooling. Such thermal zones are generally defined as the region of influence of a particular cooling unit or cooling “source” (such as an air conditioning unit (ACU)). In this paper we show results using a statistical approach, in which we leverage real-time sensor data to obtain thermal zones in real time. Specifically, we model the correlations between temperatures observed from sensors located at the discharge of an ACU and the other sensors located in the room. Outputs from the statistical solution can be used to optimize the placement of equipment in a data center, investigate failure scenarios, and verify that a proper cooling solution has been achieved.
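
As a rough illustration of the correlation idea, the Python sketch below assigns each room sensor to the ACU whose discharge temperature time series it correlates with most strongly. The time series are synthetic and the sensor counts are arbitrary; this is a plausible reading of the approach, not the authors’ statistical model.

    import numpy as np

    # Correlation-based thermal zone sketch with synthetic temperature time series.
    rng = np.random.default_rng(0)
    n_samples, n_acus, n_sensors = 500, 3, 12
    acu_discharge = rng.normal(15.0, 1.0, (n_acus, n_samples))   # ACU discharge temperatures

    true_zone = rng.integers(0, n_acus, n_sensors)               # hidden ground truth
    room_sensors = acu_discharge[true_zone] + rng.normal(0.0, 0.3, (n_sensors, n_samples))

    corr = np.corrcoef(np.vstack([acu_discharge, room_sensors]))
    zone = corr[n_acus:, :n_acus].argmax(axis=1)  # ACU with the highest correlation per sensor
    print(zone, bool((zone == true_zone).all()))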

Commentary by Dr. Valentin Fuster
2011;():611-616. doi:10.1115/IPACK2011-52206.

The increased focus on green technologies and energy efficiency, coupled with the insatiable desire of IT equipment customers for more performance, has driven manufacturers to deploy energy-efficient technologies in data centers. This paper describes a technique to achieve significant energy savings by preventing the cold and hot air streams within the data center from mixing. More specifically, techniques are described that separate the cool supply air to the server racks from the hot exhaust air that returns to the air conditioning units. This separation can be achieved by three types of containment systems: cold aisle containment, hot aisle containment, and server rack exhaust chimneys. The advantages and disadvantages of each technique are outlined. To show the potential for energy efficiency improvements, a case study of deploying a cold aisle containment solution in an 8944 ft² data center is presented. This study shows that 59% of the energy required for the computer room air conditioning (CRAC) units used in a traditional open-type data center could be saved.

Commentary by Dr. Valentin Fuster
2011;():617-622. doi:10.1115/IPACK2011-52207.

It is now common for data center managers to question the impact on server energy usage of two recent factors: (1) the rise in the data center inlet air temperature to servers per the 2008 ASHRAE guidelines, and (2) the fan speed increase associated with the use of rack-level heat exchangers such as Rear Door Heat Exchangers. To help acquire a deeper understanding of the relevant issues, a system-level thermal test was built on the IBM New York data center benchmark floor, consisting of a standard 19″ rack filled with 39 3.0 GHz 1U servers that dissipated between 10 and 17 kW depending on the extent of server utilization. Fan speed, chip temperature metrics, and server power data were collected using product debug codes and server-level programs. A simulated air heat load was installed directly in front of the server rack to allow the inlet air temperature into the servers to be varied from 20 °C to 32 °C. Two different rack-level configurations were considered for the experiments: (i) a perforated front door and no door at the rear, and (ii) a perforated front door and a Rear Door Heat Exchanger at the rear. An exerciser program was used to vary the CPU utilization from idle to 70%, which represents a typical data center workload. Data were collected for 19 of the 39 servers (the remainder were in use by the Benchmark Lab) for the two rack configurations, four inlet server air temperatures, and two chip exerciser settings, i.e., 16 experiments. For the 70% exerciser setting (typical operation) and the baseline rack configuration without rack-level heat exchangers, the rise in server power for an increase in inlet air temperature was 5.2% for the 20 °C to 27 °C change and 17% for the 20 °C to 31 °C change. For the 70% exerciser setting (typical), the increase in server power from the use of rack-level heat exchangers (Rear Door Heat Exchanger) was less than 1.3% for all conditions. Given the broad range of fan speed algorithms and cooling hardware in server products on the market, and their change with each generation, significant further study will be required to characterize each category of systems for these conditions. However, the present study provides a template for quantifying server energy usage in a context that data center managers can understand and use.

Commentary by Dr. Valentin Fuster
2011;():623-627. doi:10.1115/IPACK2011-52210.

Information Technology (IT) data centers consume a large amount of electricity in the US and worldwide. Cooling has been found to contribute about one third of this energy use. The two primary contributors to data center cooling energy use are the refrigeration chiller (about 50% of cooling) and the Computer Room Air Conditioning units (about 33% of cooling). This paper focuses on a data center configuration that eliminates the use of the chiller plant, thereby yielding substantial energy savings. One method of eliminating the chiller plant is to duct outdoor air directly into a data center with some amount of conditioning (particulate filtration). This configuration can be called a Direct Air Side Economizer (ASE). Since computer equipment is usually designed with the assumption that rack air inlet temperatures are in the 15–32 °C range, the use of ASE is constrained to those geographies where the outdoor air conditions allow such direct air use. One method to reduce the sensible air temperature of the outdoor air being ducted into a data center room is to evaporate water directly into the air stream. Such a method can be called an Evaporative Air Side Economizer (EASE). This paper discusses the benefits of EASE data center configurations in the context of the climate in the USA and the realizable energy savings compared with traditional chiller-plant-based cooling loops. Hour-by-hour outdoor air temperature data for a typical year and psychrometric charts are utilized in conjunction with simple transfer functions to model cooling via evaporative media. Phoenix, a US city with a hot climate, is used to illustrate this relatively new method of data center cooling. A comparison to the traditional chiller-plant-based approach resulted in about 30% energy savings at the data center level.
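
The core transfer function for direct evaporative media is the saturation-effectiveness relation, in which the leaving dry-bulb temperature approaches the wet-bulb temperature: T_out = T_db − η·(T_db − T_wb). The sketch below applies it to one illustrative hot, dry hour; the effectiveness value and temperatures are assumptions, not data from the paper.

    def evap_outlet_temp_c(t_dry_bulb_c, t_wet_bulb_c, effectiveness=0.85):
        """Leaving dry-bulb temperature of direct evaporative media (saturation-effectiveness model)."""
        return t_dry_bulb_c - effectiveness * (t_dry_bulb_c - t_wet_bulb_c)

    # Illustrative desert-summer hour: 40 C dry bulb, 21 C wet bulb
    print(f"{evap_outlet_temp_c(40.0, 21.0):.1f} C")  # about 23.9 C, within typical rack inlet limits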

Commentary by Dr. Valentin Fuster
2011;():629-635. doi:10.1115/IPACK2011-52211.

Humidity control in buildings is important for several reasons, ranging from ensuring the comfort of occupants to mold control. While the optimum humidity range can differ depending on the function of a particular facility, data centers require especially tight control of humidity and dew point. For example, at low humidity, electrostatic discharge (ESD) might impose a significant risk to the computing equipment, while at high humidity levels hardware failures are more probable due to the growth of conductive filaments or corrosion of circuit boards. In this paper we present a detailed comparison of data centers at several geographical locations, where we have measured humidity and temperature distributions over extended periods of time. The data are analyzed in terms of spatial and temporal dew point variations. We derive detailed dew point “maps” of the respective data centers, and the impact on reliability and energy efficiency is discussed.
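
For reference, a dew point map can be derived from dry-bulb temperature and relative humidity measurements using the Magnus approximation. The short sketch below uses the commonly quoted Magnus coefficients; the sample conditions are illustrative.

    import math

    def dew_point_c(t_c, rh_percent, a=17.62, b=243.12):
        """Dew point (C) from temperature (C) and relative humidity (%) via the Magnus formula."""
        gamma = math.log(rh_percent / 100.0) + a * t_c / (b + t_c)
        return b * gamma / (a - gamma)

    print(f"{dew_point_c(24.0, 45.0):.1f} C")  # roughly 11 C for a typical cold-aisle condition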

Commentary by Dr. Valentin Fuster
2011;():637-644. doi:10.1115/IPACK2011-52213.

The concept of thermal zones has been proposed in recent years as a means of providing operational information regarding which physical areas (or zones) in a data center are supplied by the different air conditioning units, in order to gain insights into the optimal use of cooling equipment [1, 2]. One methodology for computing such thermal zones consists of explicit tracing of streamlines of an air velocity field [2]. In the present work, we propose an alternative technique that does not require explicit tracing of streamlines. Specifically, the problem of identifying thermal zones is formulated as a boundary value problem for convective transport. Using a numerical method capable of solving convection-dominated problems accurately leads to identification of the zones by simple postprocessing of the numerical solution of the boundary value problem, making the procedure convenient to apply, especially in three-dimensional domains.

Commentary by Dr. Valentin Fuster
2011;():645-652. doi:10.1115/IPACK2011-52234.

Energy consumption has become a critical issue for data centers, triggered by the rise in energy costs, volatility in the supply and demand of energy, and the widespread proliferation of power-hungry information technology (IT) equipment. Since nearly half the energy consumed in a data center (DC) goes towards cooling, much of the effort to minimize energy consumption in DCs has focused on improving the efficiency of cooling strategies by optimally provisioning the cooling power to match the heat dissipation in the entire DC. However, at a more granular level within the DC, the large range of heat densities of today’s IT equipment makes the task of provisioning cooling power at the level of individual computer room air conditioning (CRAC) units much more challenging. In this work, we employ utility functions to present a principled and flexible method for determining the optimal settings of CRACs for joint management of power and temperature objectives at a more granular level within a DC. Such provisioning of cooling power to match the heat generated at a local level requires knowledge of thermal zones, i.e., the regions of DC space cooled by a specific CRAC. We show how thermal zones can be constructed for arbitrary settings of CRACs using potential flow theory. As a case study, we apply our methodology in a 10,000 sq. ft commercial DC using actual measured conditions and evaluate the usefulness of the method by quantifying the possible energy savings in this DC.

Commentary by Dr. Valentin Fuster
2011;():653-661. doi:10.1115/IPACK2011-52265.

The increasing electrical power consumption of data centers has raised global concerns about sustainable energy and global warming; innovative ideas are therefore required to reduce energy consumption. The authors propose the use of heat pipes, well-known passive heat transfer devices that are well suited to energy-saving applications in current data center cooling systems. In this paper, the design and economics of a novel thermal control system for data center cooling using heat-pipe-based cold energy storage are proposed and discussed. The cold water storage system is explained and sized for a data center with a heat output of 8,800 kW. Essentially, the cold energy storage helps to downsize the chiller and decrease its runtime, which saves electricity costs and decreases greenhouse gas emissions resulting from electricity generation. The proposed cold energy storage system can be retrofitted into, or connected to, existing data center facilities without major design changes. A water-based cold energy storage system is compact, provides short-term storage (hours to days), and is suitable for small to large data centers located where the yearly average temperature is below the cold storage water temperature (∼25 °C). The cold water storage system is sized on the basis of meteorological conditions in North America. As an outcome of the thermal and cost analysis, an optimum cold energy storage system should be designed to handle 60% of the yearly data center load. The proposed system can be easily integrated into existing conventional systems without any significant infrastructure changes. Preliminary results from the experimental system designed to test the ice formation potential of the heat-pipe-based cold energy storage system are promising and validate the proposed concept.

Commentary by Dr. Valentin Fuster

Materials and Processes

2011;():663-666. doi:10.1115/IPACK2011-52024.

An electroless Ni/Pd/Au plated electrode is expected to be used as an electrode material for lead-free solder to improve joint reliability. The aim of this study is to investigate the effect of the thickness of the Pd layer on the joint properties of lead-free solder joints with electroless Ni/Pd/Au plated electrodes. Solder ball joints were fabricated with Sn-3Ag-0.5Cu (mass%) lead-free solder balls and electroless Ni/Pd/Au and Ni/Au plated electrodes. The ball shear force and the microstructure of the joint were investigated. A (Cu,Ni)6Sn5 reaction layer formed at the joint interface in all specimens. The thickness of the reaction layer decreased with increasing thickness of the Pd layer. In the joint with a 0.36 μm thick Pd layer, a residual Pd layer was observed at the joint interface, and the impact shear force decreased compared with that of joints without a residual Pd layer. In contrast, when the thickness of the Pd layer was less than 0.36 μm, no Pd layer remained at the joint interface and the impact shear force improved. The impact shear force of the joint with the electroless Ni/Pd/Au plated electrode was higher than that with the electroless Ni/Au one.

Commentary by Dr. Valentin Fuster
2011;():667-672. doi:10.1115/IPACK2011-52025.

In joints between eutectic Sn-Pb solder and a lead-free Ni/Pd/Au electrode, degradation of the mechanical properties of the solder due to dissolution of Au and Pd into the solder is a concern. In this study, the effect of aging on the tensile properties and microstructures of eutectic Sn-Pb solder with small amounts of Au and Pd added was investigated. In as-cast solders with both Au and Pd added, the tensile strength increases with increasing Au and Pd content. A similar tendency was observed after aging at 100°C for 1000 h. The effect of aging on elongation was relatively small, and elongation degraded when brittle (Pd,Au)Sn4 phases formed in the solder. In solders with Au ranging from 1 to 5 mass%, regardless of the aging conditions investigated, the tensile strength is stable at approximately 50 MPa before aging and 45 MPa after aging. The effect of aging on the improvement of elongation was negligible, and elongation degraded when rod-shaped AuSn4 formed in the solder. On the basis of the microstructural observations, it was clarified that strengthening by dispersion of (Pd,Au)Sn4 phases outweighs softening by microstructure coarsening upon aging when the Au and Pd contents are 2 mass% and 1 mass% or above, respectively.

Commentary by Dr. Valentin Fuster
2011;():673-676. doi:10.1115/IPACK2011-52026.

The erosion behavior of plasma-nitrided SUS304 stainless steel in molten lead-free solder was examined. Plasma nitriding treatment was applied to the surface of the SUS304 steel. The thickness of the nitrided layer was approximately 17 μm, and the layer mainly consists of Fe4N, CrN, and Cr2N. Erosion of the nitrided SUS304 stainless steel was observed after an erosion test in molten Sn-3Ag-0.5Cu (mass%) solder at 450°C for 100 h. On the basis of the microstructure observations, it was found that Sn diffusion into the nitrided layer occurred in the non-eroded areas. This result shows that Sn diffusion into the nitrided layer induces erosion of plasma-nitrided stainless steel.

Commentary by Dr. Valentin Fuster
2011;():677-684. doi:10.1115/IPACK2011-52048.

Both the mechanical and electrical properties of electroplated copper films used for interconnections were investigated experimentally, considering the change of their micro texture caused by heat treatment. The fracture strain of the film annealed at 400°C increased from about 3% to 15%, and its yield stress decreased from about 270 MPa to 90 MPa. In addition, it was found that two different fatigue fracture modes appeared in the film: a typical ductile fracture mode and a brittle one. When brittle fracture occurred, a crack propagated along weak or porous grain boundaries which were formed during electroplating. The brittle fracture mode disappeared after annealing at 300°C. These results clearly indicate that the mechanical properties of electroplated copper thin films vary drastically depending on their micro texture. The electrical reliability of the electroplated copper thin film interconnections was also investigated. The interconnections used for the electromigration tests were made by a damascene process. An abrupt fracture mode due to local fusion appeared in the as-electroplated interconnections. Since the fracture rate increased almost linearly with the square of the applied current density, this fracture mode was dominated by local Joule heating. It appears that local current concentration occurred around the porous grain boundaries. The life of the interconnections improved drastically after annealing at 400°C because of the increase in the average grain size and the improvement of the quality of the grain boundaries in the annealed interconnections. However, stress-induced migration occurred in the interconnections annealed at 400°C because of the high tensile residual stress caused by the surrounding oxide film constraining the densification of the films during annealing. Therefore, it is very important to control the crystallographic quality of electroplated copper films to improve the reliability of thin film interconnections. The quality of the grain boundaries can be evaluated by applying EBSD (Electron Back Scatter Diffraction) analysis. Two new experimentally determined parameters are proposed for evaluating the quality of grain boundaries quantitatively. It was confirmed that the crystallographic quality of grain boundaries can be evaluated quantitatively by using these two parameters, making it possible to estimate both the strength and the reliability of the interconnections.

Commentary by Dr. Valentin Fuster
2011;():685-690. doi:10.1115/IPACK2011-52058.

Electroplated copper thin films have begun to be applied not only to interconnections in printed wiring boards but also to thin film interconnections and TSVs (Through Silicon Vias) in semiconductor devices because of their low electrical resistivity and high thermal conductivity. The electrical reliability of electroplated copper interconnections was therefore investigated experimentally, using in-house electroplated copper thin film interconnections made by a damascene process for electromigration tests. The applied current density during the tests was varied from 1 MA/cm² to 10 MA/cm². Abrupt fracture caused by local fusion was often observed in the as-electroplated interconnections within a few hours of testing. Since there were many porous grain boundaries in the as-electroplated thin films, high local Joule heating is considered to have caused fusion at one of these porous grain boundaries; indeed, the failure rate was confirmed to increase linearly with the square of the applied current density. In contrast, the diffusion of copper atoms caused by electromigration was enhanced significantly when the film was annealed at 400°C, and many voids and hillocks were observed on the surfaces. This change of fracture mode clearly indicates an improvement in the crystallographic quality of the annealed film. It was also observed that stress-induced migration was activated substantially in the annealed film: large voids and hillocks grew during storage of the film even at room temperature without any applied current. This stress-induced migration was caused by an increase in residual tensile stress of about 200 MPa in the annealed film. It was also found that sulfur atoms segregated in the grown hillocks, although no sulfur was detected by EDX in the initial as-electroplated interconnections or in other areas of the annealed thin film interconnections. Thus, hillock formation in the annealed interconnections was enhanced by the segregation of sulfur atoms, which are thought to have been introduced into the films during electroplating. Therefore, it is very important to control the micro texture, the residual stress, and the sulfur concentration in electroplated copper thin film interconnections to assure a stable life, in other words, to eliminate both sudden brittle fracture and time-dependent degradation caused by residual stress in the thin film interconnections.

Commentary by Dr. Valentin Fuster
2011;():691-700. doi:10.1115/IPACK2011-52070.

This paper presents the influence of the microstructure on crack propagation in lead-free solder joints. The authors' group has studied the Manson-Coffin law for lead-free solder joints using isothermal fatigue tests and FEM analysis in order to establish a practicable evaluation of the thermal fatigue life of solder joints, for example for Sn-Cu-Ni solder, which is attractive because it reduces leaching in the flow-soldering process and lowers material cost. However, even when the same loading was applied to the solder joints of BGA test pieces, there was a large dispersion in fatigue life, and even after accounting for differences in joint shape, the range of this dispersion could not be explained sufficiently. In this study, the fatigue crack propagation modes in the solder joints were investigated, and an internal fatigue crack mode and an interfacial fatigue crack mode were identified; a tendency toward shorter fatigue life in the interfacial fatigue mode was confirmed. To clarify the mechanism of these fatigue crack modes, the crystal grain size in the solder joints was investigated before and after the fatigue tests. Furthermore, the mechanism was verified using FEM models that take the crystal grain size into account. First, FEM models were built with element sizes matched to the average crystal grain size. Second, the inelastic strain ranges in each FEM model were examined. As a result, it was shown that the influence of the coarseness and density of the crystal grains on fatigue crack progression can be evaluated. In addition, the microstructure of solder joints in large-scale electronic devices was observed, and an FEM model was built based on the observations; it was shown that the influence of crystal grain directionality on fatigue crack progression can also be evaluated.
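
The life evaluation referred to above rests on the Manson-Coffin relation between the inelastic strain range per cycle and the number of cycles to failure. The sketch below shows the generic form of that relation with placeholder constants; these are not the fitted values from the paper.

    # Generic Manson-Coffin sketch: delta_eps_in * Nf**alpha = C
    # ALPHA and C are placeholders for illustration, not the paper's fitted values.
    ALPHA = 0.5   # assumed fatigue ductility exponent
    C = 0.3       # assumed fatigue ductility coefficient

    def cycles_to_failure(inelastic_strain_range, alpha=ALPHA, c=C):
        """Fatigue life Nf for a given inelastic strain range per cycle."""
        return (c / inelastic_strain_range) ** (1.0 / alpha)

    # A factor-of-two difference in local strain range (e.g. coarse vs. fine
    # grain regions in the FEM models) maps to a factor-of-four difference in
    # predicted life when alpha = 0.5:
    print(cycles_to_failure(0.02), cycles_to_failure(0.01))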

Commentary by Dr. Valentin Fuster
2011;():701-705. doi:10.1115/IPACK2011-52072.

The effect of citric-acid surface modification on the bond strength of the solid-state bonded interface between tin and copper has been investigated by SEM observation of the interfacial microstructures and fractured surfaces. Citric-acid surface modification was carried out in a vacuum chamber at a bonding temperature of 383–473 K and a bonding pressure of 7 MPa (bonding time: 1800 s). The citric-acid surface modification lowered by 70 K the bonding temperature at which sound joints could be obtained, and a bond strength comparable to that of the base metal was achieved.

Topics: Copper , Bond strength
Commentary by Dr. Valentin Fuster
2011;():707-712. doi:10.1115/IPACK2011-52104.

The influence of joint size on the low cycle fatigue characteristics of Sn-Ag-Cu has been investigated using micro-size joint specimens fabricated from solder balls. Although the effect of joint size on crack initiation life is not obvious at 298 K, reducing the joint size changes the cyclic strain-hardening behavior and the fracture behavior, which induces a reduction in life at 398 K. With decreasing size, the failure mechanism changes from transgranular fracture to intergranular fracture at high-energy grain boundaries that are formed by high-angle boundary formation following dynamic recovery throughout the joint. The failure life is therefore greatly reduced, since complete failure occurs simultaneously with crack initiation at the grain boundaries; this effect is more pronounced at higher temperatures.

Commentary by Dr. Valentin Fuster
2011;():713-718. doi:10.1115/IPACK2011-52107.

The influence of the cyclic strain-hardening exponent on the fatigue ductility exponent of Sn-Bi solid solution alloys and Sn-Ag-Cu micro solder joints was investigated. In the Sn-Bi solid solution alloys, the fatigue ductility exponent in the Coffin-Manson law was confirmed to increase with a decrease in the cyclic strain-hardening exponent. In the Sn-Ag-Cu miniature solder joints, on the other hand, the fatigue ductility exponent increases with a rise in temperature and with strain holding. Thus, the fatigue ductility exponent is closely related to the cyclic strain-hardening exponent: the former increases as the latter decreases with rising temperature and with the growth of intermetallic compound particles during strain holding. These results differ from the creep damage mechanism (grain boundary fracture), which is the main reason for the reduction in life of large-size specimens.
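
A classical link between the two exponents is Morrow's relation, c = -1/(1 + 5n'), where n' is the cyclic strain-hardening exponent and c the (signed) fatigue ductility exponent. Whether this particular correlation is the one used in the paper is not stated in the abstract, so the sketch below is a textbook illustration only.

    # Morrow's textbook relation between the cyclic strain-hardening exponent n'
    # and the fatigue ductility exponent c (illustrative; not necessarily the
    # correlation applied in the paper): c = -1 / (1 + 5*n')
    def fatigue_ductility_exponent(n_prime):
        return -1.0 / (1.0 + 5.0 * n_prime)

    for n_prime in (0.05, 0.10, 0.20):
        # as n' decreases, |c| increases, i.e. life falls off faster with strain
        print(n_prime, round(fatigue_ductility_exponent(n_prime), 3))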

Commentary by Dr. Valentin Fuster
2011;():719-723. doi:10.1115/IPACK2011-52110.

Electromigration current densities in Cu and Al lines on a silicon die exceed 1.0 × 10⁶ A/cm². Solder joints, however, can only withstand electromigration current densities below about 1.0 × 10⁴ A/cm², so electromigration in solder joints will become a problem in semiconductor packages in the near future. Previous studies demonstrated that Cu-core solder balls increased the electromigration lifetime and led to better current stability at temperatures below 423 K, because electrons flow through the Cu cores, reducing the current density on the cathode side, where electromigration occurs. In the present study, we focused on the reliability of solder joints in a combined environment by examining the effect of thermal cycle tests on the current in a new test sample. A new test sample was made for evaluating the joining reliability of Cu-core solder balls in a combined environment. In initial tests, this sample exhibited results similar to those observed in previous studies. Cu-core solder balls subjected to cyclic testing at 233/398 K with a current density of 1.0 × 10⁴ A/cm² exhibited lower reliabilities than when no current was applied. Examination of cross-sections of the solder balls after reliability testing revealed that the combined environment accelerated the growth of intermetallic compounds and cracks in the joining region. In the combined environment, the Cu-core balls were converted into intermetallic compounds on the anode side. This phenomenon is thought to occur because of the different electrical resistivities of the Cu-Sn intermetallic compounds.
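
Electromigration lifetimes of the kind compared here are conventionally described by Black's equation, MTTF = A·J⁻ⁿ·exp(Ea/kT). The sketch below uses assumed values for A, n, and Ea purely to show how current density and temperature enter; none of these constants come from the paper.

    import math

    # Black's-equation sketch (A, n, and Ea are assumed, not fitted values):
    #   MTTF = A * J**(-n) * exp(Ea / (k * T))
    K_BOLTZMANN_EV = 8.617e-5  # eV/K

    def black_mttf(j_A_per_cm2, T_kelvin, A=1.0e9, n=2.0, Ea_eV=0.8):
        """Mean time to failure in arbitrary units (A is a placeholder)."""
        return A * j_A_per_cm2 ** (-n) * math.exp(Ea_eV / (K_BOLTZMANN_EV * T_kelvin))

    # For fixed material parameters, raising J by 100x shortens the predicted
    # MTTF by 100**n = 1e4x when n = 2:
    print(black_mttf(1e6, 398) / black_mttf(1e4, 398))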

Commentary by Dr. Valentin Fuster
2011;():725-732. doi:10.1115/IPACK2011-52115.

A Cu-cored solder joint is a micro-joint structure in which a Cu sphere is encased in solder. It provides a more accurately controlled joint height and has low thermal and electrical resistance. In a previous paper, we examined the thermal fatigue life of a Cu-cored solder ball grid array (BGA) joint through actual measurements and crack propagation analysis and found that the thermal fatigue life of a Cu-cored solder BGA joint is about twice that of a conventional joint. In this paper, we describe the impact strength of a Cu-cored solder BGA joint determined by conducting an impact bending test, a technique for measuring the impact strength of a micro solder joint. The method was developed by Yaguchi et al., who confirmed that it is an easier and more accurate way of measuring impact strength than the board-level drop test. First, we simulated the impact bending test by finite element analysis (FEA) and calculated the solder strains of both Cu-cored solder joints and conventional joints. The results indicated that the maximum solder strain of a Cu-cored solder joint during the impact bending test was slightly smaller than that of a conventional joint. The solder volume of the Cu-cored solder joint is smaller than that of a conventional joint, while its joint stiffness is larger; the former increases the solder strain of the Cu-cored solder joint and the latter decreases it, and the balance of the two yields a maximum solder strain slightly smaller than in a conventional joint. Based on these results, the impact strength of the Cu-cored solder joint was predicted to be the same as or higher than that of a conventional joint. We therefore measured the impact strengths of a Cu-cored solder joint and a conventional joint using the impact bending test and confirmed that the impact strength of the Cu-cored solder joint was indeed the same as or higher. Accordingly, a Cu-cored solder BGA joint is a micro-joint structure that improves thermal fatigue life without decreasing impact strength. Moreover, we investigated whether the use of Cu-cored solder in a flip-chip (FC) joint improves its reliability and found that the stress in an insulating layer on the Si die surface was reduced by using a Cu-cored solder FC joint. This is because bending deformation of the Cu land occurs and the difference in thermal deformation between the Si chip and the Cu land becomes small. Accordingly, the Cu-cored solder FC joint is a suitable structure for improving the reliability of a low-strength insulating layer.

Commentary by Dr. Valentin Fuster
2011;():733-748. doi:10.1115/IPACK2011-52184.

The microstructure, mechanical response, and failure behavior of lead-free solder joints in electronic assemblies evolve continuously when exposed to isothermal aging and/or thermal cycling environments. Over the past several years, we have demonstrated that the material behavior variations of Sn-Ag-Cu (SAC) lead-free solders during room temperature aging (25 °C) and elevated temperature aging (50, 75, 100, 125, and 150 °C) were unexpectedly large and universally detrimental to reliability. The measured stress-strain data demonstrated large reductions in stiffness, yield stress, ultimate strength, and strain to failure (up to 50%) during the first 6 months after reflow solidification. Even more dramatic evolution was observed in the creep response of aged solders, where up to 100X increases were found in the steady state (secondary) creep strain rate (creep compliance) of lead-free solders that were simply aged at room temperature; for elevated temperature aging at 125 °C, the creep strain rate changed even more dramatically (up to a 10,000X increase). There is much interest in the industry in establishing optimal SAC-based lead-free solder alloys that minimize aging effects and thus enhance thermal cycling and elevated temperature reliability. During the past year, we have extended our previous studies to include several doped SAC alloys (SAC-X), in which the standard SAC alloys have been modified with small percentages of one or two additional elements (X). Materials under consideration include SAC0307-X, Sn-0.7Cu-X, SAC305-X, SAC3595-X, and SAC3810-X. The use of dopants (e.g. Bi, In, Ni, La, Mg, Mn, Ce, Co, Ti, etc.) has become widespread for enhancing shock/drop reliability, and we have extended this approach to examine the ability of dopants to reduce the effects of aging and extend thermal cycling reliability. In the current paper, we concentrate on results for SACX™, which has the composition Sn-0.3Ag-0.7Cu-X with X = 0.1Bi. We have performed aging under 5 different conditions, including room temperature (25 °C) and four elevated temperatures (50, 75, 100, and 125 °C), and have extended the duration of aging in our experiments to up to 12 months for selected alloys. Variations of the mechanical and creep properties (elastic modulus, yield stress, ultimate strength, creep compliance, etc.) have been observed. We have correlated the aging results for the doped SAC-X alloy with our prior data for the “standard” lead-free alloys SACN05 (SAC105, SAC205, SAC305, SAC405). The doped SAC-X alloy shows improvements (reductions) in the aging-induced degradation in stiffness, strength, and creep rate when compared to SAC105, even though it has a lower silver content. In addition, the doped SAC-X alloy has been observed to reach a stabilized microstructure more rapidly when aged. Mathematical models for the observed aging variations have been established so that the variation of the stress-strain and creep properties can be predicted as a function of aging time and aging temperature.
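
As a rough illustration of what an aging model of the kind mentioned in the closing sentence can look like, the sketch below uses a saturating power law in aging time with an Arrhenius acceleration in aging temperature. The functional form and every constant in it are assumptions chosen for illustration; they are not the authors' fitted model or data.

    import math

    # Generic aging-model sketch (assumed form and constants, NOT the authors'
    # fitted model): a property such as effective modulus degrades toward a
    # plateau, with a characteristic time that shortens at higher aging
    # temperature via an Arrhenius factor.
    K_BOLTZMANN_EV = 8.617e-5  # eV/K
    T_REF = 298.0              # K, reference (room temperature) aging

    def aged_modulus_GPa(t_days, T_aging_K, E0=50.0, d_max=0.4,
                         tau_ref_days=100.0, Ea_eV=0.4, n=0.5):
        """Illustrative aged stiffness after t_days at T_aging_K."""
        tau = tau_ref_days * math.exp((Ea_eV / K_BOLTZMANN_EV)
                                      * (1.0 / T_aging_K - 1.0 / T_REF))
        degraded_fraction = d_max * (1.0 - math.exp(-(t_days / tau) ** n))
        return E0 * (1.0 - degraded_fraction)

    print(aged_modulus_GPa(180, 298))  # 6 months at 25 C: partial degradation
    print(aged_modulus_GPa(180, 398))  # 6 months at 125 C: near the plateau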

Commentary by Dr. Valentin Fuster
2011;():749-761. doi:10.1115/IPACK2011-52209.

In this work, the viscoplastic mechanical response of a typical underfill encapsulant has been characterized via rate-dependent stress-strain testing over a wide temperature range and creep testing over a large range of applied stress levels and temperatures. A specimen preparation procedure has been developed to manufacture 80 × 5 mm uniaxial tension test samples with a specified thickness of 0.5 mm. The test specimens are dispensed and cured with production equipment using the same conditions as in actual flip chip assembly, and no release agent is required to extract them from the mold. Using the manufactured test specimens, a microscale tension-torsion testing machine was used to evaluate the stress-strain and creep behavior of the underfill material as a function of temperature. Stress-strain curves have been measured at 5 temperatures (25, 50, 75, 100, and 125 °C) at strain rates spanning over 5 orders of magnitude. In addition, creep curves have been evaluated for the same 5 temperatures at several stress levels. With the obtained mechanical property data, several viscoelastic and viscoplastic material models have been fit to the data, and optimum constitutive models for subsequent use in finite element simulations have been determined.
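
One common viscoplastic form used in fits of this kind is the Garofalo (hyperbolic-sine) steady-state creep law; whether it is among the models actually selected here is not stated in the abstract. The sketch below shows the form with placeholder constants only.

    import math

    R_GAS = 8.314  # J/(mol*K)

    def steady_creep_rate(sigma_MPa, T_kelvin,
                          A=1.0e3, alpha=0.1, n=3.0, Q_J_per_mol=80.0e3):
        """Garofalo steady-state creep law (placeholder constants, not the
        paper's fitted underfill parameters):
            d(eps)/dt = A * sinh(alpha*sigma)**n * exp(-Q/(R*T))"""
        return (A * math.sinh(alpha * sigma_MPa) ** n
                * math.exp(-Q_J_per_mol / (R_GAS * T_kelvin)))

    # Creep rate climbs steeply with both stress and temperature:
    print(steady_creep_rate(10.0, 298.0))
    print(steady_creep_rate(30.0, 398.0))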

Commentary by Dr. Valentin Fuster
2011;():763-771. doi:10.1115/IPACK2011-52220.

During service, micro-cracks form inside solder joints, making a microelectronic package prone to failure, particularly during a drop. Hence, understanding the fracture behavior of solder joints under drop conditions, that is, at high strain rates and in mixed mode, is critically important. This study reports (i) the effects of processing conditions (reflow parameters and aging) on the microstructure and fracture behavior of Sn-3.8%Ag-0.7%Cu (SAC387) solder joints attached to Cu substrates, and (ii) the effects of loading conditions (strain rate and loading angle) on the fracture toughness of these joints, especially at high strain rates. A methodology for calculating the critical energy release rate, Gc, was employed to quantify the fracture toughness of the joints. Two parameters were identified as the dominant quantities affecting the fracture behavior of the solder joints: (i) the effective thickness of the interfacial intermetallic compound (IMC) layer, which is proportional to the product of the thickness and the roughness of the IMC layer, and (ii) the yield strength of the solder, which depends on the solder microstructure and the loading rate. The fracture toughness of the solder joints decreased with an increase in the effective IMC layer thickness and the yield strength of the solder. A two-dimensional fracture mechanism map was constructed with the effective IMC layer thickness and the solder yield strength as the two axes and the fracture toughness, as well as the fractions of the different fracture paths, as contour lines. Trends in the fracture toughness of the solder joints and their correlation with the fracture modes are explained using this fracture mechanism map.
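
The abstract does not state which methodology was used to obtain Gc; one textbook route, shown below purely for orientation, is the Irwin-Kies compliance method, G = (P²/2B)·dC/da, where C is the specimen compliance, B its width, and a the crack length. The numbers in the example are invented for scale only.

    # Irwin-Kies compliance-method sketch (textbook relation; the paper's actual
    # methodology is not given in the abstract):
    #   G = P^2 / (2*B) * dC/da
    def critical_energy_release_rate(P_c_N, B_m, dC_da_per_N):
        """G_C in J/m^2 from critical load P_c [N], specimen width B [m], and
        the derivative of compliance with respect to crack length dC/da [1/N]."""
        return P_c_N ** 2 / (2.0 * B_m) * dC_da_per_N

    # Invented numbers, for scale only:
    print(critical_energy_release_rate(P_c_N=50.0, B_m=5e-3, dC_da_per_N=4e-4))  # 100 J/m^2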

Commentary by Dr. Valentin Fuster
2011;():773-780. doi:10.1115/IPACK2011-52221.

In this study, a novel architecture composed of a uniformly distributed high melting phase (HMP, e.g. Cu) in a low melting phase (LMP, e.g. In) matrix, produced via liquid phase sintering (LPS), is proposed for next-generation thermal interface materials (TIMs) and interconnect (IC) materials. The LMP determines the shear compliance of these composites, whereas the HMP determines their thermal and electrical conductivities. The volume fraction of In was optimized to produce a Cu-In solder with mechanical, electrical, and thermal properties suitable for TIM and IC applications. Since Cu and In react to form several Cu-In intermetallic compounds (IMCs), which may deteriorate the long-term performance of these solders, interfacial layers of Au and Al2O3 were applied to the Cu to further improve the performance of the Cu-In solders. The effect of the interfacial layers on the reaction between Cu and In, during sintering at 160°C and during aging at 125°C, was studied, and its impact on the mechanical, thermal, and electrical properties was evaluated. The Au interfacial layer (50∼200 nm) quickly reacted with In to form AuIn2 IMC, which acted as a tenacious diffusion barrier and slowed down the reactions between Cu and In. An 8-monolayer-thick Al2O3 layer did not react with either Cu or In and inhibited reactions between them. During short-time sintering, the effect of the interfacial layer on the IMC thicknesses was too small to affect the yield strength of the as-sintered composites. However, the IMC layer thickened rapidly in the Cu-In composites without an interfacial layer, which drastically reduced the volume fraction of unreacted In and thereby increased the yield strength of the solder. The interfacial layers, on the other hand, effectively suppressed the growth of IMCs during aging, so the yield strength of those composites increased at slower rates. Since the IMCs formed at the interface strongly affect the contact resistance, significant differences in the thermal and electrical conductivities were recorded for solders with different interfacial layers.
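
For orientation on how the Cu (HMP) volume fraction sets the composite conductivity, the sketch below applies the Maxwell-Eucken two-phase estimate with handbook conductivities for Cu and In. It ignores percolation and interfacial (IMC/contact) resistance, both of which matter in real LPS composites, and the volume fractions shown are arbitrary, not the optimized value from the paper.

    # Maxwell-Eucken estimate of effective thermal conductivity (W/m-K) for Cu
    # particles (volume fraction phi_cu) dispersed in an In matrix. Illustrative
    # only: it neglects Cu percolation and IMC/contact resistance.
    K_IN = 82.0    # W/m-K, indium matrix (handbook value)
    K_CU = 400.0   # W/m-K, copper particles (handbook value)

    def maxwell_eucken_k(phi_cu, k_m=K_IN, k_p=K_CU):
        num = k_p + 2.0 * k_m + 2.0 * phi_cu * (k_p - k_m)
        den = k_p + 2.0 * k_m - phi_cu * (k_p - k_m)
        return k_m * num / den

    for phi in (0.3, 0.5, 0.7):  # arbitrary volume fractions
        print(phi, round(maxwell_eucken_k(phi), 1))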

Commentary by Dr. Valentin Fuster
