ASME Conference Presenter Attendance Policy and Archival Proceedings

2016;():V014T00A001. doi:10.1115/IMECE2016-NS14.

This online compilation of papers from the ASME 2016 International Mechanical Engineering Congress and Exposition (IMECE2016) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference by an author of the paper, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.

Commentary by Dr. Valentin Fuster

Emerging Technologies: Emerging Applications of 3D Printing

2016;():V014T07A001. doi:10.1115/IMECE2016-66256.

Oak Ridge National Laboratory’s Additive Manufacturing Integrated Energy (AMIE) demonstration project leverages rapid innovation through additive manufacturing to connect a natural-gas-powered hybrid electric vehicle to a high-performance building designed to produce, consume, and store renewable energy. The AMIE demonstration project consists of a building and vehicle that were additively manufactured (3D-printed) using the laboratory’s big area additive manufacturing (BAAM) capabilities and an integrated energy system with smart controls that connects the two via wireless power transfer. The printed utility vehicle features a hybrid electric powertrain with onboard power generation from a natural gas fueled auxiliary power unit (APU). The APU extends vehicle range through a series hybrid powertrain configuration that recharges the vehicle’s lithium-ion energy storage system and acts as a mobile power generation system for the printed building. The development of the powertrain used for the printed range-extended electric vehicle was completed using a powertrain-in-the-loop development process and the vehicle prototype implementation was accelerated using BAAM. A flexible 3.2 kW solar photovoltaic system paired with electric vehicle batteries will provide renewable power generation and storage. Energy flows back and forth between the car and house using fast, efficient bidirectional wireless power transfer. The AMIE project marked the first demonstration of bidirectional level 2 charging through wireless power transfer. The accelerated creation and printing of the car and house will further demonstrate the program’s function as an applied science tool to get products to market more quickly than what currently is possible with traditional manufacturing. This paper presents a case study that summarizes the efforts and technical details for using the printed research platforms. 
This paper focuses on the printing of the vehicle, powertrain integration, and possibilities for vehicles providing power to buildings in different scenarios. The ability of BAAM to accelerate prototype development for the integrated energy system is explored. Details of how this was accomplished in 9 months with more than 20 industry partners are discussed.

2016;():V014T07A002. doi:10.1115/IMECE2016-66320.

Traditional thermal management techniques such as air-cooled plate- and pin-fin heat sinks are today being pushed to their limits by the increasing power densities of computing hardware (power supplies, controllers, processors, and integrated circuits). In comparison, direct immersion cooling within an alternative cooling medium such as commercial dielectric fluids offers the ability to handle high power densities while also accommodating tighter printed circuit board spacing. Together, these attributes are critical to facilitating higher computing densities. However, this type of high density setup also requires that any heat sink present be low profile so as to not obstruct adjacent printed circuit boards. Such a stringent limit on heat sink height can make achieving cooling targets challenging with existing designs.

In this work, the performance of several low profile (height less than 6 mm) heat sinks of varying design is evaluated within a carefully controlled direct immersion cooling environment. Commercial copper heat sinks fabricated through conventional manufacturing (CM) approaches serve as baselines for these performance tests. These same heat sink designs are also replicated via additive manufacturing (AM) utilizing a conductive, carbon-filled printable polylactic acid (PLA) composite material. The performance of these AM heat sinks is then compared to that of the CM heat sinks, with special emphasis on differences in thermal conductivity between the constituent materials. Finally, novel bio-inspired heat sink designs are developed which would be difficult or impossible to achieve using CM approaches. The most promising of these designs were then created using AM and their performance evaluated for comparison. The overall goal of this work is to ascertain whether the design and fabrication flexibility offered by AM can facilitate low profile heat sink designs that meet or exceed the performance of conventional heat sinks even with perceived deficiencies in the material properties of AM parts.

Experiments were carried out within Novec 7100 dielectric fluid for single-phase natural convection scenarios as well as two-phase subcooled boiling conditions at atmospheric pressure. A custom test rig was constructed consisting of mirror-polished stainless steel plates and polycarbonate viewing ports to allow visual access. A rotating sample stage allows for data to be obtained at varying heat sink orientation angles from 0° to 90°. For two-phase experiments, multi-angle video capture allows for analysis of the two-phase dynamics occurring at the heat sink samples to be visualized and temporally linked to the associated temperature and heat flux data.

2016;():V014T07A003. doi:10.1115/IMECE2016-66652.

Professional musicians today often invest in obtaining antique or vintage instruments. These pieces can be used as collector items or, more practically, as performance instruments that give the unique sound of a past music era. Unfortunately, these relics are rare, fragile, and particularly expensive for a modern-day musician to obtain. The opportunity to reproduce the sound of an antique instrument through additive manufacturing (3D printing) can make this desired product significantly more affordable. 3D printing allows for duplication of unique parts in a low-cost and environmentally friendly manner, due to its minimal material waste. Additionally, it allows complex geometries to be created without the limitations of other manufacturing techniques. This study focuses on the primary differences, particularly sound quality and comfort, between saxophone mouthpieces that have been 3D printed and those produced by more traditional methods. Saxophone mouthpieces are commonly derived from a milled blank of either hard rubber (ebonite) or brass. Although 3D printers can produce a design with the same or similar materials, they typically build it in a layered pattern. This can affect the porosity and surface of a mouthpiece, ultimately affecting player comfort and sound quality. To evaluate this, acoustic tests will be performed involving both traditionally manufactured mouthpieces and 3D prints of the same geometry created from X-ray scans obtained using a ZEISS Xradia Versa 510. The scans are two-dimensional images that undergo reconstruction and segmentation, the process of assigning material to voxels. The result is a point cloud model, which can be used for 3D printing.
The focus of this analysis is to determine which qualities of the sound are changed by the manufacturing method and how true the sound of a 3D-printed mouthpiece is to its milled counterpart. Additive manufacturing can yield products that deviate from the original design, depending on the accuracy, repeatability, and resolution of the printer, as well as the layer thickness. For additive manufacturing to become a common practice in mouthpiece production, the printer must be tested for how precisely it reproduces an original model. The quality of a 3D print can also affect the comfort of the player: lower quality prints have an inherent roughness that can cause discomfort and difficulty for the musician. This research will determine the effects of manufacturing method on the sound quality and overall comfort of a mouthpiece. In addition, we will evaluate the validity of additive manufacturing as a method of producing mouthpieces.

2016;():V014T07A004. doi:10.1115/IMECE2016-67641.

3D printing, or additive manufacturing, is a key technology for future manufacturing systems. However, 3D printing systems have unique vulnerabilities presented by the ability to affect the infill without affecting the exterior.

In order to detect malicious infill defects in the 3D printing process, this paper proposes the following: 1) an investigation of malicious defects in the 3D printing process, 2) feature extraction based on simulated 3D printing process images, and 3) an image classification experiment with one group of non-defect infill images and another group of defect infill images for training. The images are captured layer by layer from the top view of the software simulation preview.

The data extracted from the images is input to two machine learning algorithms, a Naive Bayes classifier and J48 decision trees. The results show that the Naive Bayes classifier achieves a classification accuracy of 85.26% and the J48 decision tree achieves 95.51%.
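The paper does not include its classifier code; as a rough illustration of the two-classifier comparison described above, the sketch below trains scikit-learn's GaussianNB and a CART decision tree (a stand-in for Weka's J48, which implements C4.5) on synthetic feature vectors that merely stand in for the extracted image features.

```python
# Sketch of the two-classifier comparison: Gaussian Naive Bayes vs. a
# decision tree. The feature vectors below are synthetic placeholders for
# the image features extracted from the simulated layer previews; the
# accuracies printed here are therefore not the paper's reported values.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 "layer images", 5 features each,
# label 0 = normal infill, label 1 = defective infill.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
X_defect = rng.normal(loc=1.5, scale=1.0, size=(100, 5))
X = np.vstack([X_normal, X_defect])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision tree", DecisionTreeClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {acc:.2%}")
```

In the paper's setting, each feature vector would come from a layer-by-layer top-view image of the simulated print rather than from random draws.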


Emerging Technologies: Emerging Manufacturing Techniques

2016;():V014T07A005. doi:10.1115/IMECE2016-65936.

In this study, a gasoline-powered hexacopter unmanned aerial vehicle (UAV) has been designed as a solution to farmers’ need for a low-cost, easy-to-maintain, long-flight-duration, and multi-purpose means of targeted aerial application of insecticides and herbicides.

Application of herbicides and pesticides by airplane is an example of how farmers have used technology to improve their bottom line and overall quality of life. Fields can now be sprayed in under an hour instead of consuming an entire day. However, if a producer has noxious weeds in only a small area, fixed-wing aerial application cannot be used, as it is only accurate enough to treat an entire field. Currently there is no solution for small-scale, accurate aerial herbicide application to meet this need. The commercially available Yamaha RMAX UAV is expensive both to purchase and to maintain; though it may be useful for large-scale aerial spraying on farmland, it is not efficient for targeted application in specific areas.

The gasoline powered hexacopter UAV designed in this study is a low cost solution to farmers’ need for specific aerial applications of insecticides and herbicides. The UAV design can carry 2–3 gallons of herbicide (16.7–25.0 lbs.) for a flight time of more than 30 minutes without refueling. The design could be transported in a 60.3in × 56.7in pickup bed.

Structural and fatigue analyses are performed on the complete structure using SolidWorks Simulation. The minimum factor of safety is found to be 10 based on the maximum von Mises stress failure criterion. Under normal conditions, with an estimated commercial use of 100 cycles per day, the design is predicted to survive for about 13 years without fatigue failure. A drop test analysis is performed to ensure the design can survive a 5-foot freefall, and a frequency analysis is performed to determine the critical natural frequencies of the structure. Flow simulations are performed on the six-propeller model using SolidWorks Flow Simulation to observe the effect of vorticity interactions on the lift force. The design has been reasonably optimized to maximize lift force.

With this new UAV design, both small-scale and larger farmers could afford a personal UAV for aerial applications with a small amount of capital, the lack of which has hindered efficient and effective targeted aerial application for many years.
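The 13-year fatigue-life figure quoted above follows from simple cycle-count arithmetic; the sketch below converts an allowable fatigue cycle count into service years at the stated duty of 100 cycles per day. The allowable-cycle number is a hypothetical placeholder chosen to illustrate the arithmetic, not a value from the paper.

```python
# Back-of-envelope conversion from an allowable fatigue cycle count to
# service years at a commercial duty of 100 cycles per day. The
# 475,000-cycle input is a hypothetical placeholder, not a number
# taken from the paper's fatigue analysis.
def service_life_years(allowable_cycles, cycles_per_day=100.0):
    """Years of service before the allowable fatigue cycle count is reached."""
    return allowable_cycles / (cycles_per_day * 365.0)

# Roughly 475,000 allowable cycles corresponds to about 13 years
# at 100 cycles per day.
print(round(service_life_years(475_000), 1))
```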

2016;():V014T07A006. doi:10.1115/IMECE2016-66285.

Lightweight materials, manufacturing technology, and car body structure optimization are the three main approaches to achieving lightweight construction. Lightweight materials, such as aluminum and magnesium alloys, are widely used in the automotive industry for weight reduction, and mechanical clinching is used to join them. In this study, sheets of aluminum alloy 5052 were joined by extensible-die clinching and flat clinching, and the tensile and shear strengths of the joints produced by the two tools were investigated. Compared with extensible-die clinching, flat clinching produced joints with higher tensile and shear strength; the tensile strength of the flat-clinched joint was up to 54% higher than that of the extensible-die clinched joint.

2016;():V014T07A007. doi:10.1115/IMECE2016-66905.

The primary goal of this research was to evaluate the effectiveness of a low-cost reverse engineering system to recreate a physical, three-dimensional model of a human hand. In order to achieve the goal of this research, three key objectives were fulfilled: (1) the first objective was to recreate the physical model of the human hand using a low-cost experimental setup (<$5000), (2) the second objective was to assess the ability of the reverse engineered hand to perform common tasks of everyday life, and (3) the third objective was to investigate the potential biomedical applications of the reverse engineered human hand. A chosen test subject had his or her hand molded and cast into a plaster three-dimensional model that could be held steady and scanned very precisely by a NextEngine Desktop 3D Scanner. Other methods could have been employed to achieve the scanned model, but given the experimental setup and timeline a casted model was assumed to be the most appropriate method to achieve the best results. The plaster casting of the subject’s hand was scanned several times using different orientations of the model relative to the stationary 3D scanner. From these scans, a computer CAD model of the human hand was generated, modified, and 3D printed using a Makerbot Replicator 2. The printed model was evaluated by its ability to perform common every-day tasks such as picking up a cup/bottle, holding a pen/pencil, or opening/closing around an object. Several iterations of the printed human hand were evaluated in order to determine the best design for the fingers’ joints and cable-driven motion system. The first iteration of the printed hand featured a snap-in joint system. This joint design suffered from requiring a large number of individual pieces and poor tolerances of the Makerbot printer. The second iteration featured a press fit style joint system. 
This system was hindered by tolerances similar to those of the first iteration, as well as by plastic deformation of the printed material due to inadequate elasticity. The third and final iteration of the joint system featured a single printed assembly, allowing the entire prosthetic to be printed at one time. It was expected that the hand would translate the rotational movement of an individual’s wrist into tension on the cables of the motion system, thereby closing the fingers into a fist. This movement allows the user to close the prosthetic hand around everyday objects and pick them up with relative ease. Although the possibilities of reverse engineering and 3D printing systems have greatly expanded as a result of greater affordability and increased accuracy, their applications in the biomedical field have yet to be fully explored.

2016;():V014T07A008. doi:10.1115/IMECE2016-67881.

The objective of this study is to find a structural alternative to the jellyroll in order to safely conduct experimental crash testing of lithium-ion battery packs in an academic laboratory environment. A procedure for lateral impact experiments has been developed and conducted on cylindrical cells and phantom cells using a flat rigid drop cart in a custom-built impact test apparatus. The main component of a cylindrical cell, the jellyroll, is a layered spiral structure consisting of thin layers of electrodes and separator material. We investigate various phantom materials as candidates to replace the layered jellyroll with a homogeneous anisotropic material. During experimentation with the phantom cells, material properties and internal geometries of the additively manufactured components, such as infill pattern, density, and voids, were adjusted in order to develop an accurate deformation response. The deformation of the phantom cell was characterized after impact testing and compared with that of actual lithium-ion cells. The experimental results were also compared with explicit simulations (LS-DYNA). This work shows progress toward an accurate and safe experimental procedure for structural impact testing on an entire battery pack consisting of thousands of volatile cells. Understanding battery and battery pack structural response can influence design and improve the safety of electric vehicles.

2016;():V014T07A009. doi:10.1115/IMECE2016-67905.

The goal of this work is to enhance understanding of critical design aspects that would prevent automotive lithium-ion battery packs from catastrophic failure. Modeling lithium-ion batteries is a complex multiscale, multi-physics problem. The most dangerous energy-producing component of a lithium-ion cylindrical cell, the jellyroll, is a layered spiral structure consisting of thin layers of electrodes and separator only microns thick. In this study, we investigate the feasibility of using the commercial explicit finite element code LS-DYNA to understand, through computer simulation, the structural integrity of lithium-ion batteries subjected to crushing. The jellyroll was treated as a homogeneous material with an effective stress-strain curve obtained through characterization experiments on representative jellyroll samples and individual electrode layers. Physical and numerical impact tests have been conducted on cylindrical cells using the developed drop test system. Results of material homogenization, experimental drop testing, and initial structural simulations are discussed. The investigation of structural cell deformations coupled with thermal heat generation and distribution after the crash brings us one step closer to accurate modeling of an entire battery pack consisting of hundreds of cells.


Emerging Technologies: Emerging Technologies in Composite Material Processing Techniques and Applications

2016;():V014T07A010. doi:10.1115/IMECE2016-66722.

Four kinds of commercial glass fiber (GF) reinforced thermoplastics were used in this study: GF reinforced polypropylene (GF/PP), GF reinforced polyamide 6 (GF/PA6), GF reinforced polycarbonate (GF/PC), and GF reinforced polyoxymethylene (GF/POM). The GF contents were 10%, 20%, and 40%. The GF/PP, GF/PA6, GF/PC, and GF/POM composites were fabricated into dumbbell specimens by injection molding. The effect of glass fiber content on tensile properties, morphology, and dynamic mechanical properties was investigated. Fiber volume fraction, fiber length distribution, and fiber orientation were observed by burning off the polymer resin and dispersing the remaining fibers on a glass slide. The Kelly-Tyson model was used to predict strength and to calculate the critical fiber length of each GF reinforced polymer composite. Tensile strength of all composites increased with increasing glass fiber content. The fiber orientation factor and the fiber length distribution increased at higher glass fiber contents. The interfacial shear strength values of the GF/PP, GF/PC, and GF/POM composites increased with increasing fiber content, while a decline in interfacial shear strength was found in the GF/PA6 composites. It is interesting to report that the interfacial shear strength of the GFRP composites, calculated according to the modified rule of mixtures (MROM) and the Kelly-Tyson model, increased with increasing glass fiber content.
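For readers unfamiliar with the Kelly-Tyson model cited above, the sketch below computes the critical fiber length and a composite strength estimate from a fiber length distribution. All numerical inputs are illustrative placeholders, not data from this study.

```python
# Kelly-Tyson strength prediction for a short-fiber composite.
# Sub-critical fibers (l < lc) transfer load through interfacial shear and
# contribute tau*l/d; super-critical fibers (l >= lc) contribute
# sigma_f*(1 - lc/(2l)). Input values below are illustrative placeholders.
def critical_length(sigma_f, d, tau):
    """Critical fiber length lc = sigma_f * d / (2 * tau)."""
    return sigma_f * d / (2.0 * tau)

def kelly_tyson_strength(lengths_vf, sigma_f, d, tau, sigma_m, vf_total, eta0=1.0):
    """Composite strength from a fiber length distribution.

    lengths_vf : list of (fiber length, volume fraction) pairs
    eta0       : fiber orientation factor (1.0 = fully aligned)
    """
    lc = critical_length(sigma_f, d, tau)
    fiber_term = 0.0
    for l, v in lengths_vf:
        if l < lc:
            fiber_term += v * tau * l / d                       # sub-critical
        else:
            fiber_term += v * sigma_f * (1.0 - lc / (2.0 * l))  # super-critical
    return eta0 * fiber_term + sigma_m * (1.0 - vf_total)

# Illustrative values roughly in the range of E-glass in PP (MPa, mm):
sigma_f, d, tau, sigma_m = 2000.0, 0.014, 10.0, 30.0
lengths_vf = [(0.3, 0.05), (0.8, 0.05), (2.0, 0.10)]  # total Vf = 0.20
lc = critical_length(sigma_f, d, tau)
sigma_c = kelly_tyson_strength(lengths_vf, sigma_f, d, tau, sigma_m, 0.20, eta0=0.6)
print(f"lc = {lc:.2f} mm, predicted strength = {sigma_c:.0f} MPa")
```

In the study above this calculation runs in the other direction as well: with measured composite strength and fiber length distribution, the interfacial shear strength tau can be back-calculated.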


Emerging Technologies: Energy

2016;():V014T07A011. doi:10.1115/IMECE2016-65216.

Circulating fluidized bed (CFB) boiler technology is becoming dominant in coal-fired power generation across the globe. It is now the biggest competitor to pulverized coal (PC) technology because it is fuel flexible and environmentally friendly, consumes less water, and offers cost benefits. Major challenges, such as efficiency improvement and adaptability to advanced steam conditions (supercritical, ultra-supercritical, and advanced ultra-supercritical), must be met to serve future needs. The world power generation industry faces the tough challenge of abating carbon emissions and greenhouse gases while providing cheap, reliable, high-quality electricity. With CFB being a popular choice to meet this challenge, efficiency improvement in CFB has become mandatory. This paper aims to spotlight approaches to optimizing CFB boiler efficiency and the various fuels and fuel mixes available, drawing on an extensive study of the literature. The paper also addresses boiler efficiency as defined in ASME PTC 4, and it concludes with recommendations for efficiency improvement in CFB boilers intended to be useful for researchers, clients, developers, and investors.

2016;():V014T07A012. doi:10.1115/IMECE2016-66290.

The global trend to power industrial operations using local fuel sources continues, rather than relying so heavily on electrical power from large, centralized generating plants. Using a local fuel source has come to be termed “distributed generation”, implying that electrical power is generated locally and then used to power the relevant on-site industrial processes. This paper presents an alternative to distributed generation, where prime movers, powered by a local fuel source, are coupled directly to the equipment of the industrial processes using Infinitely Variable Transmissions (IVTs). The argument is advanced that direct drive, where the load is managed by an IVT, has the potential to be more efficient than many current distributed generation systems. The characteristics of an IVT designed by VeriTran Inc. are given, and the effort to demonstrate the advantages of IVT controlled direct drive in the petroleum production industry is discussed.

2016;():V014T07A013. doi:10.1115/IMECE2016-66547.

One of the biggest challenges for engines used in the marine industry is burning fuels of varied compositions, since vessels often move from regions with highly regulated fuels to regions with no regulations, unlike their on-road and other stationary counterparts. This poses an enormous risk to the performance, reliability, durability, and service life of engines that employ exhaust gas recirculation (EGR) as a prime technology to meet the stringent emission regulations laid out by regulating bodies across the globe, such as the United States (U.S.) Environmental Protection Agency (EPA) and the International Maritime Organization (IMO).

Operating on fuels with higher sulfur content poses a risk of reduced engine component life due to the formation of concentrated sulfuric acid (H2SO4), which, if not handled carefully, leads to higher rates of corrosion on engine parts. Hence, the ability to predict the potential for H2SO4 formation, as well as the quantity to be handled, is essential.

This research paper focuses on the development of an empirical model to predict the amount of H2SO4 condensate that can form in the air handling system of medium-speed diesel engines. The model utilizes a combination of fundamental physics, chemistry, thermodynamics, and chemical kinetics. The H2SO4 prediction calculation employs basic measurable parameters from a running engine, such as engine speed, load, EGR flow rate, fuel flow rate, and fuel sulfur concentration, to compute a molar balance of hydrocarbon fuel and combustion air quantities across the entire range of engine operation, yielding the amount of H2SO4 condensate formed. This is done primarily at the EGR cooler, where the recycled exhaust gas is first cooled, and at the EGR mixer, where it is cooled further after coming into contact with the charge air; these are identified as the critical locations.
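The core of such a sulfur molar balance can be illustrated with basic stoichiometry: fuel sulfur burns to SO2, a small fraction oxidizes further to SO3, and SO3 hydrates to H2SO4. The sketch below is a simplified upper-bound estimate, not the paper's model; the SO2-to-SO3 conversion fraction is an assumed illustrative value.

```python
# Simplified sulfur balance: fuel sulfur -> SO2 -> (fraction) SO3 -> H2SO4.
# This is an illustrative upper-bound sketch, not the empirical model
# developed in the paper; the 3% SO3 conversion fraction is an assumption.
M_S = 32.06       # g/mol, sulfur
M_H2SO4 = 98.08   # g/mol, sulfuric acid

def max_h2so4_rate(fuel_rate_kg_h, fuel_sulfur_ppm, so3_conversion=0.03):
    """Upper-bound H2SO4 formation rate (g/h) from a fuel sulfur balance.

    Assumes all fuel sulfur burns to SO2, a fixed fraction of that SO2
    oxidizes to SO3, and SO3 fully hydrates (1 mol SO3 -> 1 mol H2SO4).
    """
    sulfur_g_h = fuel_rate_kg_h * 1000.0 * fuel_sulfur_ppm * 1e-6
    mol_s = sulfur_g_h / M_S            # mol sulfur per hour
    mol_h2so4 = mol_s * so3_conversion  # mol H2SO4 per hour
    return mol_h2so4 * M_H2SO4

# Example: 200 kg/h of 1000 ppm sulfur fuel at an assumed 3% SO3 conversion.
print(f"{max_h2so4_rate(200.0, 1000.0):.1f} g/h")
```

The paper's model additionally accounts for engine operating parameters and for where along the EGR path the acid actually condenses, which this stoichiometric bound ignores.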

2016;():V014T07A014. doi:10.1115/IMECE2016-67104.

Numerous studies in the field of power generation deal with efficiency and flexibility enhancement of power plants. Supercritical carbon dioxide (sCO2) power cycles promise significantly higher efficiencies and very compact construction compared to conventional Rankine cycles. One opportunity to increase the flexibility of such power cycles is the integration of Thermal Energy Storage (TES) systems into the process. In this work the sandTES technology, a particle-based TES system, is introduced, which can be used to improve the load change characteristics of power plants even at the highest temperatures. After introducing the main concept and the key technologies of sandTES, a utility-scale heat exchanger for implementation in a high-temperature sCO2 power cycle is presented and discussed. Finally, crucial design parameters of the presented heat exchanger (HEX) are outlined and their influence on the HEX dimensions is discussed.

2016;():V014T07A015. doi:10.1115/IMECE2016-67974.

In an effort to create a Light Emitting Diode (LED) lighting system that is as efficient as possible, the heat dissipation system must be accurately measured for proper design and operation. Because LED lighting technology is new, little optimization has been performed on the typical cooling systems required for most A19 replacement products. This paper describes the research process for evaluating the thermal performance of over 15 LED lighting products and compares their performance to traditional lighting sources, namely incandescent and compact fluorescent (CFL). This process uses radiation and convection to model the typical cooling mechanisms of domestic A19-type replacement LED products.

The A19 products selected for this investigation had input wattages ranging from 7 to 60 W, with outputs ranging from 450 to 1100 lumens. The average LED product tested dissipated 43% (± 5%) of the total heat generated in the lighting product through its heat exchanger. The best thermal performance was observed in an LED product that dissipated approximately 58% of the total product heat through the heat exchanger. Results indicate that significant improvements to current LED heat exchanger designs are possible, which will help lower the cost of future LED products, improve performance, and reduce their environmental footprint.


Emerging Technologies: Engineering Research in Healthcare

2016;():V014T07A016. doi:10.1115/IMECE2016-65445.

The design process is examined for retrofitting a robotic arm exoskeleton with a three-axis wrist for enhanced teleoperation. Exoskeleton wrist design is particularly challenging due to the need to incorporate three actuated joints into a compact volume, while maintaining a large range of motion. The design process was greatly facilitated by the development of a new visualization method which enabled the designer to examine the interactions between the exoskeleton and its operator in the same virtual workspace. This allowed the designer to evaluate the exoskeleton’s range of motion and ergonomic properties, while also adding task visualization functionality. Future applications of the exoskeleton in telepresence will also be discussed.

2016;():V014T07A017. doi:10.1115/IMECE2016-65922.

Interventional cardiologists and neurosurgeons are exposed to X-ray radiation while performing surgery. The few tele-robotic systems currently on the market require tedious tele-control using an external input device. In this paper, we propose to automate such interventional surgeries with an image-guided robotic system. After image processing of X-ray fluoroscope images and application of new machine learning techniques based on a Markov decision process, our new robotic system is capable of reaching the aortic arch, which is the first step in cardiac and neuro-interventional procedures. Further, we explain the algorithm and demonstrate the proposed system implementation on an endovascular simulator.

Topics: Robotics, Surgery
2016;():V014T07A018. doi:10.1115/IMECE2016-67066.

Most current problems can be solved by referring to the solutions of previous problems. Case-based reasoning (CBR) is a method that solves a problem by retrieving similar problems from the past and adapting their solutions to the new problem. Recent studies that apply CBR include time as a parameter in order to retrieve the most effective solutions that vary with time. This approach is particularly helpful in healthcare, where one needs to look at historical evidence to find an accurate diagnosis or treatment regime. Hence, in this study, time-based CBR is applied to track the outcomes of drug therapy on hypertensive patients and find the most effective drug as a prescription. Initially, the episodes in each patient’s medical record are chronologically ordered, with the oldest episode placed first in the episode sequence and the latest placed last. It is assumed that the first episode of each patient is the first instance of diagnosis, so when a new patient comes for a checkup, his or her state (health condition) is compared with the initial states of past patients. The retrieval process therefore calculates the similarity between the new patient’s current state and the states of the most similar patients at their first episodes. Due to the diversity of therapies among matching patients, the best treatment cannot be determined without knowing the efficacy of the different treatments. Therefore, the subsequent episodes of the matching patients are examined to find the best treatment for the new patient. This might even require combining treatments from all matching patients.

After the treatment for the first visit is determined, the record of the new patient is stored in the library for future case retrieval. This method is a novel approach to personalized treatment of patients with chronic disease, achieved by tracking the medical records of past patients over a long period of time.

The current approach to treating hypertensive patients uses evidence-based guidelines for managing the disease. However, this approach is general and does not take into account all patient characteristics, such as lab results and physical examination parameters, and the similarity between patients cannot be leveraged; any change of treatment regime is based only on a risk parameter. In the proposed CBR-based method, by contrast, several parameters are checked for the efficacy of the medication, and the treatment is personalized based on what worked well for similar patients.

In this paper, the clinical records of hypertensive patients were provided by a Boston-based hospital. The preliminary results confirm that the proposed approach gives good recommendations for hypertension treatment.
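The retrieval step described above can be sketched as a similarity search against the first episode of each stored patient record. This is an illustrative sketch only, not the authors' implementation; the feature names, weights, and case library below are hypothetical.

```python
# Minimal sketch of time-based CBR retrieval (assumed similarity measure):
# a new patient's state is compared with the FIRST episode of each past
# patient, and the most similar records are returned for treatment mining.

def similarity(new_state, first_episode, weights):
    """Weighted inverse-distance similarity between two patient states."""
    dist = sum(w * abs(new_state[k] - first_episode[k])
               for k, w in weights.items())
    return 1.0 / (1.0 + dist)

def retrieve(new_state, case_library, weights, top_k=2):
    """Rank past patients by similarity of their first (oldest) episode."""
    scored = [(similarity(new_state, episodes[0], weights), pid)
              for pid, episodes in case_library.items()]
    return [pid for _, pid in sorted(scored, reverse=True)[:top_k]]

# Hypothetical case library: each patient is a chronological episode list.
library = {
    "p1": [{"sbp": 160, "age": 60}, {"sbp": 140, "age": 60}],
    "p2": [{"sbp": 150, "age": 45}],
    "p3": [{"sbp": 120, "age": 30}],
}
matches = retrieve({"sbp": 158, "age": 58}, library,
                   weights={"sbp": 1.0, "age": 0.5})
```

The subsequent episodes of the returned matches (`library["p1"][1:]`, etc.) would then be examined to judge which treatments were effective, as the abstract describes.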

Commentary by Dr. Valentin Fuster
2016;():V014T07A019. doi:10.1115/IMECE2016-67298.

Recently reported fishing-line muscles are soft actuators that can be fabricated by twist insertion in commercially available Nylon 6 monofilament fibers under a certain amount of tension. Annealing and training are needed to retain the inserted twist and complete the fabrication process. These actuators are soft polymeric materials with high stresses, large strain, and a relatively high power-to-weight ratio compared to conventional actuators, apart from being cost effective. Though the performance of the muscles is largely dependent on the fabrication parameters, these actuators deform linearly in response to a thermal gradient. Actuation can be triggered by varying temperature by any means, such as blowing hot fluid or resistive (Joule) heating. The response of the muscles depends on the rate of change of temperature, the magnitude of temperature, and the applied load. We recognized the potential application of the muscle as a mechanical thermostat, either as a new design or for use in opening and closing control valves. The working range for this muscle is mostly 50–150°C, which is the working range of a wide variety of devices and instruments. This study presents a novel design, the fabrication, the working principle, and preliminary experiments of the thermostat device, which is light in weight, simple to manufacture, and cost effective.

Commentary by Dr. Valentin Fuster

Emerging Technologies: Innovative Products

2016;():V014T07A020. doi:10.1115/IMECE2016-65253.

A fixed low-ratio Epicyclic drive is developed that resembles a Planetary drive but has crank-shaft pinions and an additional carrier replacing its ring gear. It provides half the reduction ratio of a Planetary drive with similar pinions, making reduction ratios at or near 2:1 practical. It shares many properties with a Planetary drive such as torque splitting and co-axial drive shafts that spin in a common direction but with no ring gear, reverse bending, or assembly criteria. It has many optional configurations and modes, does not slip or jam, is easily back-driven, has low pitch and bearing velocities, and favorable churning properties. It is a viable option for both reduction and overdrive applications that promises low noise / vibration / harshness (NVH) levels.
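The stated ratio relationship can be illustrated with the standard planetary formula. This is a sketch under assumptions, not the paper's derivation: a conventional planetary drive with fixed ring, sun input, and carrier output has reduction ratio 1 + Nr/Ns, and the abstract states the new drive provides half the ratio of a planetary drive with similar pinions.

```python
# Illustrative arithmetic (tooth counts are hypothetical): the abstract's
# claim that the new epicyclic drive gives HALF the comparable planetary
# ratio is what makes ratios at or near 2:1 practical.

def planetary_ratio(n_sun, n_ring):
    """Reduction ratio of a planetary drive: fixed ring, sun in, carrier out."""
    return 1.0 + n_ring / n_sun

def new_drive_ratio(n_sun, n_ring):
    """Half the comparable planetary ratio, per the abstract."""
    return planetary_ratio(n_sun, n_ring) / 2.0

ratio = new_drive_ratio(n_sun=30, n_ring=90)  # comparable planetary: 4:1
```

With a sizing that would give a planetary drive 4:1, the new drive lands at the 2:1 regime the abstract highlights.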

Commentary by Dr. Valentin Fuster
2016;():V014T07A021. doi:10.1115/IMECE2016-65284.

In an associated work [9], a low-ratio Epicyclic drive is developed that resembles a Planetary drive but has crank-shaft pinions, an additional carrier replacing its ring gear, and half the reduction ratio of a similarly sized Planetary drive. Adding couplings that reverse the sun and planet engagement further reduces the ratio below unity. Interchanging the roles of the input and output shafts results in high reduction ratios similar to those of a Cycloid drive. Like a Cycloid drive, it has co-axial drive shafts and a reduction ratio that depends on the difference between gear pitch diameters. Unlike a Cycloid drive, the drive shafts spin in a common direction, torque is split between multiple co-planar transmission paths, and no ring gears are required. It has many optional configurations, none of which slip or jam, and is a viable option for speed reduction applications that require low cost and complexity in a compact package.

Commentary by Dr. Valentin Fuster
2016;():V014T07A022. doi:10.1115/IMECE2016-65955.

There are two basic types of transmissions: manual and automatic. While manual transmissions have greater transmission efficiency and a better overall driving experience, they are more difficult to operate than automatic transmissions, mainly due to the presence of the clutch pedal. Automatic transmissions are a lot easier to operate but are less efficient. The aim of this project is therefore to combine the best of both worlds, i.e., the experience of driving a manual with the ease of an automatic. For this, the intention is to use a pressure sensor to detect when the gears have to be changed and to automatically disengage the clutch, hence removing the need for the clutch pedal.

Commentary by Dr. Valentin Fuster
2016;():V014T07A023. doi:10.1115/IMECE2016-67790.

The majority of bicycle derailleur systems in use today are based on the same technology used over a century ago. This technology has made large strides over that time period, yet limitations remain. The derailleur is expensive, prone to damage, requires regular maintenance, and is a significant portion of a bike's overall weight. Continuously variable transmission (CVT) technology could resolve many, if not all, of those limitations. CVT systems are commonly used in small motorized vehicles, such as go-karts or snowmobiles, and are becoming increasingly popular in the automotive industry. This research explores the application of CVT technology to a bicycle. Prototyping and testing were conducted to understand the functionality, reliability, and feasibility of the design.

Commentary by Dr. Valentin Fuster

Emerging Technologies: Internet of Things

2016;():V014T07A024. doi:10.1115/IMECE2016-67472.

Recently, the Internet of Things (IoT) has emerged as a promising solution for several industrial applications. One of the key components in IoT is passive radio frequency identification (RFID) tags, which do not require a power source to operate. Specifically, ultra-high frequency (UHF) tags are studied in this paper. However, due to factors such as tag-to-tag interference and inaccurate localization, RFID tags that are closely spaced are difficult to detect and program accurately with unique identifiers. This paper investigates several factors that affect the ability to encode a specific tag with unique information in the presence of other tags: reader power level, tag-to-antenna distance, tag-to-tag distance, and tag orientation. ANOVA results show reader power level and tag spacing, along with the interactions power level × tag spacing and tag spacing × tag orientation, to be significant at the levels investigated. Results further suggest a preliminary minimum tag-to-tag spacing that enables the maximum number of tagged items to be uniquely encoded without interference. This finding can significantly speed up the process of field programming in item-level tagging.

Commentary by Dr. Valentin Fuster
2016;():V014T07A025. doi:10.1115/IMECE2016-67962.

A framework is developed to integrate the existing MFiX (Multiphase Flow with Interphase eXchanges) flow solver with state-of-the-art linear equation solver packages in Trilinos. The integrated solver is tested on various flow problems. Its performance is evaluated on fluidized bed problems, and it is observed that the integrated flow solver performs better than the native solver.

Commentary by Dr. Valentin Fuster
2016;():V014T07A026. doi:10.1115/IMECE2016-68069.

Urban and national road networks in many countries are severely congested, resulting in increased travel times, unexpected delays, greater travel costs, worsening air pollution and noise levels, and more traffic accidents. Expanding traffic network capacity by building more roads is both extremely costly and harmful to the environment. By far the best way to accommodate growing travel demand is to make more efficient use of existing networks. Portugal has a good but underused toll highway network that runs near an urban/national road network that is free to use but congested. In choosing not to pay a toll, many Portuguese drivers are apparently accepting greater risk to their safety and longer travel times. As a result, the urban/national road network is used far more intensively than projections anticipated, which raises maintenance costs while increasing levels of risk and inconvenience. The main idea behind the work presented here is to motivate a shift of traffic from the overused network to the underused one. To this end, a model for calculating variable toll fees needs to be developed. To support the model, the status of road networks must be accurately predicted for real-time, short-, and medium-term horizons using machine learning algorithms. Such algorithms will feed the dynamic toll pricing model, reflecting the present and future traffic situations on the network. Since traffic data quantity and quality are crucial to prediction accuracy, the real-time and predictive analytics methods will use a panoply of data sources. The approach presented here is being developed under the scope of H2020 OPTIMUM, a European R&D project on ITS.

Commentary by Dr. Valentin Fuster

Emerging Technologies: Mechatronics and Automation

2016;():V014T07A027. doi:10.1115/IMECE2016-65404.

With the emergence of augmented- and virtual-reality-based information delivery technologies, the gap between the communication devices available to visually impaired people and those available to sighted people is widening. The current study describes a communication tool that provides a reading platform for visually impaired people by means of a haptic display. In this paper, the development and human-subject-study-based evaluation of an electromagnetic microactuator-array-based virtual tactile display is presented. The actuator array is comprised of a 4 by 5 array of micro voice-coil actuators (tactors) providing vibrotactile stimulation on the user's fingertip.

The size and performance of the actuators are evaluated against the thresholds of human tactile perception. It is demonstrated that a 2.65 mm (diameter) × 4 mm (height) generic tactor is suitable for practical applications in dynamic tactile displays. The maximum force of the actuator was 30 mN, generated at current levels of 200 mA. At a stroke of 4.5 mm, the force was reduced to 10 mN. The peak force was generated at a displacement of 1.5 mm.

A total of 10 alphanumeric symbols were displayed to the users by dynamically changing the location of the vibrating point in a predefined sequence, thus creating a tactile perception of a continuous curve. Users were asked to sketch the perceived symbols. Each subject carried out three experiments. The first experiment exposed all subjects to ten different characters. Data obtained from the human subject tests suggest that users perceive most shapes accurately; however, jump discontinuities in the flow of presentation of the curves lower recognition efficiency, most likely due to the loss of a solid reference point. Characters containing two or more discontinuous lines, such as 'X', were more difficult to recognize than those described with a single line, such as 'P' or 'Z'. Analysis of the average character recognition rate from the 10 volunteers showed that any presented character was identified correctly in 7 out of 10 tests. The second test reused characters from the first experiment; users improved their character recognition performance as a consequence of repeated exposure and learning. A final set of experiments concluded that recognition of groups of characters forming words is the least efficient and requires further refinement. Recommendations for improving the recognition rate are also included.

Topics: Haptics
Commentary by Dr. Valentin Fuster
2016;():V014T07A028. doi:10.1115/IMECE2016-66051.

In this paper, a Markov chain Monte Carlo (MCMC) inversion method based on Bayesian inference is used to invert the parameters of a leak source in two-dimensional space. Sensors are divided into three groups with different arrangements: a linear array perpendicular to the wind direction, a linear array parallel to the wind direction, and a cross array. The effects of the different arrangements and sensor quantities on the accuracy and efficiency of the inverted source probability distributions were then analyzed and compared. It is shown that, in one direction, more measurement information from different sensors results in more accurate inversion parameters. With the same quantity of sensors, inversion parameters that consider information from two directions are more accurate than those considering only one direction, meaning that combining information from two directions can improve inversion accuracy. Ventilation enlarges the possible convergence region and increases the instability of inversion results in the wind direction because of its migration and dilution effects. The inversion time grows with the quantity of sensors; too many sensors lead to consumption times that are not conducive to practical application.
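The inversion loop described above can be sketched with a minimal Metropolis-Hastings sampler. This is an assumption-laden illustration, not the paper's method: the forward model below is an isotropic Gaussian plume of known strength (the paper's dispersion model, with wind, is not reproduced), the cross-array sensor positions are hypothetical, and only the 2-D source location is inverted.

```python
import math
import random

random.seed(0)

# Hypothetical cross-array of sensors and a synthetic "true" source.
SENSORS = [(2, 1), (2, 5), (0, 3), (4, 3), (2, 3)]
TRUE_SRC = (2.0, 3.0)
NOISE = 0.1  # assumed measurement noise (std) in the likelihood

def predict(x0, y0, xs, ys):
    """Assumed isotropic forward model: concentration at sensor (xs, ys)."""
    return math.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / 2.0)

observed = [predict(*TRUE_SRC, xs, ys) for xs, ys in SENSORS]

def log_likelihood(x0, y0):
    """Gaussian misfit between observed and predicted concentrations."""
    sse = sum((c - predict(x0, y0, xs, ys)) ** 2
              for c, (xs, ys) in zip(observed, SENSORS))
    return -sse / (2.0 * NOISE ** 2)

# Metropolis-Hastings random walk over the candidate source location.
x, y = 1.0, 1.0  # initial guess inside the monitored domain
ll = log_likelihood(x, y)
samples = []
for _ in range(10000):
    xp, yp = x + random.gauss(0, 0.2), y + random.gauss(0, 0.2)
    llp = log_likelihood(xp, yp)
    if random.random() < math.exp(min(0.0, llp - ll)):
        x, y, ll = xp, yp, llp  # accept the proposal
    samples.append((ll, x, y))

best_ll, best_x, best_y = max(samples)  # highest-likelihood sample
```

The accepted samples approximate the source's posterior probability distribution; comparing runs with different `SENSORS` layouts is the kind of arrangement study the abstract reports.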

Topics: Sensors
Commentary by Dr. Valentin Fuster
2016;():V014T07A029. doi:10.1115/IMECE2016-68127.

For the vehicle to move forward, the engine has to be connected to the driving wheels so as to propel the vehicle. The engine rotates at relatively high speeds, while the wheels are required to turn at slower speeds. The torque requirements of the vehicle vary with the prevailing conditions of load, terrain, etc. The gearbox provides different gear ratios between the engine and the driving wheels to suit varying road conditions, such as climbing hills, traversing rough or sandy roads, or pulling a load. The gear shifts required to meet varying torque requirements can be obtained either manually or automatically. The automatic gear-shifting mechanism is a concept implementing an embedded control system that actuates the gears automatically without human intervention. The automation is achieved by using a microcontroller and suitable sensor and actuator hardware. Whenever the speed of the vehicle increases or decreases beyond a predefined set of values, the microcontroller-based control system actuates the clutch as well as the gear and helps maintain steady operation of the automobile. The concept of automatic gear change is applied in this work to a 4-stroke, manual-transmission motorcycle. The clutch is actuated by means of a DC-motor-driven mechanism and the gear lever by means of a spring-loaded solenoid actuator, both controlled by a microcontroller-based circuit programmed to read the signals from an inductive proximity sensor that senses the actual speed of the wheel. The system design and development are described in this paper, with the control circuit and control logic.
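The threshold-based control logic described above can be sketched in a few lines. The speed thresholds and actuator command names below are hypothetical, not taken from the paper, and a real implementation would run on the microcontroller reading the proximity sensor.

```python
# Sketch of speed-threshold gear selection (assumed thresholds): when the
# measured wheel speed crosses a predefined boundary, the controller
# disengages the clutch, shifts, and re-engages, mimicking a rider.

UPSHIFT_KMH = [0, 15, 30, 45]  # speed at which gears 1..4 are entered

def select_gear(speed_kmh):
    """Return the target gear for the measured wheel speed."""
    gear = 1
    for g, threshold in enumerate(UPSHIFT_KMH, start=1):
        if speed_kmh >= threshold:
            gear = g
    return gear

def control_step(speed_kmh, current_gear):
    """One control-loop iteration: return the actuation sequence, if any."""
    target = select_gear(speed_kmh)
    if target == current_gear:
        return []
    return ["clutch_disengage", f"shift_to_{target}", "clutch_engage"]

actions = control_step(speed_kmh=32.0, current_gear=2)
```

In the paper's hardware the three commands would map to the DC-motor clutch mechanism and the solenoid-actuated gear lever.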

Commentary by Dr. Valentin Fuster
2016;():V014T07A030. doi:10.1115/IMECE2016-68163.

Pipeline leak detection represents an essential aspect of pipeline rehabilitation to avoid unexpected failures. Several detection techniques have been adopted and have received widespread application in pipeline inspection, but they still present a major challenge to field operators. This paper presents an attempt to develop correlations between leaks and their effect on fluid characteristics inside the pipeline, such as fluid velocity, variation of pressure, and sound level due to the presence of leaks along the pipeline. Characterizing these parameters and how they propagate with respect to time from the leak source will allow the development of a solution to detect leaks and quantify the amount of fluid being lost. This paper aims at conducting an experimental investigation to determine the sound level for specified leak sizes. The experimental data were used in COMSOL Multiphysics to simulate various fluid flow scenarios inside a 2 in. (5.08 cm) pipe with different leak sizes.

Commentary by Dr. Valentin Fuster

Materials: Genetics to Structures: Bioinspired Materials and Structures

2016;():V014T11A001. doi:10.1115/IMECE2016-65202.

Biomimicked composites have been shown to be effective in reducing risk factors associated with the resistance of buildings against earthquakes. A significant amount of work is needed to determine what factors are critical in toughening composites used for homes, office buildings, and other human dwellings. The effect of carbon fiber length on the mechanical properties of biomimicked composites was investigated. Composites made of cement, polymer, and carbon fiber were fabricated in layers similar to the layering scheme of nacre and tested for toughness to determine the effects of fiber length. Preliminary results show an increase in the strength of the composites with fiber length. However, the effect on fracture energy is more pronounced, showing a linear relationship with carbon fiber length.

Commentary by Dr. Valentin Fuster
2016;():V014T11A002. doi:10.1115/IMECE2016-66778.

Motorcycle helmets are vital protection in recurrent road accidents, as they prove crucial in reducing brain trauma. This research presents a new and plausible bio-inspired design for the foam liner material and structure in helmets. The proposed liner design is inspired by animal horn microstructure and tubule arrangement. An innovative drop-testing apparatus with a spring-ratchet mechanism is presented for experimental testing. The aim is to validate the new design by meeting the ECE 22.05 standard for motorbike helmets using peak linear acceleration and HIC criteria. Experimental results are partly verified against FEA simulations for two proposed samples. Further samples call for more complex simulations at a later stage to best describe the material properties and structures.

Commentary by Dr. Valentin Fuster

Materials: Genetics to Structures: Fracture and Damage: Nano to Macro Scale

2016;():V014T11A003. doi:10.1115/IMECE2016-66343.

The thermo-mechanical response of carbon fiber reinforced polymer (CFRP) laminates subjected to continuous tensile loading and programmed interrupted tensile loading is examined to understand the changes due to damage progression. Quasi-isotropic laminates were prepared from 500 GSM twill-weave carbon fabric with LY 556 resin and HY 991 hardener by the hand lay-up technique, followed by curing under hot compression. A few specimens were subjected to impact loading at 23 J and 51 J energy levels using a hemispherical tip to induce low-velocity impact damage. Passive thermal imaging of the woven CFRP laminates during tensile testing was captured using a TIM 160 Micro-Epsilon infrared thermal camera. The temperature response during tensile testing correlated well with the deformation mode, especially for specimens impacted with 51 J of energy.

Tensile tests were interrupted at periodic loads, then unloaded and reloaded, to study the thermal response after prior plastic deformation damage in the specimen. Unlike the case of GFRP specimens, distinct changes in the thermo-elastic slope due to prior plastic deformation damage could not be clearly identified. As impact damage resulted in delamination of some layers, an active thermography technique was used to study the cooling rate of the specimen over time, both when the damage is closer to the camera face and when it is away from the camera face. The cooling curves obtained were found to depend on the location of the damage as well as on the heated face of the specimen.

Commentary by Dr. Valentin Fuster
2016;():V014T11A004. doi:10.1115/IMECE2016-66539.

The bending fatigue test of a vehicle wheel is the main test for verifying the mechanical performance of the spokes. In the bending fatigue test, the wheel is fastened to the test platform with bolts. A cyclic bending moment is applied to the wheel, and after some number of cycles, fatigue failure occurs. In this paper, the bending fatigue test is carried out on a steel wheel and a wheel made of long glass fiber reinforced thermoplastic (LGFT), and an infrared imager is used to monitor the temperature distribution and variation of the wheels under bending loads during the test. After the test, cracks are found at the highest-temperature spots. In addition, because some cracks in the LGFT wheel are too tiny to be found otherwise, it is convenient to locate those cracks from the high-temperature areas in the infrared images. All of the above indicate that it is practicable to predict the fatigue failure area by monitoring the temperature distribution and its variation during the wheel bending fatigue test. A method for real-time prediction of the fatigue failure area in wheel bending fatigue tests is described in this paper, which is also helpful for real-time prediction of fatigue failure areas in fatigue tests of other products.

Commentary by Dr. Valentin Fuster
2016;():V014T11A005. doi:10.1115/IMECE2016-67452.

This paper presents a numerical study of different geometries of cruciform specimens for biaxial tensile tests. These specimens are intended for use in fixtures for biaxial tests mounted in universal testing machines. For the study, an isotropic material model for steel sheet metal specimens was considered; thus, only the mechanical properties of the sheet metal in the rolling direction were used in the simulations. In this numerical analysis, the normal stress distribution and the consequent shear stress were studied. Additionally, the effect of including multiple slots, as well as a thickness reduction, on the normal and shear stresses was assessed, since a specimen that develops a uniform normal stress distribution with zero shear stress is necessary. The results of the analysis show that a specimen with both features, multiple slots and a thickness reduction in the central area, performs better in the simulations than one dismissing either characteristic. Finally, a specimen model suitable for the mentioned test is proposed according to the numerical results obtained and the feasibility of manufacturing the experimental test sample.

Commentary by Dr. Valentin Fuster

Materials: Genetics to Structures: Materials Processing and Characterization

2016;():V014T11A006. doi:10.1115/IMECE2016-65056.

Lubricant film-forming viscosity index improvers blended with commercial engine oil have been developed and studied using optical interferometry. The influence of the viscosity index improvers (PTFE and MoS2) mixed with oil was experimentally studied and compared with engine oil without the index improvers as the baseline. The effect of the viscosity index improvers on lubricant film thickness, contact pressure, and rolling speed was investigated for a steel ball loaded on a flat glass surface in point contact. An optical interferometry setup utilizing a monochromatic two-beam interferometry light source, a microscope, and a high-speed video recording device was used for the investigation. Hamrock and Dowson calculations for EHL film thickness were also used for comparative analysis. The lubricants used were commercial SAE #30 engine oil, and PTFE and MoS2 mixed with commercial SAE #30 engine oil. The oil viscosities ranged from 0.0109 Pa·s to 0.255 Pa·s. The rolling speed and the load were varied between 0.189 m/s and 0.641 m/s and between 1 N and 2.6 N, respectively. The lubricant film thickness stability at the point of contact between the steel ball and the glass disc was investigated for both steady and rolling conditions. The viscosity index improvers were found to have a significant effect on film thickness behavior under pure rolling point contact conditions.
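The comparative Hamrock-Dowson calculation mentioned above can be sketched with the classic central film thickness formula for point contact. The formula is standard EHL theory; the input values below are hypothetical, chosen within the viscosity, speed, and load ranges quoted in the abstract, and the reduced modulus and radius are assumed rather than taken from the paper.

```python
import math

# Hamrock-Dowson central film thickness for an elliptical point contact:
#   H_c = 2.69 U^0.67 G^0.53 W^-0.067 (1 - 0.61 exp(-0.73 k))
# with dimensionless speed U, materials G, and load W parameters.

def central_film_thickness(eta0, alpha, u, w, E_prime, Rx, k=1.0):
    """Central EHL film thickness h_c in meters.

    eta0: viscosity (Pa*s); alpha: pressure-viscosity coefficient (1/Pa)
    u: entrainment speed (m/s); w: load (N)
    E_prime: reduced elastic modulus (Pa); Rx: reduced radius (m)
    k: ellipticity parameter (1 for a ball-on-flat circular contact)
    """
    U = eta0 * u / (E_prime * Rx)   # dimensionless speed parameter
    G = alpha * E_prime             # dimensionless materials parameter
    W = w / (E_prime * Rx ** 2)     # dimensionless load parameter
    Hc = (2.69 * U ** 0.67 * G ** 0.53 * W ** -0.067
          * (1.0 - 0.61 * math.exp(-0.73 * k)))
    return Hc * Rx

# Hypothetical ball-on-flat inputs within the abstract's quoted ranges.
h = central_film_thickness(eta0=0.0109, alpha=2e-8, u=0.189, w=1.0,
                           E_prime=1.17e11, Rx=0.0127)
```

The resulting film thickness lands in the tens of nanometers, the regime where the optical interference fringes used in the study are measurable.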

Commentary by Dr. Valentin Fuster
2016;():V014T11A007. doi:10.1115/IMECE2016-65206.

In recent years, the use of electrospun nanofibers and nanoparticles to improve interlaminar properties has increased significantly. In most cases the additional interlaminar phase of nanofibers is required to go through various thermal and/or chemical processes. There has been emphasis on optimizing the interlaminar nanofiber layers to achieve the optimum desired mechanical properties, such as interlaminar strength. One common practice is to disperse nanofibers into the resin and then use the nanofiber-enhanced resin to fabricate the laminated composites. However, proper dispersion and fiber filtering are some of the problems that exist in fabrication using the nanofiber-mixed-resin approach. To alleviate these problems, an innovative approach of growing PAN (polyacrylonitrile) nanofibers directly on carbon fabric by electrospinning appears to solve the dispersion and fiber filtering issues. However, as PAN fibers require stabilization and carbonization, carbon fabric with PAN fiber deposition will have to undergo the stabilization and carbonization processes. The effect of this heat treatment on the mechanical properties of carbon fiber fabric is not yet fully understood. This paper presents the effects of heat treatment on carbon fabric used for fabricating laminated carbon fiber reinforced composites with epoxy resin. The heat treatment was performed at 280°C in air for six hours and at 1200°C for one hour in nitrogen, conditions similar to the stabilization and carbonization of pure PAN fibers. The effects of the heat treatment were mainly characterized in terms of mechanical properties through tensile and shear tests. Fiber surface topography was observed by SEM to analyze physical changes, and chemical changes in the groups existing on the carbon fibers were examined through FTIR.
The results obtained are compared with a set of control laminated composite specimens, which were fabricated using the heat vacuum assisted resin transfer molding (HVARTM) process and cured at 149°C. The two sets of composites were infused with resin in a single vacuum bag to ensure that both sets of specimens had identical resin infusion and cure cycles. Laminates used for the control specimens were fabricated from carbon fabric that did not undergo any heat treatment. A change in laminate thickness for the heat-treated carbon fabric was observed, indicating a possible bulking up of the carbon fibers due to loss of sizing compounds, which also resulted in a significant change in tensile properties.

Commentary by Dr. Valentin Fuster
2016;():V014T11A008. doi:10.1115/IMECE2016-65354.

Boronizing is a thermochemical surface treatment process extensively used to obtain excellent mechanical properties such as high strength, very high hardness, and good toughness and fracture toughness. In this study, AISI 1050 steel specimens were subjected to a pack boronizing process using Ekabor 2 powder within a sealed stainless steel container. The experiments were carried out at temperatures of 800°C, 850°C, and 900°C for 3, 6, and 9 hours to investigate the effect of these parameters on the wear resistance of the boronized specimens. Pin-on-disk wear testing was used to characterize the wear properties of the boronized specimens. Wear tests were performed under dry conditions at a constant load of 30 N using 220-mesh Al2O3 abrasive paper. Different pin-on-disk test durations of 300, 600, 900, 1200, and 1500 revolutions were selected for the test specimens. After the abrasive tests, the weight losses of the specimens were measured to determine their abrasive wear resistance. The results were also compared with unboronized and conventionally hardened AISI 1050 steel specimens, respectively.

Commentary by Dr. Valentin Fuster
2016;():V014T11A009. doi:10.1115/IMECE2016-65484.

Thin-walled structures are widely used as energy-absorbing devices during accidents and collisions in various transportation systems. Designing an energy-absorbing device requires the proper combination of geometry and material. The deformation behavior and collapse modes of these structures are complex: simple geometries with square, polygonal, or circular cross sections deform with various collapse modes for energy absorption. In the present work, square extruded aluminum tubes are axially compressed under quasi-static loading. Infrared thermal imaging is used to measure the rise in surface temperature during axial compression of the square tube. Post-experimental investigations were conducted using a scanning electron microscope (SEM) and computerized tomography (CT) scans to understand the deformation behavior at the micro level. The out-of-plane displacement after progressive buckling is measured using a coordinate measuring machine (CMM). The full-field 3D digital image correlation (DIC) technique was used to measure the surface strain. The results indicate a good correlation between the displacements measured by the DIC technique and the CMM. The strain field developed during progressive buckling suggests large strains at the crumple zones. SEM investigations suggest material pile-up at severely compressed regions, with thinning at the tensile deformation edges.

Topics: Buckling
Commentary by Dr. Valentin Fuster
2016;():V014T11A010. doi:10.1115/IMECE2016-65514.

The properties of carbon nanotubes are dependent, in part, on the size of the catalyst metal nanoparticles from which the carbon nanotubes are grown. Annealing is a common technique for forming the catalyst nanoparticles from deposited films. While there is ample work connecting catalyst film properties or catalyst nanoparticle properties to carbon nanotube growth outcomes, the control of catalyst nanoparticle size by means other than the variation of initial film thickness is less explored. This work develops an empirical correlation for the control of nickel nanoparticle equivalent diameter by modification of anneal plateau temperature and anneal plateau time, thereby providing an additional avenue of control for catalyst properties. It has been hypothesized that the size of catalyst nanoparticles can be predetermined by appropriate selection of the initial catalyst film thickness, plateau temperature, and plateau time of the annealing process. To this end, buffer layers of 50 nm titanium, followed by 20 nm aluminum, were deposited onto silicon substrates via electron beam evaporation. Nickel catalyst layers were then deposited with thicknesses of either 5, 10, or 20 nm. Samples of each of the three nickel layer thicknesses were annealed in an ambient air environment at different combinations of 500, 600, 700, 800, and 900 °C plateau temperature and 5, 10, and 15 minute plateau time. Representative time-temperature curves corresponding to each plateau temperature were also acquired. The end result was a set of 45 samples, each with a unique combination of initial nickel film thickness, anneal plateau temperature, and anneal plateau time. Resulting nanoparticles were characterized by atomic force microscopy, and distributions of nanoparticle equivalent diameter were collected via a watershed algorithm implemented by the Gwyddion software package. Comparison of the 45 parameter combinations revealed a wide range of nanoparticle sizes. 
In most cases, comparable equivalent diameters were obtained from a variety of parameter combinations. Thus, the results provide multiple options for achieving the same nanoparticle diameter, for use in cases where additional constraints are present. To facilitate such decisions, a correlation was developed connecting catalyst nanoparticle diameter to the three process parameters of initial catalyst film thickness, anneal plateau temperature, and anneal plateau time. For example, a given initial Ni film thickness can be annealed to a specified nanoparticle size by selecting the anneal plateau temperature and plateau time per the correlation, provided that comparable buffer layers are chosen. This correlation provides a more robust array of options for specifying catalyst nanoparticle size and final carbon nanotube properties for a specific application.
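The idea of extracting an empirical correlation from such a parameter sweep can be sketched as follows. The paper's actual functional form and coefficients are not reproduced here; the power-law model and the synthetic "measurements" below are purely hypothetical, illustrating how one exponent per process parameter can be estimated from pairs of runs in which only that parameter changes.

```python
import math

# Hypothetical correlation form: d = a * h^p * (T/500)^q * t^r, with film
# thickness h (nm), plateau temperature T (C), and plateau time t (min).

def exponent(d1, d2, v1, v2):
    """Estimate one exponent from two runs where only variable v changed."""
    return math.log(d2 / d1) / math.log(v2 / v1)

# Synthetic diameters generated from an assumed ground-truth correlation.
def diameter(h, T, t):
    return 2.0 * h ** 0.8 * (T / 500.0) ** 1.5 * t ** 0.2

p = exponent(diameter(5, 500, 5), diameter(10, 500, 5), 5, 10)    # vary h
q = exponent(diameter(5, 500, 5), diameter(5, 900, 5), 500, 900)  # vary T
r = exponent(diameter(5, 500, 5), diameter(5, 500, 15), 5, 15)    # vary t
```

With the exponents recovered, the fitted correlation can be inverted to pick a plateau temperature and time that yield a target diameter for a given film thickness, which is the selection workflow the abstract describes.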

Commentary by Dr. Valentin Fuster
2016;():V014T11A011. doi:10.1115/IMECE2016-65731.

Carbon nanomaterials (CNMs) are nowadays important in applied nanoscience development due to their extraordinary chemical and physical properties. The present research applies a Taguchi methodology to obtain CNMs with high carbon concentration by chemical vapor deposition (CVD), using hexane as the carbon source and a stainless steel core as the catalyst. The Taguchi experimental design identified the optimal variables and levels; flow rate, temperature, and synthesis time were studied. Scanning electron microscopy (SEM) depicted different carbon morphologies, and energy dispersive spectroscopy (EDS) demonstrated a carbon atomic concentration above 97%. Temperature was the most significant variable according to the Taguchi analysis.

2016;():V014T11A012. doi:10.1115/IMECE2016-65911.

Using molecular dynamics (MD) simulations, we explore the structural stability and mechanical integrity of phosphorene nanotubes (PNTs), in which the intrinsic strain of the tubular PNT structure plays an important role. It is proposed that the atomic structure of larger-diameter armchair PNTs (armPNTs) can remain stable at higher temperatures, but the high intrinsic strain in the hoop direction renders zigzag PNTs (zigPNTs) less favorable. The mechanical properties of PNTs, including the Young's modulus and fracture strength, are sensitive to the diameter, showing a size dependence. A simple model is proposed to express the Young's modulus as a function of the intrinsic axial strain, which in turn depends on the diameter of the PNT. In addition, the compressive buckling of armPNTs is length-dependent, with instability modes transitioning from column buckling to shell buckling as the diameter-to-length ratio increases.

2016;():V014T11A013. doi:10.1115/IMECE2016-66048.

The surface interactions and tribological behavior of titanium-steel contact have previously been studied under lubrication with several commercial ionic liquids (ILs). In certain cases, superior anti-wear characteristics have been observed when lubricating with ILs; this is often attributed to a protective tribolayer that forms during operation. One anion in particular, the amide [Tf2N], has exhibited these characteristics with particularly positive results. However, [Tf2N] is an anion that contains halogens, which are toxic and can cause harm if not handled properly. Given the toxicity of most lubricants, there is a growing need to transition to bio-lubricants because of their low environmental impact. This work investigates the use of the trihexyltetradecylphosphonium cation, [P6,6,6,14]+, paired with the decanoate anion, [Deca], as a non-toxic alternative to the amide [Tf2N]. [P6,6,6,14][Deca] and [P6,6,6,14][Tf2N] are compared as additives (1.0 and 2.5 wt. %) in coffee bean oil (CB) for lubrication of titanium-steel contact at room temperature.

In this work, tests are conducted using a ball-on-flat reciprocating tribometer as per ASTM G133 with lubricated titanium-steel contact. An AISI 420C stainless steel ball is run against a Grade 5 Ti-6Al-4V titanium alloy disk specimen. Friction and wear volume are measured, examined, and discussed.

Topics: Temperature, Steel, Titanium
2016;():V014T11A014. doi:10.1115/IMECE2016-66073.

A supercharger is a mechanical device added to a car engine to increase engine power. It works by drawing in air at atmospheric pressure and compressing it with rotors spinning at high revolutions per minute. With the rotors spinning at high speeds, the supercharger gears are exposed to severe friction and wear, which shortens their service life.

Ionic liquids (ILs) are substances that possess unique lubricating abilities when added to a base oil or when used as neat lubricants. Their properties include low volatility, non-flammability, and high thermal resistance. These liquids are able to form ordered layers and tribofilms on the contacting surfaces, which further protect the surface materials. In this work, the effect of adding ILs to a low-viscosity synthetic oil used to lubricate gears, and to an organic oil, was investigated for the reduction of friction and overall wear in superchargers. Mobil 1 5W-30 Full Synthetic Engine Oil (MS) was used as a control and compared to coffee bean oil (CB). Additionally, the performance of these oils was observed with ionic liquids as additives at 1 wt. %. The chosen IL consisted of the cation trihexyltetradecylphosphonium, [P6,6,6,14]+, with the anion bis(trifluoromethylsulfonyl)amide, [NTf2]. Lubricated flat disks of AISI 52100 steel and 420C stainless steel balls were studied using a pin-on-disk configuration. A total sliding distance of 500 meters was tested with a wear track diameter of 20 mm. Wear volume and average friction coefficient were measured according to ASTM G99. Results showed that the addition of the ILs to the CB and MS reduced the friction coefficient of the steel disks at medium speeds, and the wear reductions achieved were comparable to those observed for friction. The wear track width values were also reduced at medium speeds.
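For the ASTM G99 pin-on-disk geometry described here (20 mm wear track diameter), the disk wear volume can be computed from the measured wear track width, assuming negligible ball wear. A minimal sketch of the standard relation, with illustrative input values:

```python
import math

def disk_wear_volume_mm3(track_radius_mm, track_width_mm, ball_radius_mm):
    # ASTM G99 disk volume loss (assumes negligible ball/pin wear):
    # V = 2*pi*R * [ r^2 * asin(d / 2r) - (d / 4) * sqrt(4r^2 - d^2) ]
    # with R the wear track radius, d the track width, r the ball radius.
    R, d, r = track_radius_mm, track_width_mm, ball_radius_mm
    return 2.0 * math.pi * R * (r**2 * math.asin(d / (2.0 * r))
                                - (d / 4.0) * math.sqrt(4.0 * r**2 - d**2))

# Illustrative numbers: 10 mm track radius (20 mm diameter track),
# 0.1 mm measured track width, 5 mm ball radius.
v = disk_wear_volume_mm3(10.0, 0.1, 5.0)
```

For narrow tracks (d much smaller than r) this reduces to the approximation V ≈ πRd³/(6r), a convenient sanity check on measured widths.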

2016;():V014T11A015. doi:10.1115/IMECE2016-66161.

Aluminum 7075 alloy (AA 7075) is one of the prime materials used in the aviation and automotive industries because of its high strength-to-weight ratio, good fatigue strength, and high machinability. Friction stir processing (FSP) is an emerging solid-state process that refines the microstructure, and hence improved mechanical properties are obtained. The process temperature during FSP affects the resulting microstructure, so reducing the process temperature can reduce the grain size. A fine-grained microstructure delivers a high percentage of elongation, which reduces the number of joints and welds in critical structural applications. Accordingly, applying coolants such as water and carbon dioxide (CO2) during the process hindered grain growth, and a very fine-grained microstructure was obtained; the fine grains offer higher elongation and hardness, as deformation starts from the grain boundaries. In this experimental investigation we intended to keep the temperature generated during the process as low as possible while holding the process parameters of 765 rpm, 31.5 mm/min feed rate, and 2° tool tilt (optimized for a tapered threaded cylindrical pin tool) constant. All the samples were examined by metallographic characterization using an optical microscope, and grain size measurements for all three FSP samples were carried out. The water- and CO2-cooled FSP samples showed much finer grains than the naturally cooled sample because of the lower heat input during the process.

Topics: Friction, Cooling, Aluminum
2016;():V014T11A016. doi:10.1115/IMECE2016-66317.

The newly developed mechanistic-empirical pavement design method uses the dynamic modulus as one of the crucial input parameters for the asphalt pavement to be designed or analyzed. This study proposes a new regression-based predictive model that estimates the dynamic modulus of asphalt concrete from the viscosity of the asphalt binder used in the asphalt-aggregate mixture. Parameters related to the aggregate gradation, such as fineness modulus and uniformity coefficient, and parameters related to the mixture volumetrics are also incorporated in the model. A total of 21 asphalt concrete mixtures with asphalt binders having different performance grades and Superpave gradations were collected from different mixing plants and paving sites in various regions of New Mexico. The collected mixtures were compacted, cored, and sawed into cylindrical specimens, which were then tested for dynamic modulus at different temperatures and loading frequencies. The time-temperature superposition principle was applied to develop dynamic modulus mastercurves at a 70 °F (21.1 °C) reference temperature, and the mastercurves were fitted with a sigmoid function. The parameters of the sigmoid function were then correlated to the physical attributes of the asphalt concrete samples. Finally, a predictive model was developed to estimate the dynamic modulus of the AC mixtures typically used in New Mexico. Statistical evaluation showed that a fairly accurate estimate of the dynamic modulus can be obtained using this new predictive model.
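The sigmoidal mastercurve form used in mechanistic-empirical design expresses log dynamic modulus as a sigmoid of log reduced frequency, with a temperature shift factor collapsing all test temperatures onto the reference curve. A sketch follows; the Arrhenius shift law, activation energy, and sigmoid coefficients are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

R_GAS = 8.314     # J/(mol K)
DH = 200_000.0    # assumed apparent activation energy, J/mol (illustrative)

def log_shift_factor(T_C, T_ref_C=21.1):
    # Arrhenius-type time-temperature shift factor log10(aT)
    # (one common choice; the paper may use a different shift law).
    T, Tr = T_C + 273.15, T_ref_C + 273.15
    return DH / (2.303 * R_GAS) * (1.0 / T - 1.0 / Tr)

def dynamic_modulus(freq_hz, T_C, delta=0.5, alpha=3.5, beta=-1.0, gamma=-0.5):
    # Sigmoidal mastercurve:
    #   log10|E*| = delta + alpha / (1 + exp(beta + gamma * log10(fr)))
    # with fr the reduced frequency at the 21.1 C reference temperature.
    log_fr = np.log10(freq_hz) + log_shift_factor(T_C)
    return 10.0 ** (delta + alpha / (1.0 + np.exp(beta + gamma * log_fr)))
```

The fitted sigmoid coefficients (delta, alpha, beta, gamma) are what the study correlates to binder viscosity, gradation, and volumetric properties.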

2016;():V014T11A017. doi:10.1115/IMECE2016-66437.

In order to predict dynamic recrystallization (DRX) texture evolution during the forming of aluminum alloys, we propose a hypothesis for DRX evolution and develop a comprehensive computational tool for thermal process metallurgy simulation. It consists of a two-scale finite element method based on thermo-coupled elasto-crystalline plasticity analysis and a dynamic-explicit finite element procedure. The tool can predict heat generation and diffusion and plastic anisotropy at the macro-scale, and crystal texture evolution, including DRX due to plastic deformation and heat generation, at the micro-scale.

The computationally evaluated texture evolution, including the DRX texture, under severe compression at high temperature is compared against experimental pole figures and orientation distribution function (ODF) analyses. The predictions reproduce the evolution of the cube component observed in the experiments; our proposed method is therefore shown to have potential for predicting DRX texture evolution. Furthermore, we clarify the effect of DRX texture on the onset of instabilities such as necking, surface instability, and shear bands, which are closely related to the formability and failure of the materials.

2016;():V014T11A018. doi:10.1115/IMECE2016-66524.

Laser shock peening (LSP) is an innovative technique that produces a compressive residual stress on the surface of metallic materials, thereby significantly increasing fatigue life in applications where failure is caused by surface-initiated cracks. The purpose of the present study was to investigate the influence of Nd:YAG laser treatment on commercially pure titanium (CP-Ti) used in prosthetic dental restorations. Though CP-Ti is considered an excellent material for dental applications due to its outstanding biocompatibility, it is not suitable when high mastication forces are applied. A pulsed Nd:YAG laser surface treatment was therefore adopted to improve the wear resistance of CP-Ti. The specimens were treated with laser shock waves under different processing parameters, and characterization studies were made on the treated specimens; the treatment influenced the microstructure, microhardness, surface roughness, and wear resistance characteristics. The wear test pin specimens of CP-Ti were investment cast with a centrifugal titanium casting machine, and their wear properties were evaluated after LSP on a pin-on-disc wear testing tribometer, as per ASTM G99-05. The results of the wear experiment showed that the laser-treated surface has higher wear resistance, microhardness, and surface roughness than the as-cast samples. The improvement in wear resistance may be attributed to grain refinement imparted by the LSP process. The microstructure, wear surfaces, wear debris, and morphology of the specimens were analyzed using optical microscopy, scanning electron microscopy, and X-ray diffraction (XRD), and the data were compared using ANOVA and post-hoc Tukey tests. The observed changes resulted in an increase in wear resistance and a decrease in wear rate; LSP-treated CP-Ti may therefore find application in more reliable removable partial denture metal frameworks for dental prostheses.
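The ANOVA comparison mentioned above tests whether group means (e.g. as-cast versus LSP-treated wear rates) differ more than within-group scatter would explain. A minimal one-way ANOVA F-statistic sketch, with hypothetical data in arbitrary units (not the paper's measurements):

```python
import numpy as np

def one_way_anova_F(*groups):
    # Classic one-way ANOVA F statistic: between-group mean square
    # divided by within-group mean square.
    groups = [np.asarray(g, dtype=float) for g in groups]
    N = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)   # between
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)        # within
    return (ssb / (k - 1)) / (ssw / (N - k))

# Hypothetical wear-rate readings: as-cast vs. LSP-treated specimens.
F = one_way_anova_F([5.1, 5.3, 4.9], [3.2, 3.0, 3.4])
```

A large F relative to the critical value for (k-1, N-k) degrees of freedom indicates a significant treatment effect; post-hoc Tukey tests then identify which group pairs differ.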

2016;():V014T11A019. doi:10.1115/IMECE2016-66612.

Coatings are extensively used in many areas, including industrial and medical fields, to serve various functions such as corrosion resistance, wear resistance, and antibacterial protection. Copper and copper alloys are among the most widely applied coating materials for several industrial and medical applications, one of the most common being antibacterial coatings. Most of the research in this field focuses on antibacterial behavior, with no comprehensive assessment of mechanical properties such as hardness and adhesion strength. In this work, the strength and hardness of pure copper and several copper alloys, including Cu Sn5% P0.6%, Cu Ni18 Zn14 (German silver), and Cu Al9 Fe1, are assessed experimentally and numerically. All coatings are deposited on stainless steel substrate disks of 25 mm diameter by wire-arc thermal spraying at the Centre for Advanced Coating Technologies, University of Toronto. All coatings are 150 microns in thickness, with two additional thicknesses up to 350 microns for Cu Ni18 Zn14 (German silver) and Cu Al9 Fe1. The effect of coating thickness and composition on the mechanical properties is studied for all the copper and copper alloy samples with thicknesses varying between 150 and 350 microns.

Scanning electron microscopy (SEM) is used to study the surface and cross-sectional microstructure of the coatings. Vickers micro-indentation tests are used to evaluate hardness at various locations on the cross-section of the coating and the substrate, in order to assess the effect of the deposition of the coating material, and the subsequent solidification, on the hardness of the coating layer as well as the substrate near the coating interface. Pull-off adhesion tests are performed to evaluate the effect of coating composition and thickness on the strength of the coatings; these tests determine the pull-off failure stress that causes delamination between the coating and the substrate. Computational analysis by means of finite element analysis will be used to calibrate the experimental data when available.
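The Vickers micro-indentation hardness mentioned above follows from the applied load and the mean length of the two indent diagonals via the standard Vickers relation. A minimal sketch with illustrative input values:

```python
def vickers_hardness(load_kgf, d1_mm, d2_mm):
    # Standard Vickers relation: HV = 1.8544 * F / d^2,
    # with F the load in kgf and d the mean indent diagonal in mm.
    d = 0.5 * (d1_mm + d2_mm)
    return 1.8544 * load_kgf / d ** 2

# Illustrative micro-indentation: 300 gf load, 50 um mean diagonal.
hv = vickers_hardness(0.3, 0.050, 0.050)
```

Mapping HV across the cross-section (coating, interface, substrate) is what reveals any hardening or softening induced by deposition and solidification.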

The preliminary pull-off tests show interesting results: samples with lower coating thicknesses exhibit delamination at higher strengths. This is attributed to residual stresses that build up during the deposition process and increase with coating thickness; some of the samples with the lower 150 micron thickness did not fail at all. A comprehensive analysis of adhesion strength and hardness will be very useful in understanding the effect of coating composition and thickness on the mechanical properties of the coating.

2016;():V014T11A020. doi:10.1115/IMECE2016-66635.

This article investigates the influence of finite element model features on fiber reinforced polymer (FRP) crushing simulation results. The study focuses on two composite tube models using a single-shell modeling approach. The chosen material model is MAT58 (*MAT_LAMINATED_COMPOSITE_FABRIC) from the commercial finite element analysis software LS-DYNA. The baseline models' geometry and material parameters come from a model calibration conducted for a lightweight-vehicle investigation. Five parameters are investigated: the mesh size and the number of integration points (NIP), which are generic, and ERODS, TSIZE, and SOFT, which are the non-physical parameters of MAT58. The analysis discusses the influence of these parameters on the simulation results, focusing on the initial force peak and the average crush load, with regard to result realism and instabilities such as large element deformations and abnormal peak values. The impact of the number of CPUs involved in the simulation is also presented. Recommendations are given for setting the mesh size and the NIP. The TSIZE value should be selected with regard to the simulation time step, whereas ERODS has to be adjusted manually; both are determinant for simulation robustness. Further studies are proposed to find the causes of large element deformation.

Topics: Fibers, Simulation, Polymers
2016;():V014T11A021. doi:10.1115/IMECE2016-66835.

Bone is a living tissue that constantly remodels and adapts to the stresses imposed upon it. Bone disorders are of growing concern as the median age of our population rises. Healing and recovery from fractures require bone cells to have a three-dimensional (3D) structural base, or scaffold, to grow out from. In addition to providing mechanical support, the scaffold, an extracellular matrix (ECM) assembly, enables the transport of nutrients and oxygen into, and the removal of waste materials from, cells that are growing into new tissue. In this research, a 3D scaffold was synthesized from chitosan (CS), carboxymethyl chitosan (CMC), calcium phosphate monobasic, and magnesium oxide (MgO). CS is a positively charged natural bioactive polymer; it is combined with its negatively charged derivative, CMC, to form a complex scaffold. Magnesium phosphate biocement (MgP), formed by reacting calcium phosphate monobasic and MgO, was incorporated into the CMC solution before adding the CS solution. Scaffolds were prepared by casting, freezing, and lyophilization, and were characterized in terms of pore microstructure, surface topography, water uptake and retention abilities, and crystal structure. The results show that the developed scaffolds exhibit highly interconnected pores in the ideal pore size range (100–300 μm) to be morphometrically suitable for the proposed bone tissue engineering applications. These scaffolds not only mimic the nanostructured architecture and chemical composition of natural bone tissue matrices but also serve as a source of soluble magnesium (Mg++) and calcium (Ca++) ions that are favorable to osteoblast cells. The scaffolds thus provide a desirable microenvironment to facilitate biomineralization. These observations provide a new, effective approach for preparing scaffold materials suitable for bone tissue engineering.

2016;():V014T11A022. doi:10.1115/IMECE2016-66900.

Magnesium injection is a suitable approach for replenishing magnesium ions (Mg++) during neural or tissue injury and stroke, avoiding the risks associated with abnormally low levels of Mg++ in blood. In this study, alginate-encapsulated magnesium sulfate microbeads were fabricated by the electrospraying technique for Mg++ delivery. The microbeads were evaluated for particle size and surface morphology using inverted optical microscopy and scanning electron microscopy (SEM), respectively. Average particle sizes of 200–500 μm for hydrated beads and 50–200 μm for dry beads were observed. An in vitro release study of Mg++ was performed, revealing a cumulative release of ∼50% within the first 24 h. This strategy can potentially be useful for the targeted local delivery of magnesium at required concentrations, and can subsequently enhance the therapeutic efficacy of magnesium in treating tissue injury or stroke.

2016;():V014T11A023. doi:10.1115/IMECE2016-66924.

This work focuses on the effect of using sandblasted Aluminum 5052 H36 sheets as reinforcement in fiber metal laminates (FMLs) containing glass fiber - Kevlar epoxy layers and glass fiber - carbon fiber epoxy layers. Two sets of 9-layered composites were fabricated using the compression moulding technique, as follows: 1) sandblasted Aluminum 5052 H36 sheet, S-glass fiber, and Kevlar fiber with epoxy matrix; 2) sandblasted Aluminum 5052 H36 sheet, S-glass fiber, and carbon fiber with epoxy matrix. Flexural experiments on the composites were conducted to investigate delamination under bending loads, and Izod impact studies were performed to determine the notch toughness of the composites and to study debonding under impact loading. The flexural results revealed no delamination between the sandblasted Aluminum 5052 H36 - fiber interlayers, owing to the increase in the surface roughness of the aluminum sheets through sandblasting, while pronounced delamination was observed between the fiber - fiber interlayers. Impact testing of the composites likewise showed no delamination between the Aluminum 5052 H36 - fiber interlayers, and a brittle fracture surface was observed. Thus, sandblasting of the Aluminum 5052 H36 layers proves to be a beneficial technique for overcoming the inherent problem of delamination in FMLs.

Topics: Sands
2016;():V014T11A024. doi:10.1115/IMECE2016-67012.

This work provides a method to characterize a variety of novel high-performance lubricants. In particular, the surface tension and the evolution of the contact angle on a variety of surfaces over one hour were recorded. Contact angles were measured using a ramé-hart goniometer; surface tension was measured with the same device using the pendant drop method. The fluids studied here include: trihexyltetradecylphosphonium bis(trifluoromethyl-sulfonyl)amide ([THTDP][NTf2]), trihexyltetradecylphosphonium decanoate ([THTDP][Deca]), 1-ethyl-3-methylimidazolium trifluorosulfonyl imide ([EMI][NTf2]), and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate ([EMI][FMS]). Contact angles were measured on the following surfaces: AISI 52100 steel disks polished to 0.01–0.05 μm, Kapton, SU-8, Teflon, and glass slides. The resultant change in roughness of the 52100 steel disks was measured to provide insight into the corrosive properties of each liquid.

Topics: Lubricants
2016;():V014T11A025. doi:10.1115/IMECE2016-67174.

A new experimental set-up has been built to characterize the permeation of polymeric materials. The permeation of helium through a variety of polymers under different pressure and temperature conditions was investigated. The gas permeation measurements were performed with a high-temperature/high-pressure gas permeation cell designed around the constant-volume (variable-pressure) procedure. Special consideration was given to making the cell appropriate for testing polymer samples under high-pressure (up to 1200 psi) and high-temperature (up to 100 °C) conditions. The permeation cell consists of two gas chambers: a high-pressure side and a low-pressure side. A modular plug-in was designed to fit inside the gas chambers, making the design adjustable for testing polymers of different thicknesses. The pressure change on the low-pressure side of the set-up was measured by a sensitive pressure transducer; this downstream pressure, along with the differential pressure applied to the polymer sample and test conditions such as temperature, is used to calculate the gas flux and the gas permeation coefficient of the polymers. The results of permeability measurements for thick polymer samples at different pressures and temperatures showed that the effect of increasing temperature on gas permeation is prominent compared to the effect of the high-pressure condition.
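In the constant-volume (variable-pressure) method, the permeation coefficient follows from the steady-state slope of the downstream pressure rise. A minimal sketch of that calculation (variable names and the synthetic pressure trace are illustrative, not this set-up's data):

```python
import numpy as np

GAS_CONSTANT = 8.314  # J/(mol K)

def permeability(t_s, p_down_Pa, V_m3, thickness_m, area_m2, p_up_Pa, T_K):
    # Constant-volume method: the steady-state downstream slope dp/dt
    # in the known downstream volume V gives the molar flux, so
    #   P = (V * l / (p_up * A * R * T)) * dp/dt   [mol.m / (m^2 s Pa)]
    dpdt = np.polyfit(t_s, p_down_Pa, 1)[0]   # linear fit slope, Pa/s
    return V_m3 * thickness_m * dpdt / (p_up_Pa * area_m2 * GAS_CONSTANT * T_K)

# Synthetic example: a linear 2 Pa/s downstream rise at 8 MPa upstream.
t = np.arange(0.0, 100.0, 1.0)
p_down = 100.0 + 2.0 * t
perm = permeability(t, p_down, V_m3=1e-4, thickness_m=1e-3,
                    area_m2=1e-3, p_up_Pa=8e6, T_K=298.15)
```

In practice only the late, linear portion of the downstream trace is fitted, after the initial time-lag transient has passed.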

2016;():V014T11A026. doi:10.1115/IMECE2016-67227.

This study evaluates the wear resistance of magnesium-ceramic nano-layered thin films using atomic force microscopy (AFM). The magnesium-ceramic (alumina and silica) nano-layered thin films were deposited on glass substrates using DC and pulsed-DC reactive magnetron sputtering. The surface roughness and wear properties were evaluated with a single-crystal diamond tip. It was observed that the surface roughness increased with the number of layers for both magnesium-ceramic films. The nano-tribology tests showed a strong adhesion between the Mg and ceramic layers. The results showed that alumina nanolayers resist wear more effectively than silica, and that magnesium-silica nanolaminates lose wear resistance as the number of layers increases.

2016;():V014T11A027. doi:10.1115/IMECE2016-67286.

Shape memory polymers can be triggered to recover memorized shapes from temporarily deformed forms using thermal stimuli. This paper focuses on the characterization of the shape memory behaviors observed in selected 3D-printable photo-cured polymer parts and in filament with specified fillers. The shape recovery ratio and recovery time were analyzed using 3D-printed specimens with 90° bends. Parts made of mixtures of selected commercially available polymers, a rigid polymer (RP) and two digitally mixed polymer blends (DB-A and DB-B), were 3D-printed on a multi-material 3D printer capable of producing digital materials with variable mix ratios. The recovery ratios were determined after thermal triggering and after long-term creep (self-recovery) without thermal triggering. The 3D-printed parts were heated above their glass transition temperature to train temporary shapes, and the recovery of the original shapes after a thermal trigger was monitored using a high-resolution camera; thermally triggered recovery was also observed under a high-resolution microscope by reheating with hot water at 90 °C. Long-term self-recovery (non-triggered) was studied at room temperature by taking periodic images of parts after the temporary shape had been trained, as they tried to regain their original shape over several days of slow recovery. The recovery of bending angles was quantitatively recorded from the images taken during the shape recovery process. The effect of the inclusion of fillers on the shape recovery characteristics was also investigated: silicon carbide (SiC) at different weight fractions was mixed into PLA powders, continuous filaments were extruded using a single-screw extruder, and the recovery time under thermal activation was then characterized to determine the effect of the added fillers. The effects of material-mix ratio, initial printed orientation, and filler type on the recovery ratio and recovery time are described in this paper.

2016;():V014T11A028. doi:10.1115/IMECE2016-67314.

Twisted and coiled polymer (TCP) muscles are soft actuators made by inserting twist into a precursor fiber while a dead weight hangs from its end, followed by heat treatment. TCP muscles are thermally driven actuators with a high power-to-weight ratio, large strain, and low cost. These muscles have a wide variety of applications in engineering, particularly in robotics, since the actuators undergo large linear deformation in response to applied power (Joule heating). The performance of these muscles depends on numerous fabrication parameters, such as coiling speed, the dead weight used, precursor fiber type, number of filaments in the precursor fiber, number of plies, and training cycles. An in-depth study of the fabrication parameters is required to understand the performance of the muscles. We have designed an experimental setup to study the muscles' performance in terms of input parameters such as load, current, and voltage, and output results such as displacement, force, and temperature. We present a study of single-, double-, and triple-plied muscles that are fabricated by plying together a twisted and coiled filament. Further, the power consumption of the muscles under various conditions is discussed. This study will help establish a procedure for fabricating these materials with consistent properties.
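Since the actuation is driven by Joule heating, the electrical input energy of a pulse follows directly from the logged voltage and current traces. A minimal sketch with a hypothetical constant-power pulse (the values are illustrative, not measurements from this setup):

```python
import numpy as np

def actuation_energy_J(t_s, volts, amps):
    # Electrical input energy E = integral of V(t) * I(t) dt,
    # computed here with the trapezoidal rule over logged samples.
    t = np.asarray(t_s, dtype=float)
    p = np.asarray(volts, dtype=float) * np.asarray(amps, dtype=float)
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

# Hypothetical 10 s pulse at a constant 5 V and 2 A.
t = np.linspace(0.0, 10.0, 11)
energy = actuation_energy_J(t, [5.0] * 11, [2.0] * 11)
mean_power = energy / (t[-1] - t[0])
```

Dividing the same integral by actuation stroke or lifted load gives the comparative power-consumption figures discussed for the single-, double-, and triple-plied muscles.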

2016;():V014T11A029. doi:10.1115/IMECE2016-67546.

Hot micro-extrusion is a good candidate for manufacturing metallic micro-components with complex shapes and good mechanical properties. This work investigates the metal flow in direct hot mini- and micro-extrusion of aluminum rods by means of FEM simulations. The study was performed with the numerical code DEFORM 2D in axisymmetric mode, using experimental material flow curves for the AA 7108 alloy. The mini-to-micro dimensional scale factor was 10:1, the extrusion ratios were 16 and 100, and the ram speed ranged from 0.5 to 20 mm/s; the initial temperature of the billet and tooling was 400 °C. It was found that the temperature distribution within the profile changed from mini- to micro-extrusion, and the effect was more significant for higher reductions and higher extrusion velocities. The micro-scale effect increased the stress field just behind the die orifice, and the material deformed more intensely to achieve the equilibrium velocity. It was shown that the scale effect can be evaluated by the relative strain rates associated with the corresponding deformation. Numerical results indicate that the scale effect increases exponentially with velocity and becomes significant for final exit velocities of the extrudate above the level of 100 mm/s.

2016;():V014T11A030. doi:10.1115/IMECE2016-67604.

In this paper a nanocomposite system is proposed to accurately measure the pressure generated by a normal force. The strain sensing function is achieved by correlating piezoresistance variations to the normal force applied on the sensing area. Due to the conductive network formed by carbon nanofibers (CNFs) and the tunneling resistance change between neighboring CNFs, the electrical resistance measured using the four-probe method shows a clear correlation with the load conditions. In order to improve the piezoresistivity, the CNFs are uniformly dispersed in PDMS using a solvent-assisted ultrasonication method. The proposed nanocomposite-based strain sensor is experimentally characterized under both quasi-static and cyclic load conditions.

2016;():V014T11A031. doi:10.1115/IMECE2016-68035.

The major global drivers affecting our societies are human population, food security, energy security, resource depletion, emissions and the associated climate change, community safety, transportation, and economic globalization. Of these, the most important is human population. In order to satisfy the increasing needs of the growing population, the manufacturing sector is undergoing rapid growth and technological development, and environmentally friendly design and manufacturing methods are attaining high importance for sustainable development. As the manufacturing sector deals with many different input resources and waste streams, there is a need to make these developments economically feasible and sustainable, and thermodynamic assessment methodologies provide an efficient way of quantifying input and output streams. To better understand the energy flow involved in a manufacturing process, a methodology based on the principles of thermodynamics is needed to assess the process. The application of the second law of thermodynamics, in the form of exergy analysis, is very helpful in improving process efficiency; in addition, exergy analysis can reveal the energy inefficiencies present within the process, generally referred to as exergy losses, and increasing the exergy efficiency of a process decreases its environmental impact. The presented study provides an overview of implementing exergy analysis for a metal cutting operation of discrete nature. The study revealed that the efficiency of material removal can be increased by optimizing the input raw materials and the electrical energy supplied during the cutting operation.
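The second-law bookkeeping described above reduces to a simple balance: the exergy of the useful product over the sum of all exergy inputs, with the shortfall counted as exergy destroyed or lost. A minimal sketch with hypothetical figures for a cutting operation (the numbers are illustrative, not the study's data):

```python
def exergy_efficiency(exergy_product_kJ, exergy_inputs_kJ):
    # Second-law (exergetic) efficiency: useful product exergy over
    # total exergy input; the difference is the exergy destroyed/lost.
    total_in = sum(exergy_inputs_kJ)
    eta = exergy_product_kJ / total_in
    destroyed = total_in - exergy_product_kJ
    return eta, destroyed

# Hypothetical inputs (kJ): electrical energy and raw-material exergy.
eta, destroyed = exergy_efficiency(12.0, [80.0, 20.0])
```

Itemizing the destroyed exergy by sub-process is what exposes the inefficiencies the abstract refers to, and points at which inputs to optimize.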


Materials: Genetics to Structures: Modeling and Experiment of Multifunctional Materials in Extreme Environments

2016;():V014T11A032. doi:10.1115/IMECE2016-65727.

Cryogenic tanks are devices commonly used to store extremely low temperature fluids, usually in their liquid state. Cryogenic fuel tanks carry cryogenic propellants, such as liquid oxygen, liquid methane, or liquid hydrogen, at subfreezing temperatures in their condensed form in order to provide highly combustible liquids. This type of tank is exposed to an extremely cold temperature in its interior and to ambient temperature on its external surface, resulting in a large temperature gradient across the thickness of the wall. In this paper, hybrid textile composites with carbon and Kevlar® fabric are explored as a means to reduce the influence of this thermal gradient and enhance material performance when cryogenic propellant fuels are stored in spacecraft applications.

Previous initial studies of tensile and flexural tests indicated that carbon and Kevlar® textile composites are suitable materials for cryogenic temperatures: the pristine mechanical properties of the carbon composites changed by at most 3–4% after initial cryogenic exposure during the fueling stage, while those of the Kevlar® composites changed by 17%. Computational models of hybrid carbon-Kevlar® composites were subjected to cryogenic temperature (77 K) to investigate the effect of exposure for extended periods and to aid in the design of optimum layups. Six optimal combinations were selected that resulted in low interface stresses and a lower number of peak stresses through the thickness of the laminate; these layups were deduced to perform better than other layups owing to lower susceptibility to delamination-type failure upon cryogenic exposure. Experimental investigation of the chosen hybrid composites revealed a few optimum combinations for use in tanks. As a next step, computational analysis of cryogenic exposure of only one surface of the hybrid composites was performed to simulate the composite wall containing the liquid fuel, and experiments to determine optimum designs of the composite wall were conducted based on the suggestions from the computational models. An ABS plastic insulating holder was computationally designed and 3D-printed to hold the specimens such that only one surface is exposed to LN2. A total of eight composite layups were exposed to liquid nitrogen using the plastic holder to study their response to thermal-gradient cryogenic exposure. Based on the results obtained computationally and supported by experiments, optimum hybrid layups of composites to sustain cryogenic exposure were determined.


Materials: Genetics to Structures: Modeling, Simulation and Design of Multifunctional Materials

2016;():V014T11A033. doi:10.1115/IMECE2016-65811.

High strength steel (HSS) has high stiffness and strength and is usually applied in load-bearing structures, but its density is also high. Glass fiber-reinforced polymer, in contrast, has low density and excellent designability, but it cannot be used as a structural element on its own because of its low absolute stiffness and strength. A Polymer Metal Hybrid (PMH) structure combines polymer with metal and takes advantage of both materials, achieving light weight together with excellent mechanical properties. The main problem with this technology is that the connection strength between the two materials may be insufficient.

This paper takes a polypropylene/HSS hybrid structure as an example and studies the influence of interface microstructure on bonding strength. Four kinds of micro-scale interfacial geometry, the smooth plane, the triangular serration, the rectangular serration, and the mechanical interlock, were modeled and compared with each other. The bonding strength under bending, tension, and shear loads, respectively, was studied using an analytical method and finite element simulation. It is found that the microstructure geometry has an important effect on bonding strength. When the ratio of polymer modulus to HSS modulus rises, the interfacial strength and ultimate load-bearing capability increase. The optimal interfacial microstructure for each loading type is suggested, which is useful for the design of PMH structures.

Topics: Metals, Joining, Polymers
2016;():V014T11A034. doi:10.1115/IMECE2016-67071.

Computational design for property management of composite materials offers a cost-effective alternative approach for understanding the mechanisms involved in the thermal and structural behavior of a material under various combinations of inclusions and matrix material. The present study is concerned with analyzing the elasto-plastic and thermal behavior of Al2O3-Ni droplet composites using mean-field homogenization and the effective medium approximation (EMA), implemented in an in-house code. Our material design approach relies on a method for predicting potential optimum thermal and structural properties of Al2O3-Ni composites by considering the effects of inclusion orientation, volume, size, thermal interface resistance, percolation, and porosity. The primary goal in designing such alumina-based composites is to obtain enhanced thermal conductivity for effective heat dissipation and spreading. At the same time, other functional properties such as thermal expansion coefficient, elastic modulus, and electrical resistivity have to be maintained or enhanced. The optimum volume fraction was found to occur between 15 and 20 vol. % Ni, and an average nickel particle size of 5 μm was found to be the minimum size that enhances the thermal conductivity. The Young's modulus was found to decrease as the volume fraction of nickel increases, which would result in enhanced fracture toughness. Electrical conductivity was found to be greatly affected by the percolation phenomenon over the designed range of volume fraction and particle size. As a validation, Al2O3 composites with 10% and 15% volume fraction Ni and a droplet size of 18 μm were developed using the Spark Plasma Sintering process. The thermal conductivity and thermal expansion coefficient of the samples were measured to complement the computational design. Microstructural analysis of the sintered samples was also performed using an optical microscope to study the morphology of the developed samples. The present computational design tool was found to be accurate enough in predicting the desired properties of Al2O3-Ni composites.
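The effective-medium step can be illustrated with the classic Maxwell-Garnett approximation for spherical inclusions; this is a minimal sketch with assumed conductivities (alumina matrix ≈ 30 W/m·K, nickel inclusions ≈ 90 W/m·K), not the paper's in-house code or measured data:

```python
def maxwell_garnett(k_m, k_i, f):
    """Effective thermal conductivity of spherical inclusions (k_i)
    dispersed at volume fraction f in a matrix (k_m), per the
    Maxwell-Garnett effective medium approximation."""
    num = k_i + 2.0 * k_m + 2.0 * f * (k_i - k_m)
    den = k_i + 2.0 * k_m - f * (k_i - k_m)
    return k_m * num / den

# Illustrative values only (assumed, not the paper's results):
for f in (0.10, 0.15, 0.20):
    print(f"{f:.2f} -> {maxwell_garnett(30.0, 90.0, f):.2f} W/m-K")
```

Percolation and thermal interface (Kapitza) resistance, which the study accounts for, would modify this baseline estimate.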

2016;():V014T11A035. doi:10.1115/IMECE2016-68135.

A procedure for broadband topology optimization is applied to the design of acoustic cloaking. An acoustic cloak conceals a given object of arbitrary shape; that is, the object can be made undetectable with respect to acoustic wave propagation in a specific frequency range. The guided acoustic wave in a given direction re-attaches in the incident direction, leading to a minimized norm of the scattered field. Gradient-based topology optimization is accomplished using a time-dependent adjoint formulation for sensitivity analysis. Results indicate that the current methodology produces improved cloaking performance over a narrow band near a target frequency and, as expected, less than optimal performance away from this frequency. For topology optimization over a broad band, improved performance is realized over the entire frequency range, although it is not necessarily optimal at any given target frequency.


Materials: Genetics to Structures: Nanoengineered, Hierarchical, and Multi-Scale Materials

2016;():V014T11A036. doi:10.1115/IMECE2016-66221.

Through strain-induced morphological instability, protruding patterns of roughly commensurate nanostructures self-assemble on the surface of spherical core/shell systems. A three-dimensional (3D) phase field model is established for a closed substrate. We investigate both numerically and analytically the kinetics of the morphological evolution, from grooves to separated islands, which is sensitive to substrate curvature, misfit strain, and the modulus ratio between core and shell. Faster growth of the surface undulation is associated with core/shell systems having a harder substrate, a larger radius, or a larger misfit strain. Based on an Ag core/SiO2 shell system, the self-assembly of protruding SiO2 nano-islands is explored experimentally. The numerical and experimental studies herein could guide the fabrication of ordered quantum structures via surface instability on closed and curved substrates.

Topics: Self-assembly
2016;():V014T11A037. doi:10.1115/IMECE2016-67612.

Biological materials (biomaterials) have seen a marked increase in interest from the materials science and engineering community due to unique characteristics and properties that are typically sought after in traditional engineering materials. During the last few decades, research on biomineralized composites such as abalone shell, fish armor, turtle shell, and human bone revealed that these biological systems possess a carefully arranged multilayered composite structure. Unlike metals, ceramics, and traditional composite materials, biomineralized composites often possess enhanced characteristics such as penetration resistance, high toughness, flaw tolerance, energy dissipation, damage mitigation, and delamination resistance, all while achieving high strength-to-weight ratios. In this research, experimentally driven finite element modeling was used to investigate the elastic response of the biocomposite structure. The Atractosteus spatula (alligator gar) was used as the model structure for determining the elastic properties.

2016;():V014T11A038. doi:10.1115/IMECE2016-67724.

Engineered man-made composite (inhomogeneous) materials are well known for their superior structural properties. Man-made composite materials and multilayered systems are widely used in civilian and military applications. The combined multilayered systems are attractive because they are energy absorbent, lightweight, high-strength, and high-stiffness, and can provide good fatigue and corrosion resistance. Although engineered composites are promising and offer combinations of material properties not found in other structural materials, they are prone to delamination at the glued layer interface.

In contrast to man-made composites, most superior-performing materials found in nature possess a hierarchical biomineralized composite structure that tends to be delamination resistant. These delamination-resistant biocomposite structures [e.g., the exoskeleton fish scale of the alligator gar (Atractosteus spatula)] have mechanical properties that vastly exceed the properties of their relatively weak constituents. The fish scale is made up of 90 percent hard (inorganic mineral) and 10 percent soft (polymer-like organic collagen fiber) material by volume. Nature integrates hard and soft materials at different length scales to form a two-layered composite that better resists delamination.

The objective of this research was to use scanning electron microscopy (SEM) and nanoindentation to investigate the delamination-resistant behavior occurring at the layered interface of the alligator gar fish scale composite. The SEM imagery showed that, at the micron level, the collagen bundles and B-Ap crystals (C/B-Ap) form a distinctive two-layered system connected by what is described as a sawtooth geometrically structured interface. The outermost layer of the exoskeleton fish scale is called ganoine, while the inner layer is called bone. The layer interface appears to be bonded mainly by mechanical means through sawtooth notches, rather than by the chemical adhesives used in man-made laminated planar-interface composites. The notched regions of the ganoine and bone materials overlap and are embedded at various depths within each layer to form periodic, repeating bonded connections.

The indentation measurements taken at the nano level showed that the elastic moduli have property gradients through the interfacial transition zone. Noticeably, the ganoine layer has elastic moduli ranging from 98 GPa to 67 GPa, while the bone layer's elastic moduli ranged from 20 GPa to 13 GPa. The research findings indicate that the sawtooth connections likely provide enhanced shear resistance at the interface and may help inhibit debonding. Additionally, the notched interlocking provides a less discrete (graded) interface, which seems to promote durability and delamination resistance.


Materials: Genetics to Structures: Nanomaterials for Energy

2016;():V014T11A039. doi:10.1115/IMECE2016-66055.

Polymer nanofoams are classified as foams having pore sizes less than one hundred nanometers. Several techniques to fabricate nanoporous polymer morphologies have been developed; however, the majority of them result in foams with a thin layer of unfoamed polymer on the surface. This unfoamed region, called the skin layer, is typically around 10 microns thick, which restricts the use of these foams for many applications such as filtration. Skinless open-celled nanofoams are a unique category of foams that have pore sizes on the order of tens to hundreds of nanometers and have no unfoamed solid polymer layer on the surface after processing. Potential applications for skinless nanofoams include filtration, catalysts, dielectrics, and biological scaffolds. It has also been suggested that these nanofoams will have improved thermal and electrical properties due to the open-celled porous morphology and the absence of a skin layer. In this study, bulk skinless polyetherimide (PEI) nanofoams were fabricated using a novel two-stage technique consisting of combined solid-state and laser foaming. Initially, PEI samples embedded in a sacrificial polymer layer were saturated with supercritical carbon dioxide (CO2) as the blowing agent. The sacrificial layer ensures a uniform gas concentration gradient at the surface during the subsequent desorption step. A hot press and a heated bath technique were used independently to foam the samples. The solid-state foaming parameters (foaming time and number of sacrificial layers) were varied to study their effect on the skin thickness of the nanofoams. The thickness of the sacrificial polymer layer had a direct effect on the skin thickness of the PEI nanofoams after foaming; the skin thickness was reduced by nearly 60% on average due to the sacrificial layer. The solid-state foaming process was followed by laser foaming using a CO2 laser with a wavelength of 10.6 μm to generate pores in the thin skin layer. The laser power, travel speed, and working distance had a direct effect on the laser pore formation process. It was found that a laser power of 0.09 W, a laser head travel speed of 50 mm/s, and a working distance of 4.5 cm were the most suitable conditions for forming pores in a skin layer approximately 10 microns thick. The cross sections were observed using a scanning electron microscope to study the cell morphologies, pore size, and skin layer. This technique was found to produce skinless nanofoams with an average porosity of 78% and a smallest pore size of 250 nm. To summarize, optimal parameters and processing conditions for bulk production of skinless PEI nanofoams are presented in this study.


Materials: Genetics to Structures: Processing, Structure and Property of Polymers and Composites

2016;():V014T11A040. doi:10.1115/IMECE2016-65147.

The constitutive response of glassy polymers is characterized by complex thermo-mechanical behavior such as strain rate and temperature sensitive yielding, softening at small strains, and re-hardening at large strains. These complex behaviors trigger strain localization during the deformation of polymers. Since localization can be induced by both structural and material instabilities, careful analysis is needed to investigate the localization behavior in polymer specimen testing. Localization such as neck formation and propagation, which typically occurs in the tensile and compressive testing of polymers and plastics, makes it difficult for experimentalists to extract the intrinsic constitutive response. This problem is exacerbated when localization occurs with shear bands. In this study, a macromolecular constitutive model for polymers showing small-strain softening and large-strain directional hardening is employed to investigate the effect of localization in tension on the constitutive identification process. Considering the complex interplay between structural and constitutive instabilities, a method based on direct, real-time measurement of the area reduction at the neck section is proposed to extract the intrinsic constitutive response of polymer materials.
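The area-reduction idea can be sketched as follows: once the current neck cross-section A is measured directly, true stress and true strain follow immediately. This is a minimal illustration assuming plastic incompressibility, with hypothetical load/area values, not the paper's implementation:

```python
import math

def intrinsic_response(forces, neck_areas, area0):
    """True stress and true strain from a direct, real-time
    measurement of the neck cross-section A:
    true stress = F / A,  true strain = ln(A0 / A)
    (the strain form assumes plastic incompressibility)."""
    stress = [f / a for f, a in zip(forces, neck_areas)]
    strain = [math.log(area0 / a) for a in neck_areas]
    return stress, strain

# Hypothetical load/area history: a 10 mm^2 section necking to 6 mm^2.
F = [0.0, 400.0, 500.0, 540.0]   # N
A = [10e-6, 9e-6, 7.5e-6, 6e-6]  # m^2
stress, strain = intrinsic_response(F, A, 10e-6)
```

Because the measurement is taken at the neck itself, the extracted curve bypasses the structural (necking) instability that contaminates nominal stress-strain data.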

Topics: Polymers
2016;():V014T11A041. doi:10.1115/IMECE2016-66263.

Finite element simulation of composite materials remains challenging, as material anisotropy makes it difficult to accurately identify shear properties for modeling. In this study, ±45° tensile tests, Iosipescu shear tests, rail shear tests, and Arcan shear tests are conducted to obtain the engineering shear stress-strain curve of a woven fiber-reinforced polymer. The digital image correlation method is adopted to obtain the strain field of the specimens. The results indicate that the Iosipescu shear test introduces a strain field close to a pure shear state, while the other three test types introduce relatively large tensile or compressive strains. Shear properties obtained from the Iosipescu tests are used to calibrate an extensively used composite material model, the Matzenmiller-Lubliner-Taylor (MLT) model. The calibrated MLT model is then verified by simulating Arcan tests with different loading angles. The simulations indicate that the MLT model gives reliable predictions for Arcan tests at smaller loading angles, while it overestimates the force-displacement response at larger loading angles.

2016;():V014T11A042. doi:10.1115/IMECE2016-66280.

The mechanical properties of glass fiber (GF) reinforced polymer composites can be improved by increasing the fiber content, but only up to a limit. Carbon fiber (CF) reinforced polymer composites, by contrast, are superior to GF composites in tensile and bending properties at a lower fiber content; however, CF is more expensive than GF. In this study, acrylonitrile butadiene styrene (ABS) was reinforced with single and hybrid reinforcements of glass fibers and carbon fibers. Composites consisting of GF/ABS, CF/ABS, and GF/CF/ABS were fabricated by direct fiber feeding injection molding (DFFIM), in which the reinforcing fiber is fed directly at the vent hole of the barrel. The effects of fiber tex, fiber count, and processing parameters on the properties of the composites were investigated. Tensile, bending, and Izod impact testing was conducted to compare the mechanical properties of the GF/ABS, CF/ABS, and hybrid GF/CF/ABS composites. The morphology of the composites was observed by scanning electron microscopy. In addition, the cost advantage of each composite was weighed against its mechanical properties. The results show that the addition of carbon fiber improved the tensile, bending, and impact properties of the hybrid composites. SEM photographs indicated that carbon fiber tended to agglomerate during the DFFIM process. The hybrid GF/CF/ABS composites presented an improvement in tensile and bending properties equivalent to that of the CF/ABS composites. It can be noted that a low CF content was sufficient to enhance the mechanical performance of the hybrid GF/CF/ABS composites. Therefore, the hybrid composites can be manufactured at a lower cost while achieving mechanical properties similar to those of the CF/ABS composites.

2016;():V014T11A043. doi:10.1115/IMECE2016-66573.

Owing to the many advantages of using natural resources, natural fibers have recently been used to provide added strength and ductility to reinforced polymer composites. This is mainly due to their availability, renewability, low density, and cost effectiveness, as well as their satisfactory mechanical properties. This paper presents the fabrication and experimental characterization of the mechanical properties of a class of bio-composites in which polypropylene (PP) and low density polyethylene (LDPE) are reinforced with date palm frond fibers. Bio-composite sheets were fabricated with controlled processing parameters based on a small factorial design in order to develop a statistical model for the response using a fractional design of experiment. In the Design of Experiments (DoE) procedure, three factors were identified, each at three levels: fiber volume fraction (20, 40, and 60 vol. %), alkali treatment (10, 15, and 20 wt. %), and treatment time (2, 4, and 6 h). In this study, an NaOH alkali solution is used to modify the fiber properties and improve surface characteristics. The tensile and flexural strengths of specimens prepared according to ASTM standards were measured by direct physical testing. Response Surface Methodology (RSM) is also adopted to analyze the interactions among the input factors and their effect on the overall mechanical properties of the fabricated composite. Results revealed that fiber length and the percentage of NaOH treatment have a significant impact on the composite properties. The date palm frond reinforced polypropylene composites could serve as a potential material in a broad range of industrial applications in which high strength is not a main design requirement.
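The three-factor, three-level design space described can be enumerated directly; a fractional design then selects a balanced subset of these runs. A minimal sketch using the levels given in the text:

```python
from itertools import product

# Full 3^3 factorial over the three factors named in the text;
# a fractional design would run only a balanced subset of the 27 runs.
fiber_vf   = (20, 40, 60)   # fiber volume fraction, vol. %
naoh_conc  = (10, 15, 20)   # alkali (NaOH) treatment, wt. %
treat_time = (2, 4, 6)      # treatment time, hours

runs = list(product(fiber_vf, naoh_conc, treat_time))
print(len(runs), runs[0], runs[-1])
```

Each tuple is one fabrication run; the measured tensile and flexural strengths at these settings feed the RSM model.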

2016;():V014T11A044. doi:10.1115/IMECE2016-67901.

The impact performance of several porous polymeric and metallic foam core sandwich composite systems was evaluated for suitability in protecting vehicle occupants in the event of a low velocity impact. The material systems evaluated were glass/phenolic face sheets reinforced with a Nomex honeycomb core, cross-ply carbon-fiber face sheets reinforced with aluminum honeycomb cores of different cell sizes, and aluminum metallic foam cores of different cell sizes. Lastly, an exploratory study using an extrusion-type 3D-printed polystyrene foam structure with customized pore size and pore distribution was undertaken. The peak load and energy dissipation of the composite materials were experimentally measured. An instrumented large hemispherical impactor (48 mm diameter) applied loads at a constant rate on the order of 0.1 m/s to 50 mm × 50 mm coupon-sized composite specimens of varying thickness. The impact damage to the materials was also visually examined. The current material system used for some interior components (glass/phenolic face sheets reinforced with a Nomex honeycomb core) reaches a maximum load within a small time duration and displacement, causing catastrophic local crushing and delamination events. It is expected that with the alternative material systems of varying pore size distribution, the failure can be spread out so that the energy dissipation is accomplished at a lower peak force, improving occupant safety.
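The energy-dissipation quantity compared across these systems is simply the area under the measured force-displacement curve; a minimal sketch with a hypothetical trace (the numbers are illustrative, not the paper's data):

```python
def dissipated_energy(force, disp):
    """Energy absorbed in the impact = area under the measured
    force-displacement curve, via the trapezoidal rule."""
    return sum((force[i] + force[i + 1]) / 2.0 * (disp[i + 1] - disp[i])
               for i in range(len(force) - 1))

# Hypothetical trace: triangular force pulse peaking at 2 kN over 8 mm.
F = [0.0, 1000.0, 2000.0, 1000.0, 0.0]   # N
x = [0.000, 0.002, 0.004, 0.006, 0.008]  # m
print(dissipated_energy(F, x))  # ~8 J for this pulse
```

A "better" system in the sense of the text dissipates the same area with a lower peak force, i.e., a flatter, wider pulse.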


Safety Engineering and Risk Analysis: Reliability Methods

2016;():V014T14A001. doi:10.1115/IMECE2016-65269.

Fatigue damage is initiated at "defects" on the surfaces of and/or inside a component and is driven by cyclic fatigue loading. These "defects" are randomly scattered in components, and one of them will be randomly "activated" and eventually develop into the initial crack that causes the final fatigue failure. Therefore, the fatigue strength is inherently a random variable and should be treated with probabilistic models such as typical P-S-N curves. The fatigue cyclic loading can be presented or described in any form, but the fatigue loading spectrum can generally be grouped into and described by five models: (1) a single constant cyclic stress (loading) with a given cyclic number, (2) a single constant cyclic stress with a distributed cyclic number, (3) a distributed cyclic stress (loading) at a given fatigue life (cyclic number), (4) multiple constant cyclic stress levels with given cyclic numbers, and (5) multiple constant cyclic stress levels with distributed cyclic numbers. Approaches for determining the reliability of components under fatigue loading spectra of models 1–4 are available in the literature, but few articles or books have addressed an approach for determining reliability under the fatigue loading spectrum of model 5. This paper proposes two approaches for addressing this unsolved issue. Two examples are presented to implement the proposed approaches with detailed procedures.
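The model-5 situation (several stress levels, each applied for a random cycle count) can be sketched with a Monte Carlo estimate built on Miner's damage rule; the distributions below are illustrative assumptions and not the paper's two proposed approaches:

```python
import random

def reliability_model5(levels, trials=20000, seed=1):
    """Monte Carlo reliability under model 5: several constant stress
    levels, each applied for a *random* number of cycles.  Each level
    is (mu_log10Nf, sigma_log10Nf, mu_n, sigma_n): a lognormal life at
    that stress, and a normal applied-cycle count.  Failure when the
    Miner damage sum D = sum(n_i / Nf_i) reaches 1.  Distributions are
    assumed for illustration only."""
    rng = random.Random(seed)
    survive = 0
    for _ in range(trials):
        damage = 0.0
        for mu_log, sig_log, mu_n, sig_n in levels:
            nf = 10.0 ** rng.gauss(mu_log, sig_log)  # random life at this stress
            n = max(0.0, rng.gauss(mu_n, sig_n))     # random applied cycles
            damage += n / nf
        survive += damage < 1.0
    return survive / trials

# Two hypothetical levels: median lives of 1e5 and 1e4 cycles.
levels = [(5.0, 0.1, 30000, 5000), (4.0, 0.1, 3000, 500)]
print(reliability_model5(levels))
```

The randomness of both the life Nf (from the P-S-N curve) and the applied cycle count n is what distinguishes model 5 from model 4.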

Topics: Reliability, Stress
2016;():V014T14A002. doi:10.1115/IMECE2016-65327.

Based on recent studies, the static fault tree technique is limited when applied to the analysis of time-dependent complex systems. The Dynamic Fault Tree (DFT) method was derived to address this limitation of the classic fault tree method. In this paper, the reliability of the attitude control system of a satellite is evaluated using dynamic modeling techniques. The complexity of current dynamic fault tree methods makes them impractical for application to real systems. As a first step, the solution of the logical gates is simplified based on Monte Carlo simulation, and a flowchart is developed for each of the dynamic gates. Simulation routines were then developed to implement the suggested procedure for the attitude control system of LEO satellites. A new calculation algorithm is introduced for solving the C-SPARE gate, which reduces the calculation time compared with the Monte Carlo method. A LEO satellite attitude control system is used as the case study to demonstrate the proposed methodology.
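Solving a dynamic gate by Monte Carlo simulation can be illustrated with the simplest sequence-dependent gate, the Priority-AND (PAND), checked against its closed form. This sketch assumes independent exponential failure times; the paper's C-SPARE algorithm is more involved:

```python
import math
import random

def pand_exact(lam_a, lam_b, t):
    """Closed-form unreliability of a PAND gate with independent
    exponential failure times: the gate fires only if A fails
    before B and B fails within the mission time t."""
    s = lam_a + lam_b
    return (1.0 - math.exp(-lam_b * t)) - (lam_b / s) * (1.0 - math.exp(-s * t))

def pand_monte_carlo(lam_a, lam_b, t, trials=50000, seed=7):
    """Monte Carlo estimate of the same gate: sample both failure
    times and count the orderings that trigger the gate."""
    rng = random.Random(seed)
    fail = 0
    for _ in range(trials):
        ta = rng.expovariate(lam_a)
        tb = rng.expovariate(lam_b)
        if ta < tb <= t:
            fail += 1
    return fail / trials

print(pand_exact(1.0, 1.0, 1.0), pand_monte_carlo(1.0, 1.0, 1.0))
```

Where a closed form exists (as here), it can replace the sampling loop outright, which is the kind of speed-up the paper pursues for the C-SPARE gate.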

2016;():V014T14A003. doi:10.1115/IMECE2016-65380.

This paper proposes a new GO method for repairable systems with multiple unstable operation states. First, the multi-state signal flow and the multi-state GO operator are defined, and a formula for calculating the state probability of a unit with multiple unstable operation states is deduced based on Markov theory. Furthermore, a new function GO operator, named Type 19, is created to describe the unit's stabilizing property, and its GO operation formulas for reliability analysis are deduced. On this basis, the reliability analysis process for multi-state repairable systems based on the new GO method is formulated. The new GO method is then applied to the reliability analysis of the hydraulic oil supply system of a heavy vehicle. To verify the feasibility, advantages, and reasonableness of the new GO method, its analysis result is compared with those of FTA and the existing GO method for two-state repairable systems. In all, this paper not only improves the theory of the GO method and widens its application, but also provides a new approach for the reliability analysis of multi-state repairable systems.
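The Markov state-probability calculation that underlies such GO operators can be illustrated with its simplest case, a two-state repairable unit with assumed constant failure rate `lam` and repair rate `mu` (a sketch of the pattern, not the paper's multi-state formula):

```python
import math

def availability_exact(lam, mu, t):
    """Point availability A(t) of a two-state repairable unit,
    solved from the Markov state equations with P_up(0) = 1:
    A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)*t)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

def availability_euler(lam, mu, t, steps=20000):
    """Same quantity by integrating dP_up/dt = -lam*P_up + mu*(1 - P_up),
    the pattern that a multi-state Markov solve generalizes."""
    p_up, dt = 1.0, t / steps
    for _ in range(steps):
        p_up += (-lam * p_up + mu * (1.0 - p_up)) * dt
    return p_up
```

A unit with multiple unstable operation states simply has more state equations of the same form, one per state.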

2016;():V014T14A004. doi:10.1115/IMECE2016-65383.

This paper proposes a new reliability optimization allocation method for multifunction systems based on the GO methodology. First, two constraint functions are proposed: a unit reliability constraint function and a system reliability constraint function. The unit reliability constraint function consists of the allocated reliability index of each unit and the allowed range of that index. The system reliability constraint function consists of the target reliability index of the system and the predicted system reliability index obtained by applying the GO method to the allocated unit reliability indexes. Then, the objective function of the optimization allocation problem is established to describe the minimization of system cost, taking into consideration the costs of redesigned units and of selected unit versions. On this basis, the mathematical model of the reliability optimization allocation problem for complex multifunction systems is established, and an improved genetic algorithm is presented to solve it. Furthermore, the process of the new reliability optimization allocation method for complex multifunction systems is formulated. Finally, the new method is applied to the reliability optimization allocation of a power-shift steering transmission, with the goal of minimizing system cost. The results show that the system costs for different operation times converge to a relatively stable value, and that the allocated unit reliability indexes satisfy engineering requirements. In all, this new optimization allocation method can obtain reasonable allocation results quickly and effectively, and it overcomes the disadvantages of existing reliability optimization allocation methods for complex multifunction systems. In addition, the analysis shows that reliability optimization allocation based on the GO method provides a new approach for the reliability optimization allocation of multifunction systems.
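A toy version of the allocation problem can make the structure concrete: pick the cheapest discrete version of each unit so that a system reliability target is met. Exhaustive search over a series system stands in here for the paper's improved genetic algorithm and GO-based system model; all numbers are hypothetical:

```python
from itertools import product

def allocate(versions, r_target):
    """Cheapest choice of one version per unit such that the
    series-system reliability (product of unit reliabilities)
    meets r_target.  versions: per unit, a list of
    (reliability, cost) options.  Returns (combo, cost, rel) or None."""
    best = None
    for combo in product(*versions):
        rel, cost = 1.0, 0.0
        for r, c in combo:
            rel *= r
            cost += c
        if rel >= r_target and (best is None or cost < best[1]):
            best = (combo, cost, rel)
    return best

# Hypothetical (reliability, cost) options for two units in series.
units = [
    [(0.90, 1.0), (0.95, 2.0), (0.99, 4.0)],
    [(0.92, 1.5), (0.97, 3.0)],
]
print(allocate(units, 0.90))
```

A genetic algorithm replaces the exhaustive loop when the design space (many units, many versions, GO-model evaluation) is too large to enumerate.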

2016;():V014T14A005. doi:10.1115/IMECE2016-65384.

This paper proposes a new composite allocation method composed of an improved Fuzzy-AHP allocation method, an old-system data correction allocation method, and an optimization allocation method. The objective of the new method is to minimize system cost and to allocate the reliability index and maintainability index of each unit with the goal of meeting system availability while balancing the two indexes. To solve the optimization problem, and in order to prevent local optima, improve convergence efficiency, and obtain a satisfactory optimal solution, an improved GA method is put forward. First, the reliability index allocation method is proposed by combining the optimization allocation method (with the objective of minimum cost), the improved Fuzzy-AHP method (which accounts for experts' expectations), and the old-system data correction allocation method. Then, based on the constraints of the unit reliability allocation index and the system availability index, the maintainability allocation method is proposed with the aim of minimizing maintenance cost. The process of this new composite allocation method is then formulated. Finally, the system cost, reliability index, and maintainability index of an integrated transmission device of an armored vehicle are allocated by the new method. The results show that the allocation produced by this new composite allocation method is reasonable and applicable in engineering. In all, the new composite allocation method not only synthetically considers experts' expectations for the new system, system cost, and system reliability baseline information, but also links the reliability and maintainability indexes to the goal of system availability. In addition, this paper provides a new approach for allocating the reliability and maintainability indexes of complex repairable systems in the early stages of product design.

2016;():V014T14A006. doi:10.1115/IMECE2016-65387.

This paper presents a reliability prediction approach based on non-probabilistic interval analysis. First, the interval reliability prediction model is presented in terms of the interval stress-strength interference model and interval reliability criteria. Then, the process of the reliability prediction approach based on non-probabilistic interval analysis is formulated. The reliabilities of a planetary gear drive mechanism at different confidence coefficients are obtained using the non-probabilistic interval analysis. Finally, the result of the non-probabilistic interval analysis is compared with the results of a non-probabilistic convex model and a probabilistic method in order to illustrate the characteristics, advantages, and engineering practicality of the non-probabilistic interval analysis. In addition, the consistency between the reliability obtained with the probabilistic method and that obtained with the non-probabilistic method is preliminarily discussed. In all, this paper not only takes a planetary gear drive mechanism as a case study to provide guidance for applying the reliability prediction approach based on non-probabilistic interval analysis to mechanisms, but also points out directions for future research on the approach.
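One common form of the interval stress-strength interference measure can be sketched as follows; this particular index definition is illustrative, and the paper's criteria may differ:

```python
def interval_reliability_index(strength, stress):
    """Non-probabilistic interval reliability index
    eta = (Sc - sc) / (Sr + sr), built from the midpoints (Sc, sc)
    and radii (Sr, sr) of the strength and stress intervals.
    eta > 1: the intervals cannot interfere (strength always exceeds
    stress); eta <= -1: certain failure; in between, reliability is
    judged against a chosen confidence coefficient."""
    s_lo, s_hi = strength
    q_lo, q_hi = stress
    sc, sr = (s_lo + s_hi) / 2.0, (s_hi - s_lo) / 2.0
    qc, qr = (q_lo + q_hi) / 2.0, (q_hi - q_lo) / 2.0
    return (sc - qc) / (sr + qr)

# Hypothetical gear-tooth strength/stress intervals (MPa).
print(interval_reliability_index((500.0, 600.0), (300.0, 400.0)))  # 2.0: no interference
print(interval_reliability_index((450.0, 550.0), (400.0, 500.0)))  # 0.5: intervals overlap
```

Only the bounds of each quantity are needed, which is exactly the appeal of the non-probabilistic approach when full distributions are unavailable.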

Topics: Reliability
2016;():V014T14A007. doi:10.1115/IMECE2016-66404.

The distributions and parameters of the random variables are an important input to conventional reliability analysis methods, such as the Monte Carlo method, and must be known before these methods can be used; however, they are often hard or impossible to obtain. The model-free sampling technique provides a way to obtain the distribution of the random variables, but the accuracy of the extended sample it generates is insufficient. This paper presents an improved model-free sampling technique, based on bootstrap methods, to increase the accuracy of the extended sample and decrease the number of iterations. In this improved technique, the selection of initial sample points and the generation of the iterative sample are improved. Meanwhile, a center-distance criterion, which considers the local characteristics of the extended sample, is added to the dissimilarity-measure generating criterion. The effectiveness of the improved method is illustrated through numerical examples.
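The bootstrap resampling idea the improvement builds on can be sketched as a percentile confidence interval: resample the observed data with replacement and study the spread of the recomputed statistic (an illustration of the principle, not the paper's extended-sample algorithm):

```python
import random

def bootstrap_ci(sample, stat, n_boot=5000, alpha=0.05, seed=3):
    """Percentile bootstrap confidence interval for an arbitrary
    statistic: resample with replacement, recompute the statistic,
    and take empirical quantiles of the replicates."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(stat([rng.choice(sample) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1.0 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical small sample from an unknown distribution.
data = [4.1, 5.2, 4.8, 5.0, 4.4, 5.6, 4.9, 5.1, 4.7, 5.3]
print(bootstrap_ci(data, mean))
```

The appeal for model-free sampling is the same as here: no distributional form is assumed, only the observed data.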

2016;():V014T14A008. doi:10.1115/IMECE2016-67206.

Mission assurance requires due-diligence reliability analysis for space systems, taking into account limited accessibility, high uncertainty in life data, and the high cost of failure. Methods based on the physics of failure are promising approaches for evaluating the durability of these systems. This study addresses reliability analysis for space structures, with a focus on fatigue failure. Deterministic fatigue simulation is conducted for a space system (a satellite in low Earth orbit, made of aluminum 2024-T3) using models with constant- and variable-amplitude loading. The Walker and Forman models are first used for life prediction and benchmarked against experimental results. Under variable-amplitude loading, the fatigue crack growth rate is retarded following an overload because of the plastic zone at the crack tip. Deterministic crack growth was simulated numerically in MATLAB, verified against the commercial AFGROW software, and found to match experimental data well. In the stochastic fatigue crack growth analysis, uncertainty is quantified using Monte Carlo simulation. The universal stochastic crack growth model proposed by Yang and Manning, which provides a probabilistic treatment of the power-law and second-order polynomial growth models, is used for the reliability analysis. These models are evaluated, and three models are proposed: (I) a rational model, (II) an exponential model, and (III) a global model. The uncertainty analysis shows that the uncertainty range widens as the crack length increases. Under constant-amplitude loading with the same stress intensity factor range but different stress ratios, the uncertainty range widens with increasing stress ratio. In the reliability analysis, the exponential model demands fewer computational resources but has lower accuracy.
The fractional model proposed in this research is based on a modification of the Forman model; however, these models do not account for the geometry factor. The global model, also proposed in this research, is capable of considering this aspect. In the multiplicative stochastic factor approach (the Yang and Manning method), the accuracy of the approximation is paramount: improving the accuracy of this relation enhances the accuracy of the results. To increase the efficiency of the method, the approximation must therefore be improved, either by correcting prior models or by developing new, more accurate ones.
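The widening of the uncertainty band with crack length can be illustrated with a minimal Monte Carlo sketch that integrates a Paris-type crack growth law with a lognormally scattered coefficient. All values (median coefficient, scatter, stress range, geometry) are hypothetical placeholders, not the Yang-Manning model or the paper's data:

```python
import math
import random
import statistics

def crack_growth_trial(rng, c_median, m, dstress, a0, cycles, step=100):
    """One trial: integrate the Paris law da/dN = C*(dK)^m with
    dK = dstress*sqrt(pi*a). C is sampled lognormally to represent
    material scatter (hypothetical spread)."""
    C = c_median * math.exp(rng.gauss(0.0, 0.3))
    a = a0
    history = {}
    for n in range(0, cycles, step):
        dK = dstress * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        a += C * dK**m * step                  # growth over one cycle block
        history[n + step] = a
    return history

rng = random.Random(42)
trials = [crack_growth_trial(rng, c_median=1e-11, m=3.0, dstress=100.0,
                             a0=1e-3, cycles=20000) for _ in range(200)]
spread_early = statistics.stdev(t[5000] for t in trials)
spread_late = statistics.stdev(t[20000] for t in trials)
# the scatter band across trials widens as the crack grows
```

Because each trial's crack extension scales with its sampled coefficient, the trial-to-trial spread grows with cycle count, reproducing the widening uncertainty range reported above.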


Safety Engineering and Risk Analysis: Risk Assessment and Management

2016;():V014T14A009. doi:10.1115/IMECE2016-65443.

Following several recent serious rail accidents in North America, regulatory changes and increased public awareness are driving the need to address gaps in rail safety. Industry and regulators have numerous safety initiatives under way; however, prescriptive standards combined with a performance-based approach could be a powerful tool for understanding and mitigating risk cost-effectively.

This paper reviews the principles of safety risk management that can be applied to the safe transportation of flammable hydrocarbons by rail. The FRA's proposed Risk Reduction Program rulemaking and its potential impact on the industry are also addressed.

The approach proposed in this paper focuses on existing and newly proposed safeguards/barriers and how they can be monitored and managed. The paper aims to set a path forward for structured, risk-based thinking in managing rail safety. The first part of the paper explains the barrier-based risk assessment approach using the Lac-Mégantic accident as an example. A bow-tie diagram is developed to deconstruct the incident timeline and to capture the safeguards that existed at the time and their working status. The diagram cross-references Transport Canada's investigation findings.

The second part of the paper evaluates the new mitigation measures proposed in the FRA HM-251 rulemaking ("Enhanced Tank Car Standards and Operational Controls for High-Hazard Flammable Trains" final rule) as potential safeguards, along with their impact on the overall transportation risk. A baseline risk is first established for transporting crude oil by rail, assuming certain common safeguards are in place. A simple quantitative risk assessment methodology covering likelihood and consequence is then used to estimate the base-case risk. Risk mitigation and the effect of additional measures, such as changes in rail tank car design, oil conditioning, enhanced braking to mitigate damage in derailments, speed limit changes, positive train control, and train manning during loading, are then assessed.
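The likelihood-times-consequence structure of such a screening QRA can be sketched as below; the derailment frequency, loss value, and risk-reduction factors are purely illustrative assumptions, not figures from the rulemaking or the paper:

```python
def annual_risk(frequency_per_year, consequence_usd):
    """Base-case risk expressed as likelihood times consequence."""
    return frequency_per_year * consequence_usd

# hypothetical base case: release-causing derailment frequency and loss severity
baseline = annual_risk(1e-3, 50e6)

# hypothetical risk-reduction factors for individual HM-251 measures
# (a factor of 0.6 means the measure removes 40% of the residual risk)
mitigations = {
    "enhanced tank car design": 0.6,
    "speed limit changes": 0.8,
    "enhanced (ECP) braking": 0.7,
}
mitigated = baseline
for factor in mitigations.values():
    mitigated *= factor
```

Multiplying the factors assumes the measures act independently on residual risk, which is a simplification a full QRA would need to justify.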

2016;():V014T14A010. doi:10.1115/IMECE2016-66791.

A critical component of Systems Engineering (SE) is conducting a thorough risk analysis. This paper introduces a novel hybrid approach to a Reliability-Risk modeling technique able to rank conceptual designs as a function of reliability. A traditional SE approach is used to identify all success modes associated with the development of a complex system. The major phases of a Systems Engineering model (Advanced Development; Design; Integration & Evaluation; Production; and Operation & Support) define the Holographic Reliability Design Space.

Requirements of the system under development are captured through the Integration Definition for Function Modeling (IDEF0) technique. The IDEF0 method is defined in Federal Information Processing Standards Publication 183 (FIPS PUB 183) [1]. The resulting IDEF0 model allows the function of each component to be identified, the proper reliability model to be chosen, and a reliability-based analysis to be completed.

Once the reliability has been calculated for each criterion, a Multi-Criteria Decision System (MCDS) is required to rank the conceptual designs in terms of reliability. An MCDS was developed to analyze the conflicting objectives inherent in the design and integration of any complex system.

The model developed herein was used to analyze five packaging conceptual designs being considered for the military supply chain. After the analysis, the new and innovative packaging designs were ranked based on the reliabilities associated with design, test, integration, manufacturing, and incorporation into the existing supply chain. The rankings were then presented to the ultimate decision makers for final approval.
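A minimal sketch of ranking candidate designs by series reliability across the SE phases might look like this; the per-phase reliabilities and design names are hypothetical, and the paper's actual MCDS aggregates multiple conflicting criteria rather than a single product:

```python
def system_reliability(phase_reliabilities):
    """Series model: the design succeeds only if every phase succeeds."""
    r = 1.0
    for x in phase_reliabilities:
        r *= x
    return r

# hypothetical per-phase reliabilities (design, test, integration,
# manufacturing, supply-chain incorporation) for three candidate designs
designs = {
    "A": [0.99, 0.95, 0.97, 0.96, 0.98],
    "B": [0.98, 0.97, 0.95, 0.99, 0.97],
    "C": [0.97, 0.96, 0.96, 0.97, 0.96],
}
ranking = sorted(designs, key=lambda d: system_reliability(designs[d]),
                 reverse=True)
```

The series assumption penalizes any single weak phase, which is why design C, weakest across the board, ranks last despite having no very low value.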

2016;():V014T14A011. doi:10.1115/IMECE2016-67161.

Gas stations and distribution facilities located in city districts can lead to accidents that injure people living in those districts. Exposure to released toxic materials is another hazard in gas pipeline accidents. Gas pipelines therefore demand special attention in risk assessment.

In this study, gas pipeline rupture is evaluated for a set of accident scenarios. The fire and concentration ranges are analyzed for each scenario under cold and hot weather conditions. Furthermore, the flame length is simulated from the release of flammable material using the governing equations. PHAST software is used to model the dispersion of the released material in the environment, taking atmospheric conditions into account. A sensitivity analysis is performed by increasing each important parameter of the problem by 10%. A gas station is used as the case study, and the results are verified against several published studies.
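A one-at-a-time sensitivity check of the kind described (perturbing an important parameter by 10% and recording the response) can be sketched as follows; the power-law flame-length correlation and its inputs are illustrative assumptions, not PHAST's internal models:

```python
def flame_length_m(mass_rate_kg_s, heat_of_combustion_j_kg):
    """Hypothetical power-law jet-flame correlation, used only as a
    stand-in consequence model for the sensitivity sketch."""
    return 0.2 * (mass_rate_kg_s * heat_of_combustion_j_kg) ** 0.4

base_inputs = {"mass_rate_kg_s": 5.0, "heat_of_combustion_j_kg": 45e6}
base = flame_length_m(**base_inputs)

# one-at-a-time sensitivity: bump each input by 10%, record the % change
sensitivity_pct = {}
for name, value in base_inputs.items():
    bumped = dict(base_inputs)
    bumped[name] = value * 1.10
    sensitivity_pct[name] = 100.0 * (flame_length_m(**bumped) - base) / base
```

For a power-law model, a 10% bump in either input produces the same fractional response (here about 3.9%, i.e. 1.1^0.4 - 1), which is a useful sanity check on the procedure.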


Safety Engineering and Risk Analysis: Safety and Risk in Transportation

2016;():V014T14A012. doi:10.1115/IMECE2016-65013.

Modern methods for analyzing motor vehicle deformation rely upon a force-deflection analysis to determine the deformation work energy. Current methods provide acceptable accuracy when calculating the velocity change of vehicles involved in a collision but require significant modification to accommodate oblique and low-velocity collision events. The existing algorithms require vehicle-specific structural stiffness coefficients for each colliding vehicle, determined from full-scale impact testing. The current database of vehicle structural stiffness values is generated mainly through government safety standard compliance testing and is quite extensive for frontal impacts involving passenger cars and many light trucks and SUVs. However, the database lacks the specific crash testing necessary for deformation analysis of the rear and side structures of many vehicles. Additionally, there remains a dearth of structural stiffness coefficients for heavy commercial vehicles, buses, recreational vehicles, heavy equipment, and motorcycles, rendering the current force-deflection approach inapplicable to many impacts involving such vehicles.

The research presented, known as the Generalized Deformation and Total Velocity Change System of Equations, or G-DaTAΔV™, develops an accurate, reliable and broadly-applicable system of equations requiring knowledge of the structural stiffness coefficients for only one vehicle, rather than both vehicles involved in a collision event, regardless of the impacted surfaces of the vehicle. The developed methodology is inclusive of non-passenger vehicles such as commercial vehicles and even motorcycles, and it also accommodates impacts with objects and surfaces not supported by the current structural stiffness coefficient database. The G-DaTAΔV™ system of equations incorporates the linear and rotational collision contributions resulting from conservative forces acting during the impact event. The contributions of the G-DaTAΔV™ system of equations are as follows:

1. Consideration of non-conservative contributions from tire-ground forces and inter-vehicular frictional energy dissipation commonly present during non-central collision configurations.

2. Ability to solve for collision energy of a two-vehicle system using a single structural stiffness for only one of the colliding vehicles using work/energy principles.

3. Determination of the total velocity change for a vehicle resulting from a given impact event, which results from conservative and non-conservative force contributions.

4. The ability to predict the time period to reach maximum force application during an impact event, allowing for the determination of the peak acceleration levels acting on each vehicle during an impact.

The results of applying the G-DaTAΔV™ to full-scale impact tests conducted as part of the RICSAC collision research are presented. Additionally, analysis of real-world collision data obtained through the National Automotive Sampling System demonstrates a close correlation with the collision values recorded by vehicle event data recorders (EDRs) in the supplemental restraint system airbag control modules (ACMs). Compared to other analysis methods currently in use, the G-DaTAΔV™ determines the total velocity change of a vehicle due to a collision event with a higher level of both accuracy and precision.

The generalized approach of the G-DaTAΔV™ applies to collisions ranging from the simple collinear impact configuration to the most rigorous conditions of offset and oblique impacts. The comprehensive formulation provides greater utility to the researcher or forensic analyst in determining the contributions of the vehicle-roadway-driver environment as it relates to real-world collision events and their effects on vehicle and highway safety.
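For contrast, the conventional single-vehicle crush-energy calculation that such force-deflection methods build on can be sketched as below. This is a simplified CRASH3-style profile integration with hypothetical stiffness coefficients; the G-DaTAΔV™ formulation itself is more general and is not reproduced here:

```python
import math

def crush_energy_j(A, B, crush_depths_m, width_m):
    """Simplified CRASH3-style work of deformation from a crush profile,
    using one vehicle's stiffness coefficients A (N/m) and B (N/m^2).
    Each segment uses the average of its two crush measurements."""
    n_segments = len(crush_depths_m) - 1
    seg_w = width_m / n_segments
    energy = 0.0
    for c1, c2 in zip(crush_depths_m, crush_depths_m[1:]):
        c_avg = 0.5 * (c1 + c2)
        energy += seg_w * (A * c_avg + 0.5 * B * c_avg**2 + A**2 / (2.0 * B))
    return energy

# hypothetical stiffness coefficients and a four-point frontal crush profile
E = crush_energy_j(A=30000.0, B=1.5e6,
                   crush_depths_m=[0.3, 0.4, 0.4, 0.3], width_m=1.5)
delta_v = math.sqrt(2.0 * E / 1500.0)  # single-vehicle work-energy
                                       # simplification, 1500 kg vehicle
```

The single-vehicle Δv conversion above ignores restitution, momentum exchange, and the non-conservative tire-ground and frictional terms that the G-DaTAΔV™ system explicitly incorporates.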

2016;():V014T14A013. doi:10.1115/IMECE2016-65169.

Neck and back loads of sit-down forklift operators have not been fully evaluated in the scientific literature. In this study, we evaluate the neck and back loads of an obese forklift operator who experiences a sudden vertical drop while operating a sit-down lift truck. A ballasted 50th-percentile male anthropomorphic test device (ATD) was used to measure the loads imparted to a sit-down forklift operator. Telemetry was used to remotely operate the sit-down lift truck with the ATD properly belted. The belted ATD and lift truck were traveling forks-leading along a stationary flatbed trailer when the right front forklift tire dropped into a defect in the floor. Several runs were performed at forklift travel speeds below 5 miles per hour (2.2 meters per second). Back loads of the ATD were compared to activities of daily living (ADLs), and neck and back loads of the ATD were compared to published human tolerance levels and the Injury Assessment Reference Values (IARVs) used in compliance testing. Review of the ADLs, IARVs, and tolerance data shows little correlation between the potential for spinal injury and experiencing a sudden drop while operating a sit-down lift truck.

Topics: Stress, Forklifts
2016;():V014T14A014. doi:10.1115/IMECE2016-65349.

This paper investigates the combined effects of impact direction and impact location on the serious-to-maximum (AIS3–6) thoracic injuries of drivers in frontal impacts, based on 1995–2009 data from the United States Department of Transportation (US DOT) National Automotive Sampling System/Crashworthiness Data System (NASS/CDS). The selected sample is limited to three impact locations near the driver side (distributed, offset, and corner) and two impact directions (pure frontal and oblique) treated as frontal, yielding a total of six crash configurations. The risks of thoracic injury for drivers in all frontal crash configurations are evaluated, and the relative risks with 95% confidence intervals are calculated. Binary logistic regressions are fitted to the datasets for further examination of the effects of impact direction and impact location on serious-to-maximum thoracic injuries, with occupant characteristics and crash severity included as explanatory variables. Overall, impact location and impact direction considerably influence thoracic injury pattern and severity for drivers. For distributed and corner deformation, oblique loading is approximately 3 times more likely to lead to thoracic injuries than pure frontal loading. Conversely, the relative risk is 3.44 for offset deformation, indicating that, for this impact location, frontal impact is more strongly associated with thoracic injuries than oblique impact. The effects of impact location and impact direction on serious-to-maximum injuries for three types of anatomical structures (organ, skeletal, and vessel) are assessed as well.
In addition to the crash-related variables (impact location and impact direction), results of the binary logistic regressions also indicate that crash severity (OR, 7.67–81.35) and occupant characteristics, including age (OR, 4.80–20.83), gender (OR, 1.16), and BMI (OR, 1.81), significantly affect the risks of thoracic injuries in frontal motor vehicle collisions.
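The odds-ratio machinery behind such results can be illustrated with a 2x2 table and the standard log-scale confidence interval; the counts below are invented for illustration and are not drawn from NASS/CDS:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed & injured,   b = exposed & uninjured,
    c = unexposed & injured, d = unexposed & uninjured."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# hypothetical counts: oblique vs. pure-frontal loading, AIS3+ thorax injury
or_, lower, upper = odds_ratio_ci(a=30, b=170, c=12, d=188)
```

An interval that excludes 1.0 (as here) is what allows statements like "approximately 3 times more likely" to be made with statistical support.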

2016;():V014T14A015. doi:10.1115/IMECE2016-66021.

A series of vehicle-to-pedestrian sideswipe impacts was computationally reconstructed: a fast-walking pedestrian was struck laterally by the side of a vehicle moving at 25 or 40 km/h, causing the pedestrian's body to rotate axially. Because of the limited interaction between the human body and the striking vehicle, the struck pedestrian was projected transversely from the vehicle and fell to the ground close to the first impact point. The potential severity of traumatic brain injury (TBI) was assessed using the linear and rotational acceleration pulses applied to the head and by measuring intracranial brain tissue deformation. We found that the TBI risk due to a secondary head strike with the ground can be much greater than that due to the primary head strike with the vehicle. Further, an 'effective' head mass, meff, was computed from the impulse and velocity change involved in the secondary head strike; it mostly exceeded the mass of the adult head-form impactor (4.5 kg) commonly used in current regulatory impact tests for pedestrian safety assessment. Our results suggest that the TBI risk due to a ground impact could be mitigated by actively controlling meff, because meff is closely associated with the pedestrian's landing posture in the final phase of ground contact.
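The effective head mass follows directly from the impulse-momentum balance for the head strike; a minimal sketch with hypothetical reconstruction values (the impulse and velocity change below are not the paper's measurements):

```python
def effective_head_mass_kg(impulse_ns, delta_v_ms):
    """meff from impulse-momentum: J = meff * delta_v, so meff = J / delta_v."""
    return impulse_ns / delta_v_ms

# hypothetical ground-contact values read from a reconstruction
m_eff = effective_head_mass_kg(impulse_ns=33.0, delta_v_ms=5.5)
exceeds_headform = m_eff > 4.5  # vs. the 4.5 kg regulatory adult head-form
```

A meff above 4.5 kg indicates that torso and neck mass are recruited into the head strike, which is the coupling a controlled landing posture would reduce.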


Safety Engineering and Risk Analysis: Safety Management

2016;():V014T14A016. doi:10.1115/IMECE2016-66050.

In this paper, social network analysis is applied to verify that everyday social relations are an important factor affecting individual and group behaviors and evacuation efficiency. Evacuation experiments were conducted through 15 multi-mode collaborative evacuation drills: 6 in 2014 and 9 in 2015. The same group of 30 evacuees from an undergraduate class was tracked for two years. Social network methods are used to study following, leading, and decision-making behaviors in the evacuation experiments. From questionnaires administered before and after the drills, and by defining the mutual trust degree (MTD) and the being-followed degree (BFD), an MTD relation matrix for daily life and a BFD relation matrix for evacuation are constructed. A normal social relation network graph and an emergency following-relation network graph are then drawn. Results show that small groups and leaders form spontaneously during group evacuation, and that the emergency leaders exhibit a certain stability, which correlates with gender and with relations in normal situations. When two emergency exits, a stair and an elevator, are available during evacuation, efficiency is best when the elevator and stair are located on the same side.

2016;():V014T14A017. doi:10.1115/IMECE2016-66284.

Elevator evacuation has been considered for high-rise building evacuation worldwide, especially in China. Elevator safety has been widely studied for this purpose, and technical standards are available in different countries. However, it is critical to understand human behavior in elevator evacuation before elevators can be used in building evacuation. Social relations (family, friends, classmates, etc.) are expected to play an important role in evacuation behavior; however, research is largely missing on social relations and their impact on the movement and behavior of evacuees. This paper investigates crowd evacuation in light of social relations. An evacuation experiment was conducted in an 11-storey office building with participants including individuals, families, and couples. Evacuation behaviors, especially decision-making, as well as important factors affecting evacuees' choices are discussed. Movement characteristics of evacuees in the stair are also analyzed. It is concluded that family members act together, for example taking the elevator or the stairs as a group. Females and evacuees in poor physical condition prefer to take the elevator during evacuation. Many pairs or small groups form owing to social relations, and these groups take more time to make decisions. Members of small groups may block traffic and slow the crowd. Evacuation efficiency changes greatly when small-group behavior and social relations are considered. The experimental results are helpful for determining effective rules and regulations for elevator evacuation in high-rise buildings.

Topics: Evacuations

Safety Engineering and Risk Analysis: Safety, Risk and Reliability (General)

2016;():V014T14A018. doi:10.1115/IMECE2016-65087.

Steam turbine mechanical breakdowns dominate equipment losses in the power generation and forest products industries. As steam turbines are typically custom-built, variations in design, operation, and maintenance practices across industries can result in loss drivers of differing significance. The present study compares the turbine loss drivers and effective condition monitoring for loss mitigation in both industries. Steam turbine loss events from the two industries during a recent 10-year period were first reviewed and classified into typical turbine loss scenarios. The contribution of each loss scenario to the total loss count and loss value was summarized and compared across the two industries. Subsequently, applicable turbine condition monitoring methods were identified for each loss scenario and evaluated with expert domain knowledge and the available loss data. These monitoring methods were finally prioritized according to their functional effectiveness in reducing turbine losses.
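The scenario-level summarization step can be sketched as a simple aggregation of loss events by scenario; the scenario names and dollar values below are invented placeholders, not the study's loss data:

```python
from collections import defaultdict

# hypothetical loss events: (loss scenario, loss value in USD)
events = [
    ("blade failure", 4.0e6), ("bearing failure", 0.8e6),
    ("blade failure", 6.5e6), ("overspeed", 9.0e6),
    ("bearing failure", 1.2e6), ("blade failure", 3.5e6),
]

count = defaultdict(int)
value = defaultdict(float)
for scenario, loss in events:
    count[scenario] += 1     # contribution to total loss count
    value[scenario] += loss  # contribution to total loss value

# Pareto-style prioritization by total loss value
by_value = sorted(value, key=value.get, reverse=True)
```

Ranking by count and by value can disagree (frequent cheap failures vs. rare expensive ones), which is why the study reports both.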

2016;():V014T14A019. doi:10.1115/IMECE2016-65095.

Vehicle door latch minimum force capability testing presently relies on uniaxial quasi-static loading conditions devised toward the middle of the last century. Current technology enables more sophisticated virtual testing of a broad range of systems. Door latch failures have been observed in vehicles under a variety of conditions, typically involving multi-axis loading. The loading conditions imposed on passenger vehicle side door latches during rollovers are not currently evaluated. Background on these conditions is reviewed, along with rollover crash test results, rollover crashes, and physical FMVSS 206 latch testing. In this paper, the creation and validation of a passenger vehicle door latch model is described. The multi-axis loading conditions observed at the latch location in virtual rollover testing are characterized. These loads are then applied in virtual testing of a latch in both the secondary and primary latch positions, and the results are compared with crash test and real-world rollover results for the same latch. The results indicate that while a door latch in the secondary latch position may meet the existing minimum uniaxial horizontal-plane loading requirements, incorporating multi-axis loading conditions may cause the latch to fail to accomplish its intended purpose at loads substantially below the FMVSS 206 uniaxial failure loads. The findings suggest the need to reexamine the relevance of existing door latch testing practices in light of the prevalence of rollover impacts and other impact conditions in today's vehicle fleet.

2016;():V014T14A020. doi:10.1115/IMECE2016-65184.

Fourier-transform-based frequency representation assumes that the target signal whose spectrum is to be computed is stationary and linear, and thus it is unable to track the time-varying characteristics of non-stationary signals, which are widespread in the physical world. Time-frequency representation (TFR) is a technique for revealing useful information in such signals, which makes TFR methods very attractive to the scientific and engineering world. Local mean decomposition (LMD) is a TFR technique used in many fields, e.g., machinery fault diagnosis. Similar to the Hilbert-Huang transform, it is an alternative approach that demodulates amplitude-modulated (AM) and frequency-modulated (FM) signals into a set of components, each of which is the product of an instantaneous envelope signal and a pure FM signal. The TFR can then be derived from the instantaneous envelope signal and the pure FM signal. However, the LMD-based TFR technique still has two limitations: the end effect and the mode mixing problem. Solutions to both depend heavily on three critical parameters of LMD: the boundary condition, the envelope estimation, and the sifting stopping criterion. Most reported studies aiming to improve the performance of LMD have focused on only one parameter at a time, ignoring the fact that the three parameters are not independent of one another and that all three are needed to address the end effect and mode mixing problems in LMD. In this paper, a robust optimization approach is proposed to improve the performance of LMD through an integrated framework for parameter selection in terms of boundary condition, envelope estimation, and sifting stopping criterion. The proposed optimization approach includes three components. First, the mirror extending method is employed to deal with the boundary condition problem. Second, a moving average is used as the smoothing algorithm for envelope estimation of the local mean and local magnitude in LMD.
The fixed subset size is the only parameter that usually needs to be predefined with prior knowledge. In this step, a self-adaptive method based on statistical theory is proposed to automatically determine a fixed subset size of the moving average for accurate envelope estimation. Third, building on the first two steps, a soft sifting stopping criterion is proposed that enables LMD to stop self-adaptively in each sifting process. In this last step, we define an objective function that considers both the global and local characteristics of a target signal, and based on it, a heuristic mechanism is proposed to automatically determine the optimal number of sifting iterations. Finally, numerical simulation results show the effectiveness of the robust LMD in mining time-frequency representation information.
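A minimal sketch of the moving-average envelope-estimation step (one LMD-style pass that forms local means and local magnitudes from successive extrema, then smooths them) might look like this; the window size and test signal are arbitrary choices, and the paper's self-adaptive subset-size selection and sifting criterion are not reproduced:

```python
import math

def moving_average(x, k):
    """Centered moving average with odd window size k; edges are padded
    by repeating the end values."""
    half = k // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    return [sum(padded[i:i + k]) / k for i in range(len(x))]

def extrema_indices(x):
    """Indices of strict local maxima and minima."""
    return [i for i in range(1, len(x) - 1)
            if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0]

# one LMD-style step on a two-tone test signal
signal = [math.sin(0.2 * i) + 0.3 * math.sin(1.7 * i) for i in range(200)]
ext = extrema_indices(signal)
local_mean = [(signal[i] + signal[j]) / 2.0 for i, j in zip(ext, ext[1:])]
local_mag = [abs(signal[i] - signal[j]) / 2.0 for i, j in zip(ext, ext[1:])]
smooth_mean = moving_average(local_mean, 5)
smooth_mag = moving_average(local_mag, 5)
```

The edge padding here is the crude stand-in for the boundary condition that the paper handles with mirror extension; in full LMD the smoothed mean is subtracted and the signal divided by the smoothed magnitude, iterating until a pure FM component remains.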

2016;():V014T14A021. doi:10.1115/IMECE2016-65382.

This paper presents a simple approach to evaluating the risk to transformer units from geomagnetic storms. A fuzzy-logic-based approach was used to develop a model that categorizes the transformers in a fleet into three risk categories: high, medium, and low. The model may be used as a first screening step to evaluate a fleet of transformers without conducting time-consuming simulations and studies. Critical factors that affect geomagnetically induced current (GIC) flow are used as input parameters to the model. These factors are location-specific, equipment-specific, and geomagnetic-storm-event-specific. Location-specific factors include the geomagnetic latitude of the location, the earth conductivity structure at the location, and the distance of the location from the coast. Equipment-specific factors include transformer rating and age. The storm-event-specific factor used is the geoelectric field strength, which indicates the return period of the geomagnetic disturbance event. The paper describes the use of this model to evaluate fleet risk for a 1-in-100-year event and for the Carrington event (the largest geomagnetic storm on record). The fuzzy logic membership functions for the inputs are described in detail, and the performance of the fuzzy logic model is evaluated.
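A minimal sketch of such a fuzzy screening model, with triangular membership functions over a normalized risk score and a weighted aggregation of factor scores, is shown below; the weights, breakpoints, and factor values are hypothetical, not the paper's calibrated membership functions:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_risk(score):
    """Map a normalized 0-1 risk score to high/medium/low by max membership."""
    memberships = {
        "low":    tri(score, -0.5, 0.0, 0.5),
        "medium": tri(score,  0.0, 0.5, 1.0),
        "high":   tri(score,  0.5, 1.0, 1.5),
    }
    return max(memberships, key=memberships.get), memberships

def transformer_score(latitude, conductivity, coast, age, rating):
    """Hypothetical weighted aggregation of normalized factor scores
    (location-, equipment-, and event-specific inputs)."""
    return (0.3 * latitude + 0.2 * conductivity + 0.15 * coast
            + 0.2 * age + 0.15 * rating)

label, m = classify_risk(transformer_score(0.9, 0.8, 0.7, 0.9, 0.6))
```

A production fuzzy model would apply memberships to each input and combine them through a rule base before defuzzifying; the single aggregated score here is only the simplest screening form.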

Topics: Fuzzy logic, Storms, Risk
2016;():V014T14A022. doi:10.1115/IMECE2016-65463.

The planar linkage mechanism (PLM) is a common class of mechanism, and the accuracy of its kinematic parameters, such as displacement and velocity, is crucial to its function. These parameters are not always precise owing to the indeterminacy of several influencing factors. This paper proposes a methodology to analyze the velocity reliability of a PLM based on the equal-effective mechanics model (EMM), considering the effects of joint clearance, component size randomness, and applied force randomness. More specifically, how these factors influence the EMM is studied, and the motion law and the risk-velocity point of a given part are then obtained. To model the joint clearance exactly, the concept of effective length is recommended. The Response Surface Method (RSM) is used to express the relationship between the input parameters and the output parameter (velocity), on which the reliability analysis is based. Parameter sensitivities are expressed using the Sobol variance-based method. Finally, an aircraft cabin door retraction mechanism (a complex planar linkage mechanism) is analyzed as an example, showing that the approach proposed in this article has practical value for engineering applications.
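Once a response surface has been fitted, the reliability estimate reduces to Monte Carlo sampling of the surrogate; the quadratic coefficients, input distributions, and velocity limit below are illustrative assumptions, not the fitted EMM surface:

```python
import random

def velocity_rsm(clearance, length_dev, force_dev):
    """Hypothetical fitted quadratic response surface for the part's
    velocity at the risk point (coefficients are illustrative only)."""
    return (1.2 + 0.8 * clearance + 0.5 * length_dev + 0.3 * force_dev
            + 0.4 * clearance**2 - 0.2 * clearance * force_dev)

rng = random.Random(7)
limit = 1.37           # hypothetical allowable velocity
n = 20000
failures = 0
for _ in range(n):
    v = velocity_rsm(rng.gauss(0.10, 0.02),   # joint clearance (eff. length)
                     rng.gauss(0.00, 0.01),   # component size deviation
                     rng.gauss(0.20, 0.05))   # applied force deviation
    if v > limit:
        failures += 1
reliability = 1.0 - failures / n
```

Sampling the cheap polynomial surrogate instead of the full mechanism model is what makes the RSM-based reliability analysis tractable; Sobol indices can then be estimated from the same surrogate.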

2016;():V014T14A023. doi:10.1115/IMECE2016-65504.

A typical procedure for remnant fatigue life (RFL) assessment is given in the BS 7910 standard, which provides two methodologies for estimating RFL: the S-N curve approach and the crack growth law approach (using Linear Elastic Fracture Mechanics (LEFM) principles). Owing to its higher accuracy, the latter is more commonly used for RFL assessment in the offshore industry. Nevertheless, accurate RFL prediction using the deterministic LEFM approach of BS 7910 is challenging, as the prediction is subject to numerous uncertainties, and BS 7910 provides no recommendations for handling uncertainty in the deterministic RFL assessment process. The most common way of dealing with this uncertainty is to employ Probabilistic Crack Growth (PCG) models for estimating the RFL. This manuscript explains the procedure for addressing uncertainty in the RFL assessment of process piping with the help of a numerical example, and the numerically obtained RFL estimate is used to demonstrate the calculation of an inspection interval.
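For the constant-amplitude case with a constant geometry factor Y, the deterministic LEFM life integral has a closed form; the flaw sizes and material constants below are hypothetical, chosen only to illustrate the calculation, and the inspection-interval rule is a simple example rather than a BS 7910 requirement:

```python
import math

def remnant_life_cycles(a0, af, C, m, dsigma, Y=1.0):
    """Closed-form integration of the Paris law da/dN = C*(dK)^m with
    dK = Y*dsigma*sqrt(pi*a), valid for m != 2. SI units: m, MPa."""
    k = C * (Y * dsigma * math.sqrt(math.pi)) ** m
    p = 1.0 - m / 2.0
    return (af**p - a0**p) / (k * p)

# hypothetical flaw in process piping: 2 mm initial, 20 mm allowable
N = remnant_life_cycles(a0=2e-3, af=20e-3, C=1e-11, m=3.0, dsigma=80.0)
inspection_interval = N / 2.0  # e.g. inspect at half the predicted RFL
```

A PCG treatment would replace the fixed C, m, and a0 with distributions and report the RFL as a probability of survival rather than a single cycle count.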

2016;():V014T14A024. doi:10.1115/IMECE2016-66619.

The gear door lock system (GDLS) is a mechanism with multiple failure modes, and ignoring the correlation between those modes often leads to large errors. In this work, copula theory is applied to model the correlation between two failure modes. A dynamic simulation model is constructed to acquire the joint statistical characteristics of the two failure modes. Gumbel and Clayton copula functions are each used to establish a reliability model, which shows that a single copula cannot fully capture the tail dependence. To solve this problem, a mixed copula function is constructed. The minimum squared Euclidean distance is adopted to estimate the parameters of the reliability models. The result of the mixed copula model is the closest to that of the Monte Carlo simulation, with a relative error at least 46% lower than that of the single-copula models.
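The mixture construction can be sketched directly from the two copula families; the marginal probabilities, mixture weight, and dependence parameters below are illustrative, not the fitted GDLS values:

```python
import math

def gumbel(u, v, theta):
    """Gumbel copula (theta >= 1): captures upper-tail dependence."""
    s = (-math.log(u))**theta + (-math.log(v))**theta
    return math.exp(-s ** (1.0 / theta))

def clayton(u, v, theta):
    """Clayton copula (theta > 0): captures lower-tail dependence."""
    return (u**(-theta) + v**(-theta) - 1.0) ** (-1.0 / theta)

def mixed(u, v, w, theta_g, theta_c):
    """Convex mixture so that both tails' dependence can be represented."""
    return w * gumbel(u, v, theta_g) + (1.0 - w) * clayton(u, v, theta_c)

# hypothetical marginal survival probabilities of the two failure modes
joint = mixed(0.9, 0.8, w=0.5, theta_g=2.0, theta_c=1.5)
```

In the paper's setting, the weight w and the two thetas are the parameters estimated by minimizing the squared Euclidean distance to the empirical copula.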

2016;():V014T14A025. doi:10.1115/IMECE2016-67151.

Given the rapidly proliferating varieties of nanomaterials, ongoing concerns that these novel materials may pose emerging occupational and environmental risks, and the possibility that each variety might pose a different risk due to its unique combination of material properties, researchers and regulators have been searching for methods to identify hazards and prioritize materials for further testing. While several screening tests and toxic risk models have been proposed, most have relied on cellular-level in vitro data. This foundation enables answers to be developed quickly for any material, but it is as yet unclear how this information translates to more realistic exposure scenarios in people or other, more complex animals. A quantitative evaluation of these models, or at least of the input variables to these models, in the context of rodent or human health outcomes is necessary before their classifications can be trusted for risk prioritization. This paper presents the results of a machine-learning-enabled meta-analysis of animal studies that uses significant descriptors from in vitro nanomaterial risk models to predict the relative toxicity of nanomaterials following pulmonary exposure in rodents. A series of highly non-linear random forest models (each an ensemble of 1,000 regression trees) was created to assess the maximum possible information value of the in vitro risk models and related methods of describing nanomaterial variants and their toxicity in rat and mouse experiments. The chemical descriptors and quantitative chemical property measurements, such as bond strength, surface charge, and dissolution potential, while important in describing observed differences in in vitro experiments, proved to provide little indication of the relative magnitude of inflammation in rodents (explained variance amounted to less than 32%).
Important factors in predicting rodent pulmonary inflammation, such as primary particle size and chemical type, demonstrate that there are critical differences between these two toxicity assays that cannot be captured by a series of in vitro tests alone. Predictive models relying primarily on these descriptors explained more than 62% of the variance of the short-term in vivo toxicity results. This means that existing proposed nanomaterial toxicity screening methods are inadequate as they currently stand: either the community must be content with slower and more expensive animal testing to evaluate nanomaterial risks, or further conceptual development of improved alternative in vitro screening methodologies is needed before manufacturers and regulators can rely on them to promote safer use of nanotechnology.
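The explained-variance (R²) metric quoted above can be computed for any predictor in a few lines; the observed inflammation scores and the two prediction sets below are invented solely to illustrate the metric, not the study's data:

```python
def explained_variance(y_true, y_pred):
    """R^2: fraction of outcome variance captured by the predictions."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

# hypothetical inflammation scores vs. two sets of model predictions
observed = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0]
chem_descriptor_model = [2.5, 2.0, 3.5, 3.0, 4.5, 4.0]  # weak predictor
size_chemistry_model = [1.5, 2.5, 2.5, 4.5, 4.0, 5.5]   # stronger predictor
r2_chem = explained_variance(observed, chem_descriptor_model)
r2_size = explained_variance(observed, size_chemistry_model)
```

The contrast between the two R² values mirrors the study's finding: chemical descriptors alone explain little of the in vivo outcome, while size and chemical type explain much more.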

2016;():V014T14A026. doi:10.1115/IMECE2016-67213.

This study sought to determine the safety of air cannons used at public events based on experimentally collected ballistic data. Specifically, the probable injuries to bystanders struck by various projectiles launched from air cannons were investigated. Because projectiles fired from air cannons decelerate rapidly as they travel through the air, this study focuses on the worst-case scenario: point-blank impacts. Based on data collected using a chronograph and a force plate, this study asserts that an air cannon operating under the conditions of this experiment can likely cause significant ocular, maxillofacial, laryngeal, and extremity injuries. To mitigate the risks posed by air cannons, this study recommends safety glasses for operators, mandatory operator training, automatic trigger-locking mechanisms, frequent inspections of the cannon, regulation of the projectiles that can be fired, and the establishment of a minimum firing distance between the operator and bystanders.
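The basic point-blank severity measures derivable from chronograph data can be sketched as follows; the projectile mass, muzzle velocity, and presented area are hypothetical values, not the study's measurements:

```python
def kinetic_energy_j(mass_kg, velocity_ms):
    """Point-blank kinetic energy from chronograph muzzle velocity."""
    return 0.5 * mass_kg * velocity_ms ** 2

def energy_density_j_cm2(mass_kg, velocity_ms, area_cm2):
    """Energy per unit presented area, a common blunt-impact correlate."""
    return kinetic_energy_j(mass_kg, velocity_ms) / area_cm2

# hypothetical rolled T-shirt projectile measured at the muzzle
ke = kinetic_energy_j(0.15, 40.0)
e_density = energy_density_j_cm2(0.15, 40.0, 80.0)
```

Energy density matters because a given kinetic energy concentrated over a small area (an edge-on impact to the eye or throat) is far more injurious than the same energy spread over the torso.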

Commentary by Dr. Valentin Fuster
2016;():V014T14A027. doi:10.1115/IMECE2016-67331.

Agriculture has long been considered one of the most hazardous industries in the U.S.; studies show that the worker fatality rate in agriculture in 2011 was over seven times the fatality rate for all private industry workers. According to the U.S. Department of Labor Occupational Safety and Health Administration (OSHA), many of the fatalities and injuries that occur each year are preventable through the use of protective equipment. Hazards associated with agricultural equipment such as farm tractors have been known for many years, and safety features have been introduced to mitigate, or in some cases eliminate, hazards associated with operating this type of equipment. This paper presents a historical analysis of fatal and nonfatal injury data to identify the potential effects of these safety features once introduced. The risks agricultural workers face, with an emphasis on hazards presented by farm equipment, and specifically farm tractors, are identified and quantified from recent data. For context, an introduction to the regulations and industry standards relevant to agricultural equipment is given, including the introduction of safety features such as roll-over protective structures (ROPS), which have been an industry standard requirement on tractors manufactured since the mid-1980s. Overall, recent data show continued reductions in the number of fatal injuries in the agricultural industry, particularly for farm tractors. However, further research is needed to clearly correlate the specific effects of safety mitigation devices with injuries associated with equipment in this industry.

Commentary by Dr. Valentin Fuster
2016;():V014T14A028. doi:10.1115/IMECE2016-67718.

Each year hundreds of ride-on mower operators are injured or killed as the result of rollover accidents. Several studies have indicated that rollover accidents are the most prominent type of fatal ride-on mower accident. Rollover accidents have occurred during actual mowing operations as well as during transport or loading onto transport equipment. The mowers themselves are designed and marketed for either commercial use or consumer use. However, commercial mowers are routinely sold to and used by ordinary consumers as there are no restrictions on such sales.

The mechanism of injury is typically the weight of the machine pinning the operator, or personal contact with a rotating cutting blade. Even relatively lightweight mowing machines have caused deaths by asphyxiation or drowning when an operator was pinned under the machine. Operators who avoid being trapped under the weight of the machine may come into contact with a rotating cutting blade and receive a serious injury, such as an amputation.

This paper presents a brief history of industry safety standards applicable to both consumer and commercial mowing machines. A discussion of available accident reports and statistics is presented. Sources of the data reviewed include the Consumer Product Safety Commission, the Occupational Safety and Health Administration, and the author’s personal investigations. Several common rollover accident modes have been identified.

While it is not possible to design a ride-on mower that could not roll over, design considerations should be utilized to minimize the propensity of the machine to roll and to minimize injuries should the machine roll over. This paper describes these design considerations.

Topics: Accidents, Design
Commentary by Dr. Valentin Fuster

Safety Engineering and Risk Analysis: User-Driven Innovation and Management

2016;():V014T14A029. doi:10.1115/IMECE2016-65181.

The growing need to design safety guards for industrial workers has led to experimentation in the field of ballistics, traditionally the domain of military research.

In the last few years, international standards for the safety of machine tools, such as ISO 23125:2010, have been developed to improve the ballistic protection of safety guards. Nevertheless, it is still possible to find on the market a large number of machine tools whose guards have doubtful protective characteristics.

The uncontrolled ejection of workpiece fragments or tools can cause very dangerous perforation of safety guards. Specific experimental tests, such as those conducted in the EU, have made it possible to write the appendices of the ISO standards on safety guard design for machine tools.

These tests are based on the impact between a standardized projectile, which represents an impacting fragment of variable size and energy, and a flat plate placed in the trajectory of the projectile. Penetration or buckling of the target establishes that a particular material of a given thickness is unsuitable for the design and production of safety guards.

However, these tests have the following limitations: they are valid only for a limited range of thicknesses and materials, only for perpendicular impacts on flat plates of about 500 mm × 500 mm, and only when the standardized penetrator is a cylinder with a prismatic head.

A further limitation arises in the design of real safety guards: it is difficult to take into account the curved geometry of guards, such as those typically used around the spindles of machine tools. It is also difficult to consider innovative materials beyond those provided for by the standards, and impossible to consider projected objects of irregular geometry, for example fragments of tools broken as a result of an incorrect manoeuvre by the machine user.

The main focus of this paper is to test the applicability of numerical methods for simulating the impact of standardized penetrators on steel sheets, for the numerical design and validation of industrial safety guards.

Correlations between experimental penetration tests reported in the international literature and optimized numerical tests are presented.

Commentary by Dr. Valentin Fuster
2016;():V014T14A030. doi:10.1115/IMECE2016-65565.

More than four decades have passed since the introduction of safety standards for Impact Attenuation Surfacing (IAS) used in children’s playgrounds. Falls in children’s playgrounds are a major source of injuries, and IAS is one of the best safety interventions deployed to reduce the incidence and severity of these injuries. Currently there are two criteria that measure the injury prevention performance of IAS: the Head Injury Criterion (HIC) and maximum acceleration (Gmax). Based on the ASTM playground safety standard F1292, the thresholds for HIC and Gmax are 1000 and 200g, respectively. If playground IAS complies with this standard, the number and severity of fall-related injuries in playgrounds should decrease. However, after the implementation of these standards, a high number of children continue to be hospitalized due to fall-related playground injuries. In this paper we tested ten samples according to the ASTM F1292 standard and propose an additional criterion, complementing HIC and Gmax, that can filter out hazardous IAS that technically complies with the current 1000 HIC and 200g safety thresholds. The proposed criterion, called the impulse force criterion (If), combines the change of momentum and the impact duration. The Gmax, HIC, and If results are presented graphically and numerically.
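For reference, both Gmax and HIC can be computed from a sampled deceleration trace of the impact headform. The sketch below follows the standard HIC definition (window duration times the windowed mean acceleration raised to the 2.5 power); the 36 ms window cap is an assumption borrowed from automotive practice, and the trace used is hypothetical, not drawn from the paper's test data.

```python
def gmax(accel_g):
    """Peak acceleration in g's (the quantity compared against the 200g threshold)."""
    return max(accel_g)

def hic(accel_g, dt, max_window=0.036):
    """Head Injury Criterion for a uniformly sampled acceleration trace (in g's).

    HIC = max over windows [t1, t2] of (t2 - t1) * (mean acceleration over window)**2.5
    max_window caps the window length; 36 ms is an assumed cap, and ASTM F1292
    test details may differ.
    """
    n = len(accel_g)
    cum = [0.0]                      # cumulative trapezoidal integral of a(t)
    for i in range(1, n):
        cum.append(cum[-1] + 0.5 * (accel_g[i - 1] + accel_g[i]) * dt)
    max_steps = int(round(max_window / dt))
    best = 0.0
    for i in range(n - 1):
        for j in range(i + 1, min(n, i + max_steps + 1)):
            T = (j - i) * dt
            avg = (cum[j] - cum[i]) / T
            if avg > 0:
                best = max(best, T * avg ** 2.5)
    return best
```

A surface can keep Gmax under 200g and HIC under 1000 while still transferring a large momentum change over a very short contact, which is the gap the proposed If criterion is intended to close.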

Commentary by Dr. Valentin Fuster
2016;():V014T14A031. doi:10.1115/IMECE2016-65942.

The Occupational Safety and Health Administration (OSHA) in the United States is responsible for the promulgation and enforcement of rules to protect and enhance worker safety in most medium and large commercial enterprises. To that end, the agency has collected and processed more than 240,000 atmospheric samples of chemicals and aerosols in a variety of workplaces over the past 30 years. Yet although the agency spends more than $500 million per year, and despite increasing overall employment, the published literature contains only targeted evaluations of OSHA sampling activity for specific issues such as formaldehyde or silica.

This paper presents a comprehensive analysis of this effort including assessment of the hazard potential distribution of sampled workplace atmospheres for all recorded pollutants over the time period from 1984 to 2011, the budgetary requirements of this activity over time in comparison to the assessed risk, and an evaluation of the probable effectiveness of such activity given changes in US industrial employment over that time period. The effectiveness of the sampling program is assessed according to specific criteria including the probability of detecting exceedances of the National Institute of Occupational Safety and Health (NIOSH) recommended exposure limit (REL) for individual pollutants, the trend in the overall hazard level of detected atmospheres, the coverage of industries by worker population, and the cost-efficiency of the program in identifying hazardous atmospheres. Special attention is given to lead, toluene, and various mineral- and metal-based particulate matter, which have all seen new rules implemented in the recent past.

Findings show that the number of samples per employed person has decreased markedly since the beginning of the study period and has become less aligned with changes in the population distribution among US regions; however, the probability of detecting a hazardous level of a chemical or aerosol pollutant has increased. Extrapolations from this information and the associated changes in industrial sector employment indicate that US workplace atmospheres are marginally less hazardous at the end of the study period than they were at the beginning.
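The exceedance and cost-efficiency metrics described above reduce to simple ratios over the sample set. A minimal sketch, in which the concentration values, REL, and per-sample cost are hypothetical placeholders rather than OSHA or NIOSH data:

```python
def exceedance_rate(concentrations, rel):
    """Fraction of samples whose concentration exceeds the recommended exposure limit (REL)."""
    if not concentrations:
        return 0.0
    return sum(1 for c in concentrations if c > rel) / len(concentrations)

def cost_per_detection(concentrations, rel, cost_per_sample):
    """Program cost per hazardous atmosphere identified; infinite if none are found."""
    hits = sum(1 for c in concentrations if c > rel)
    if hits == 0:
        return float('inf')
    return len(concentrations) * cost_per_sample / hits

# Hypothetical pollutant readings (mg/m3) against a placeholder REL of 0.05 mg/m3.
readings = [0.01, 0.06, 0.12, 0.03]
rate = exceedance_rate(readings, 0.05)
cost = cost_per_detection(readings, 0.05, cost_per_sample=250.0)
```

Tracking these two ratios over time is one way to express the paper's finding: fewer samples per worker, but a higher probability that any given sample detects a hazardous atmosphere.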

Commentary by Dr. Valentin Fuster
