ASME Conference Presenter Attendance Policy and Archival Proceedings

2016;():V011T00A001. doi:10.1115/IMECE2016-NS11.

This online compilation of papers from the ASME 2016 International Mechanical Engineering Congress and Exposition (IMECE2016) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference by an author of the paper, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.

Commentary by Dr. Valentin Fuster

Systems, Design, and Complexity: Applying a Social Context to Design

2016;():V011T15A001. doi:10.1115/IMECE2016-65120.

The cassowary, a large flightless bird native to Australia, is best known for the helmet-like casque on its head. This casque looks like a large bony fin but is actually a hollow structure made of keratin. Because the cassowary uses the casque to ram trees and knock down fruit, the structure is thought to provide much of the shock absorption needed to keep the head safe. This analysis is a first look into this possibility.

2016;():V011T15A002. doi:10.1115/IMECE2016-66818.

The ultimate goal of most design projects or endeavors should be to create a high-quality product, as quality typically leads to higher customer satisfaction and brand retention. Product design teams usually consist of engineers with varying backgrounds, personalities, and motivational drives. This paper presents an initial study on how the motivation of individuals affects the quality of their resulting designs. The ultimate goal of this research is to identify factors, such as motivational factors, that may prove useful in forming the most effective design teams. Initial data for this study stem from a senior-level capstone design course in a mechanical engineering program and take two forms: a design quality assessment, and a survey instrument that assesses the six distinguishing qualities of serious leisure, in particular its motivations and benefits. Design quality is measured by a group of engineering faculty and industry representatives using a proposed design quality rubric that scrutinizes factors such as customer satisfaction, manufacturability, and product fit and finish. Motivational factors are measured using the Serious Leisure Inventory and Measure (SLIM) short form, a 9-point Likert-style questionnaire. The goal of this research is to identify teaming strategies such that a group of designers will achieve the level of design quality desired for a specific product or project. Findings in this study indicate that teams composed of individuals who are largely motivated toward design-focused leisure, or conversely demotivated by personal aspects, tend to realize better design quality outcomes.

Topics: Design
2016;():V011T15A003. doi:10.1115/IMECE2016-67127.

New experimental synchronous multi-user CAD systems, such as BYU’s NXConnect, allow users to work together simultaneously in the same CAD model. Although multi-user CAD comes close to approximating the old collaborative drafting-table experience, NXConnect still falls short in a few key areas, the most notable of which is collaboration awareness; this shortfall results in redundant work and lost time. Other multi-user software was investigated to see which features help each user stay aware of what is being worked on. This investigation resulted in a proposed plugin for NXConnect consisting of two elements: a preview-based real-time feature update and a temporary plane indicating where a user is creating a sketch. Teams using NXConnect with and without the plugin were studied, and teams using the plugin showed a small improvement in working collaboratively.

2016;():V011T15A004. doi:10.1115/IMECE2016-67139.

A transformative research paradigm is embedded in knowledge mobilization processes involving close collaboration between researchers and the community. The research presents the development of an integrated, connected food ecosystem that, because of its fundamental design and its use of appropriate smart technology, tends naturally to create inclusion and prosperity opportunities for the many rather than the few. The research relies on multi-stakeholder participation to develop appropriate technologies that enhance economic activity among unemployed youths in Johannesburg, South Africa. A human-centered, systems engineering approach is used to develop a pilot project that promotes an integrated, online, technologically supported food system. The research is also concerned with how to measure the impact of the intervention on food resilience resulting from urban farming. This paper presents a systems analysis of the current local food network and the proposed integrated solutions for a pilot project, planned and implemented as a minimal viable product that can be tested in the market.

Topics: Design

Systems, Design, and Complexity: CAD, CAM and CAE Design

2016;():V011T15A005. doi:10.1115/IMECE2016-66502.

Wear simulation for a typical revolute joint of a gear door lock under periodic load is implemented with a three-dimensional finite element model based on Archard’s wear law. Wear of both the journal and the shaft, which are of different materials, is taken into account. The load and motion of the joint, which serve as inputs to the wear simulation, are acquired from the dynamic solution of the mechanism. Displacements of the nodes experiencing wear are governed by the user subroutine UMESHMOTION in the finite element code ABAQUS, by which the wear direction and wear depth are calculated. ALE adaptive meshing technology is used to maintain a high-quality mesh. An effective adaptive extrapolation method is presented to minimize computational time while preserving accuracy in the processing algorithm. Finally, the influence of clearances on the wear simulation is investigated with the developed model.
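The Archard relation driving the nodal wear update can be sketched compactly. The code below is a hedged illustration of the incremental form dh = k·p·ds used in such UMESHMOTION-style updates; the wear coefficient and load history are invented for illustration and are not the paper’s values.

```python
def archard_wear_depth(k, pressures, slide_increments):
    """Accumulate wear depth at one surface node via Archard's law:
    dh = k * p * ds, summed over the load history.

    k                -- dimensional wear coefficient (illustrative value)
    pressures        -- contact pressure at each load step (MPa)
    slide_increments -- incremental sliding distance per step (mm)
    """
    h = 0.0
    for p, ds in zip(pressures, slide_increments):
        h += k * p * ds  # incremental wear depth for this step
    return h

# Invented load history: three steps of equal sliding distance.
depth = archard_wear_depth(k=1e-7,
                           pressures=[50.0, 60.0, 55.0],
                           slide_increments=[0.2, 0.2, 0.2])
```

In an adaptive-extrapolation scheme of the kind the abstract mentions, such per-cycle increments would be scaled up over many cycles rather than simulated one by one.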

2016;():V011T15A006. doi:10.1115/IMECE2016-66948.

Leveraging virtual reality (VR) technology to enhance engineering design reviews has been an area of significant interest for researchers since the advent of modern VR. The ability to interact meaningfully with 3D engineering models in these VR design reviews is an important, though often neglected, capability because of the difficulty of translating native CAD data into VR-compatible file formats. An automated synchronization interface was developed between a VR design review environment and a commercial CAD package that streamlines the data translation process and enables enhanced visualization and manipulation tools. User experiments were performed to explore the hypothesis that allowing users to perform CAD-like view transformations and geometric manipulations in VR design reviews improves design understanding and decision making. Analysis of the experimental results shows that enhanced interaction tools provide statistically significant advantages over a baseline VR design review environment for complex 3D models.

2016;():V011T15A007. doi:10.1115/IMECE2016-67753.

For engineering drawings and CAD definitions, a suitable datum definition for datum features of circles, spheres, and cylinders has been sought by standards writers for decades. The maximum-inscribed and minimum-circumscribed definitions that have often been used have known stability problems in many common industrial cases. Examples of these problem cases include cylindrical datum features having an hourglass shape, a barrel shape, or the shape of a tapered shaft, and circular or spherical datum features that are dimpled. For this reason, many resort to a least-squares fit whose diameter is scaled to be just inside (or just outside) the datum feature. However, we show that this shifted least-squares solution has its own drawbacks.

This paper investigates a new datum definition based on a constrained least-squares criterion. The use of this definition for datum planes has already elegantly solved the problem of providing a full contact solution when that solution is stable, while providing a balanced, stable solution in the case of rocker conditions. With that success as motivation, we now investigate using this definition for circles, spheres, and cylinders.

We demonstrate that the constrained least-squares criterion is an excellent choice for several known problematic cases. This datum definition maintains stability in cases where maximum-inscribed fits are not unique and thus not stable, yet it also adheres closely to the maximum-inscribed solution when that solution is stable. We also show that the constrained least-squares solution has clear benefits over the shifted least-squares solution.

This is the first computational investigation into the behavior of the constrained least-squares criterion as a possible datum definition for these features. While not fully comprehensive, these initial findings indicate that the constrained least-squares approach is a safe and advantageous datum definition choice, and they provide substantial optimism that future cases will yield similarly favorable results.
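To make the comparison concrete, here is a minimal numeric sketch of the shifted least-squares evaluation discussed above, under the simplifying assumption that the circle center is held at the centroid of the measured points. The full definition also optimizes the center, and the constrained least-squares criterion additionally enforces the one-sided material condition during the fit; neither refinement is shown here, and the point data are invented.

```python
import math

def shifted_lsq_radius(points):
    """Least-squares (mean) radius about the point centroid, plus the
    shifted radius scaled to lie just inside the datum feature."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    r_lsq = sum(dists) / len(dists)   # plain least-squares radius
    r_shifted = min(dists)            # shifted to be just inside
    return r_lsq, r_shifted

# A dimpled circular feature (invented data): four points at the
# nominal radius 2.0, plus two dimple points at radius 1.5.
pts = [(2, 0), (-2, 0), (0, 2), (0, -2), (0, 1.5), (0, -1.5)]
r_lsq, r_shifted = shifted_lsq_radius(pts)
```

The two dimple points drag the shifted radius all the way to 1.5 even though the feature is essentially a radius-2 circle, which illustrates the kind of sensitivity that motivates looking beyond the shifted least-squares definition.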

Topics: Cylinders
2016;():V011T15A008. doi:10.1115/IMECE2016-67854.

This paper concerns the response of a single-layered strand cable of helical wires with wires-to-core contact under free and constant curvature constrained bending. The stranded cable under static-loading conditions experiences any combination of tension, torsion and bending. A linear finite element model for helical wire strand cable for both bending cases was developed and their bending response for various load steps was analyzed. The responses thus observed were compared with the theoretical prediction reported by the present authors in the literature. The present authors have developed a theoretical model using the thin rod theory and presented a linear stiffness matrix establishing the relationship between the axial, torsional and flexural rigidities and the coupling parameters of the cable.

Topics: Cables

Systems, Design, and Complexity: Case Studies in Systems, Design and Complexity

2016;():V011T15A009. doi:10.1115/IMECE2016-65661.

Nowadays, in many industrial applications, e.g. household electrical appliances, a robust and safe control of the variables involved in analyzing product performance is necessary. In addition, recent eco-design directives require increasingly eco-friendly and eco-efficient products that preserve high performance at low power consumption. For these reasons, physical product prototypes require many expensive and complex tests in terms of time, resources, and qualified personnel. To overcome these limitations, the proposed approach focuses on virtual prototyping tools, which support and reduce the expensive physical experiments.

The main objective of this paper is the development, implementation, and testing of an innovative methodology that could improve the sustainable design of induction hobs.

Induction heating applied to domestic cooking has evolved significantly since the first cooking hobs appeared. Issues such as the maximum power available for heating a pot, the dimensional compactness of the hobs, and inverter electronics efficiency have all seen great development.

The proposed methodology provides a multi-physics model able to estimate the efficiency of induction hobs starting from the design data of the project. In particular, the multi-physics model is composed of an electromagnetic simulation and a thermal simulation. The electromagnetic simulation, starting from electrical values such as voltage, current, and frequency, simulates the eddy currents induced in the bottom of the pot, whose resistance leads to Joulean heating of the material. The thermal simulation estimates the energy consumption during the operational phase and the temperatures reached by the materials. The thermal power obtained from Joulean heating is therefore, at the same time, the output of the electromagnetic simulation and the input of the thermal one.

The proposed model can be applied to design products and simulate their performance under different operating conditions, such as different types of cookers, different coils, and different materials. Through virtual prototyping tools it is possible to control the heat flux in the whole system (stove, pot, water) and to evaluate the energy efficiency during the operational phase. The proposed tool makes the product engineer more aware of decision-making strategies for achieving energy savings, calculated over the whole life cycle.
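The one-way coupling described above, in which Joulean power from the electromagnetic step feeds the thermal step, can be illustrated with a deliberately crude lumped-mass sketch. Losses are neglected and all values are invented; the paper’s thermal model is a full simulation, not this shortcut.

```python
def water_temperature_rise(power_w, time_s, mass_kg, c_j_per_kg_k=4186.0):
    """Temperature rise (K) of a water load absorbing Joulean power
    with no losses: dT = P * t / (m * c)."""
    return power_w * time_s / (mass_kg * c_j_per_kg_k)

# 1.8 kW of induced Joulean power into 1 kg of water for 60 s
# (illustrative numbers only):
dT = water_temperature_rise(1800.0, 60.0, 1.0)
```

Comparing such an ideal temperature rise with the simulated (or measured) one is one simple way to express the operational efficiency the model is meant to estimate.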

2016;():V011T15A010. doi:10.1115/IMECE2016-66582.

Servo-driven hydraulic power units have been implemented in some sectors of industry to counteract rising energy costs and reduce the ecological footprint. The advantages associated with these technologies have motivated us to research a new control approach that allows their independent use, with reduced implementation costs and high efficiency.

This investigation develops new solutions to implement and improve a volumetric control methodology for oil-hydraulic power units that aims to produce and provide only the hydraulic power the actuators strictly need. The approach is based on a balance of the flows present in a hydraulic circuit, reducing the pressure ripple generated by the pumps, valves, and actuators through a hydraulic accumulator.

The work begins with the mathematical modeling of a volumetric oil-hydraulic power unit designed to demonstrate the concepts of the project, its components, and the associated advantages. The models presented are intended to exemplify the new control strategy and to explore the possibilities that arise from using this new methodology for oil-hydraulic power units.

Two simulations were performed in MATLAB Simulink for two distinct hydraulic circuits and their control strategies: resistive control, and volumetric control using a servo motor.

In the resistive control, an internal gear pump driven by an induction motor at constant speed uses a pressure-regulating valve to divert the excess flow to the reservoir. Despite its low efficiency, this type of assembly has very low cost and very good dynamics compared with traditional volumetric drive systems, avoiding the need for dedicated engineering.

The volumetric control uses an internal gear pump (to allow direct comparison with the resistive control method), a servo motor, a hydraulic accumulator, and a directional valve that prevents the accumulator from draining into the reservoir during downtimes. The controller establishes a direct relationship between the accumulator volume and the pressure of the hydraulic circuit.

The control methodology discussed throughout this work reveals an alternative volumetric control solution worth considering, whether for new equipment or for retrofitting, even given the different objectives of existing technologies available in the market.

The simulations support conclusions about the energy-saving and environmental advantages of the presented volumetric control system compared with existing systems on the market.
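The accumulator volume-pressure relationship the controller relies on can be sketched with the polytropic gas law for a gas-charged accumulator. All numbers below are illustrative assumptions, not parameters from the paper.

```python
def stored_oil_volume(p, p0=90e5, v0=10e-3, n=1.4):
    """Oil volume (m^3) held in a gas-charged accumulator at circuit
    pressure p (Pa), from the polytropic law p0 * v0**n = p * vg**n.

    p0 -- gas precharge pressure (Pa, assumed)
    v0 -- gas volume at precharge (m^3, assumed)
    n  -- polytropic exponent (1.0 isothermal .. 1.4 adiabatic)
    """
    if p <= p0:
        return 0.0                    # bladder fully expanded: no oil stored
    vg = v0 * (p0 / p) ** (1.0 / n)   # compressed gas volume at pressure p
    return v0 - vg                    # gas volume displaced by stored oil
```

Inverting this relationship is what lets a controller infer the stored oil volume from a circuit pressure measurement, the direct volume-pressure link the abstract describes.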

2016;():V011T15A011. doi:10.1115/IMECE2016-66998.

Electro-mechanical device complexity exists in everyday items from cell phones to automobiles to vacuum cleaners. Generally, product complexity is one of the least quantifiable characteristics in the design cycle, with arguably some of the greatest implications. A high level of device complexity carries a negative connotation and is usually considered an attribute a designer should attempt to mitigate. Alternatively, a low level of device complexity may lead designers and marketers to question a product’s usefulness. Whether complexity is a necessary aspect of a design or a hindrance to be minimized or eliminated depends on how complexity is framed. Some studies in the literature attempt to measure complexity, yet there is no unified measure that captures the complexity of a product or system during the design phases or upon product or system realization. Complexity is defined in many ways, at different levels of abstraction, and at different stages of design, and it therefore becomes highly contextual and subjective at best. An established and repeatable methodology for calculating the complexity of existing products in the marketplace is necessary. Once a measure of complexity is agreed upon at the post-design stage, we can look to earlier phases of design to see whether insights are observable. Identifying complexity early in the design cycle is paramount to strategic resource allocation. This study considers the Generalized Complexity Index (GCI) measure put forth by Jacobs [1] and expands upon it to include functional modeling as a key component in determining an indicative complexity metric. Functional modeling is a method used to abstract system or product specifications into a general framework that represents a function-based design solution. Complexity metrics are developed at the functional and completed-design levels and used for comparison.
Thirty common household products retrieved from an online design repository [2], as well as seven senior capstone design projects, were evaluated using the GCI. A modification to the GCI equation is proposed, and, to gain a relative scale of complexity within the data, a ranked complexity metric was developed and utilized. The magnitude of the ranked complexity metric is indicative only of the hierarchical status of a product within the data set and is therefore not comparable to GCI values. Though Jacobs’ GCI worked well in his study, it does not represent a meaningful complexity measure when applied to the data in this study. This study is an initial attempt to apply an independent data set to Jacobs’ GCI model, with the perhaps greater implication that, with respect to products, complexity is multifaceted and is not accurately represented by interconnectedness, multiplicity, and diversity alone.
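The three ingredients named above, multiplicity, interconnectedness, and diversity, can be tallied from a product’s functional model viewed as a typed graph. The combination rule below (a plain sum) is a hypothetical stand-in for illustration only, not Jacobs’ GCI formula, and the ranking it yields is ordinal, like the study’s ranked metric.

```python
def complexity_ingredients(elements, connections):
    """Tally (multiplicity, interconnectedness, diversity) for a product
    given an element -> type mapping and a list of connections."""
    multiplicity = len(elements)            # number of elements
    interconnectedness = len(connections)   # number of relations
    diversity = len(set(elements.values())) # number of distinct types
    return multiplicity, interconnectedness, diversity

def ranked_by_complexity(products):
    """Rank product names, most complex first, by a naive sum of the
    three ingredients (hypothetical combination rule)."""
    scores = {name: sum(complexity_ingredients(elems, conns))
              for name, (elems, conns) in products.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Two toy products (invented, not drawn from the repository):
products = {
    "kettle": ({"e1": "heater", "e2": "switch", "e3": "vessel"},
               [("e1", "e2"), ("e1", "e3")]),
    "pen":    ({"e1": "body", "e2": "ink"}, [("e1", "e2")]),
}
order = ranked_by_complexity(products)
```

Only the ordering of `order` is meaningful here; as the study notes for its own ranked metric, the magnitudes are not comparable to GCI values.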

Topics: Design
2016;():V011T15A012. doi:10.1115/IMECE2016-67207.

Piping supports and restraints are required to follow the design requirements of the ASME B&PV Code, Section III, Subsection NF. One of these requirements indicates the necessity of calculating the critical buckling stresses for members subjected to compressive loading. This paper discusses the prescribed requirements in the Code that specifically address the stability and buckling load capacities of linear piping restraints (i.e., struts). The finite element modeling of various strut geometries and the results of buckling analyses of slender (slenderness ratio Kl/r greater than or equal to 100) structural members using various finite element solution techniques are presented herein. Specifically, three types of finite element analysis are conducted to define the critical buckling load for the subject structural member: the traditional linear (eigenvalue) Euler method; the nonlinear, second-order large-deformation method; and, finally, the nonlinear large-deformation method incorporating nonlinear elastic-plastic material behavior. These techniques are employed for a hollow cylindrical structural member (i.e., a strut assembly) with varying cross sections along its length. The finite element model consists of three-dimensional hexahedral elements in combination with beam elements in the general-purpose finite element solver ANSYS. The critical buckling load is calculated in each case, thereby predicting the load at which instability will occur in the structural member. The results obtained from the aforementioned techniques are then compared both numerically and qualitatively, with an appropriate explanation of the purpose and usefulness of each particular result with respect to the intent of the ASME B&PV Code, Section III, Subsection NF requirements.
The results show significant variations (as expected) based on differences in the assumptions and techniques employed in the respective analyses.
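For reference, the linear eigenvalue baseline in such a comparison is the Euler estimate P_cr = π²EI/(KL)². The sketch below applies it to an invented hollow circular section; the paper’s actual strut dimensions are not reproduced.

```python
import math

def euler_critical_load(E, I, K, L):
    """Euler critical buckling load (N) for effective length K*L."""
    return math.pi ** 2 * E * I / (K * L) ** 2

def slenderness_ratio(K, L, I, A):
    """Kl/r with r = sqrt(I/A); values >= 100 count as slender here."""
    return K * L / math.sqrt(I / A)

# Invented hollow circular strut: 60 mm OD, 50 mm ID, 2.5 m, steel.
D, d = 0.060, 0.050
I = math.pi * (D**4 - d**4) / 64     # second moment of area (m^4)
A = math.pi * (D**2 - d**2) / 4      # cross-sectional area (m^2)
P_cr = euler_critical_load(E=200e9, I=I, K=1.0, L=2.5)
```

This eigenvalue estimate is an upper bound in practice; the nonlinear large-deformation and elastic-plastic analyses the paper compares against it generally predict lower instability loads.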

Topics: Pipes , Buckling

Systems, Design, and Complexity: Design Under Uncertainty

2016;():V011T15A013. doi:10.1115/IMECE2016-65167.

This study suggests an innovative method for analyzing the behavior of experimental hydraulic systems that can be used as a system design tool. The design of experimental hydraulic systems can be a complicated challenge. Unlike industrial facilities, where the process is designed to work at a single duty point or within a narrow range of duty points, so that classical design methods can be used, experimental systems, in which any duty point within a vast range may be needed, may require much more advanced design methods. Such an example is presented in this work, where modeling of the flow characteristics of an experimental hydraulic system is required. In this example, the system should provide any desired water flow rate in the wide range of 10–450 m3/h (an exceptional ratio for industrial facilities) with an accuracy of ±2% while maintaining good stability. A review of former studies in this field did not reveal any suggested methods for such a design. In addition, standard process and piping software were examined and found unsuitable for this challenge. Therefore, an independent hydraulic model was developed. The modeling method is based on conventional analytical correlations of hydraulic resistance for fully developed turbulent flow, on modeling the centrifugal pump’s Pressure-Flow Curve (PFC), which depends on the frequency of the supplied voltage, and on modeling the varying hydraulic resistance of a throttle control valve. This technique enables analytic evaluation of the possible flow rates in the system, inspection of system stability, and a parametric study of the piping characteristics (diameter, valve type, etc.) for two design alternatives. The code of the model was written on the free, open-source Scilab platform. The results of the analytical model were illustrated in three-dimensional plots displaying the system’s flow rate versus the pump’s frequency and the degree of rotation of the throttle control valve.
According to these results, the preferred alternative was chosen for manufacture.
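A minimal analogue of such a model balances an affinity-scaled pump curve against a quadratic valve-plus-pipe resistance. All coefficients below are invented stand-ins, not the system’s calibrated values, and the real model resolves each resistance element separately.

```python
import math

def pump_head(q, f, a=50.0, b=4.0e-4, f0=50.0):
    """Pump pressure-flow curve H(Q) = a*(f/f0)^2 - b*Q^2
    (head in m, Q in m^3/h), scaled with supply frequency f
    via the affinity laws."""
    s = f / f0
    return a * s ** 2 - b * q ** 2

def operating_flow(f, r_valve, a=50.0, b=4.0e-4, f0=50.0):
    """Flow where the pump head equals the R*Q^2 valve + pipe losses,
    i.e. a*(f/f0)^2 = (b + R) * Q^2."""
    s = f / f0
    return s * math.sqrt(a / (b + r_valve))

# Fully open valve (R -> 0) at the nominal 50 Hz supply:
q = operating_flow(50.0, 0.0)
# Throttling (larger R) or lowering f moves the duty point down,
# which is how the wide 10-450 m^3/h range would be swept.
```

Sweeping `f` and `r_valve` over grids reproduces, in miniature, the three-dimensional flow-rate surfaces the study plots against pump frequency and valve rotation.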

2016;():V011T15A014. doi:10.1115/IMECE2016-66027.

The verification of complex engineering systems from the very early phases of the design process is of primary importance, as it directly influences performance and system functionality. Traditional design approaches use simulations as a set of tools during the verification process. However, the current trend in industry is to base the design process on simulations, as in simulation-based design. This perspective will therefore steer the design process toward a verification-driven design process. This paper reviews the definitions of the design process, the engineering design verification process, and the decision-making process. The paper describes an innovative verification-based design process using a reliability-based stochastic Petri net approach. The scope includes aspects of complex system design and demonstrates a method to verify a concept design against its requirements using a quantitative approach. Finally, the method is applied to a case study, and the results as well as future developments are discussed in the last section.

Topics: Reliability , Design
2016;():V011T15A015. doi:10.1115/IMECE2016-67714.

Robust design theories and methods have been widely implemented in automobile design. However, most existing studies on robust vehicle design consider only the uncertainties in the structural sizes of components, while those in welds and materials have not been well studied in an integrated manner. This research proposes a robust design framework that can incorporate various sources of uncertainty. The proposed framework includes several major steps: uncertainty quantification based on tests and maximum likelihood estimation, weld significance analysis, and metamodel-based robust design. Uncertainties in material properties and joint strength are quantified by test and statistical approaches. Weld significance analysis is employed to identify the welds that most affect the performances of interest. The metamodel improves the efficiency of the robust design. The proposed framework is successfully demonstrated through a vehicle front structure design optimization. The results show that the proposed method, which considers uncertainties in material, welds, and structural size, can effectively reduce the weight of the vehicle while satisfying the safety constraints.

2016;():V011T15A016. doi:10.1115/IMECE2016-67918.

In this paper, numerical inverse analysis is used to predict the properties of a heat-generating material by measuring the temperature at the outer boundary. The accuracy and efficiency of the method are enhanced by using accurate sensitivity information obtained with the Semi-Analytical Complex Variable Method (CVSAM). A steady-state heat transfer analysis with an axisymmetric model was carried out using the finite volume method. The temperature obtained from this analysis is used as input to the inverse method. The objective function for the optimization is the difference between the computed and measured temperatures. This function was minimized with the Conjugate Gradient Method (CGM), using the sensitivity coefficients obtained from the CVSAM in the gradient-based optimization. The robustness of the developed approach was evaluated by adding Gaussian noise to the temperature values. The material properties predicted by this method show close agreement with the actual values.
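For a scalar analytic function, the complex-variable sensitivity idea at the heart of the CVSAM reduces to the complex-step derivative, df/dx ≈ Im f(x + ih)/h, which avoids the subtractive cancellation of finite differences. Here is a minimal sketch; the paper applies the semi-analytical variant inside a finite volume solver, which is not reproduced.

```python
def complex_step_derivative(f, x, h=1e-30):
    """Derivative of an analytic f at x via the complex-step formula.
    No subtraction occurs, so h can be tiny without round-off loss."""
    return f(complex(x, h)).imag / h

# d/dx of x**3 at x = 2 (exact value 12):
d = complex_step_derivative(lambda z: z ** 3, 2.0)
```

Unlike a forward difference, the step size here can be driven down to 1e-30 with no loss of accuracy, which is why the method yields essentially exact gradients for the CGM optimizer.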


Systems, Design, and Complexity: General

2016;():V011T15A017. doi:10.1115/IMECE2016-65123.

The ISO and AGMA Standards are the most commonly used accurate approaches for designing a gear. However, the most important gear design outputs, module (m) and face width (b), always differ between the two approaches, even under the same input parameters. Gear designers therefore require detailed knowledge of the relative comparison of design outputs, including cost, and a translation technique using conversion factors between the standards is a stated need in the literature. Hence, this paper first obtains dimensionless gear rating numbers (GRi) to rate the design results of spur gears determined from both the ISO 6336 and ANSI/AGMA 2101-D04 Standards, and then derives correlation equations to generate dimensionless conversion factors (CFs) to convert the design results obtained from ISO to AGMA. The CFs allow designers to move from one standard to another very easily, enabling engineering students and designers to meet the ever-changing needs of the global market.

Topics: Gears
2016;():V011T15A018. doi:10.1115/IMECE2016-65426.

A simplified design method (SDM) for spur gears is presented. The Hertz contact stress and Lewis root bending stress capacity models for spur gears have been reformulated and formatted into simplified forms. A scheme is suggested for estimating the AGMA J-factor in Lewis root bending stress for spur gears from a single curve for both pinion and gear instead of the conventional two curves. A service load factor is introduced in gear design that accounts for different conventional rated load modifier factors. It represents a magnification factor for the rated load in a gear design problem.

Two design examples are considered as applications of the stress capacity models. In Example 1, the Hertz contact stress of the SDM deviates from the AGMA value by 1.95%. In Example 2, the variance between the contact stress of the SDM and FEM is 1.184%, while that between the SDM and AGMA is 0.09%. The root bending stresses of AGMA and the SDM differ by 1.44% for the pinion in Example 1 and by 6.59% for the gear. The difference between the root bending stresses of AGMA and the SDM for the pinion and gear in Example 2 is 0.18%. These examples suggest that the new simplified method gives results that compare very favorably with both AGMA and FEM solutions. The simplified method is recommended mainly for preliminary design, when quick but reliable solutions are sought.
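The flavor of the reformulated checks can be conveyed with the classical Lewis root bending stress in metric form, σ = K_s·W_t/(b·m·Y), where a single service load factor K_s magnifies the rated load. This is a hedged sketch of the idea, not the paper’s exact SDM equations, and the numbers are invented.

```python
def lewis_bending_stress(w_t, b, m, Y, K_s=1.0):
    """Lewis root bending stress (MPa) for tangential load w_t (N),
    face width b (mm), module m (mm), and Lewis form factor Y, with
    a single service load factor K_s magnifying the rated load."""
    return K_s * w_t / (b * m * Y)

# Invented spur gear: 2 kN tangential load, b = 25 mm, m = 4 mm, Y = 0.32.
sigma = lewis_bending_stress(2000.0, 25.0, 4.0, 0.32)
```

Folding the conventional rated-load modifiers into one K_s, as the abstract describes, keeps the preliminary-design check to a single multiplication rather than a chain of separate factors.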

Topics: Design , Spur gears
2016;():V011T15A019. doi:10.1115/IMECE2016-65820.

Based on space curve meshing theory, we present a novel geometric design of a circular-arc helical gear mechanism for parallel transmission with convex-concave circular-arc profiles. The parametric equations describing the contact curves for both the driving and driven gears were deduced from the space curve meshing equations, and parametric equations for calculating the convex-concave circular-arc profiles were established for both internal and external meshing. Furthermore, a formula for the contact ratio was deduced, and the factors influencing the contact ratio are discussed. Using the deduced equations, several numerical examples were considered to validate the contact ratio equation. The circular-arc helical gear mechanism investigated in this study shows high transmission performance for practical applications, offering pure rolling contact, a high contact ratio, and large comprehensive strength.


Systems, Design, and Complexity: Optimization

2016;():V011T15A020. doi:10.1115/IMECE2016-65142.

Compliant mechanisms are widely used in industry and have gained popularity in the past few decades with advancements in smart materials and micro-electro-mechanical systems (MEMS). Compliant mechanisms offer huge advantages over classical rigid linkages due to their flexible behavior. Such flexible mechanisms reduce production time and cost, especially because they eliminate the need for joints, which can become troublesome at the micro level in manufacturing and assembly. By avoiding multiple joints in the design and their consequent clearances, a compliant mechanism can offer higher precision than its rigid counterpart. However, these advantages come at a price: compliant mechanisms are more challenging to design and analyze. Many compliant mechanisms are designed to undergo relatively large deflections, which in turn impose geometric nonlinearities. In the past, many compliant designs were based on intuition, experience, and trial and error. Later, many theories were developed to assist in designing and analyzing compliant mechanisms before proceeding to the manufacturing phase. This paper covers topology optimization of compliant structures using beam elements. The swarm intelligence technique known as Ant Search (AS) is used to find the optimum design that satisfies the required mechanism performance. A case study involving the topology design of a miniature compliant displacement amplifier is presented, and the results are compared with the finite element solver ANSYS. The optimized topology produced a much larger amplification ratio than that presented in the literature. The results show the high potential of swarm intelligence, and AS in particular, for solving multi-disciplinary optimization problems that need not be limited to designs involving physical paths.

2016;():V011T15A021. doi:10.1115/IMECE2016-65537.

Structural topology optimization seeks to distribute material in a design domain to produce the stiffest structure for a given mass or the lightest structure for a given strength. In the density-based approach to topology optimization, the design domain is divided into small elements and an optimization algorithm determines whether each element in the optimal design contains solid material or void. Solutions obtained using this method may suffer from a variety of issues, such as a checkerboard pattern of solid and void elements, large transition regions between solid and void parts of the structure, and dependence of the final solution on the initial mesh. Typically, these issues are mitigated using filters, projection functions, or a combination of the two. However, applying these techniques requires the user to select a few parameter values and the optimal design strongly depends on the selected parameters.

This work presents an alternative approach to addressing the aforementioned issues in density-based topology optimization. Rather than assigning a separate design variable to each element in the domain, a continuous approximation of the density field is used. This field is interpolated using finite element shape functions with the scaling coefficients of these shape functions acting as design variables in the optimization problem. Although this technique is known to produce an optimal design that is free of checkerboard patterns, it leads to a large transition region at the boundary of the structure whose size depends on the size of the finite elements used. To systematically reduce the size of this transition region, the finite element mesh is locally refined near the structural boundary and the design is optimized again. Because the mesh implicitly controls the size of the transition region, local refinement and optimization continue until the smallest cells in the mesh reach an acceptable resolution. A local refinement indicator is developed to identify and refine cells lying in the transition region. Local isotropic mesh refinement is used to maintain reasonable cell sizes over most of the design domain and, consequently, keep the computational cost of both the finite element analysis and the optimization down. Anisotropic mesh refinement may also be used with a suitable indicator, though it is not demonstrated here.

While both continuous density parametrization and adaptive mesh refinement have been applied independently to problems in topology optimization, this work applies them simultaneously for the first time. Structural designs produced by this method are shown to be free of checkerboard patterns and contain features whose size is largely controlled by the initial coarse mesh. In addition, the boundary can be sharply identified for additional processing, such as translation to a CAD file in preparation for fabrication and manufacturing. A disadvantage of the current method is that small features may emerge in the refined parts of the mesh after multiple refinements. Computations were carried out using open-source finite-element analysis and optimization tools. Results are presented for a pair of well-known two-dimensional topology optimization test problems. While not demonstrated in this work, the methodology can be extended easily to three-dimensional problems.
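Two ingredients of the approach, shape-function interpolation of a continuous density field and a refinement indicator that flags transition-region cells, can be sketched as follows. The bilinear quad element, the nodal values, and the threshold band are simplifying assumptions for illustration.

```python
import numpy as np

# Sketch of two ingredients described above, under assumed simplifications:
# (1) a density field interpolated by bilinear finite-element shape functions,
# (2) a refinement indicator flagging cells in the solid/void transition band.
# Mesh, thresholds, and nodal values here are illustrative assumptions.

def bilinear_density(rho_nodes, xi, eta):
    """Interpolate density inside a quad cell from its 4 nodal coefficients.
    (xi, eta) are local coordinates in [0, 1] x [0, 1]."""
    n = np.array([(1 - xi) * (1 - eta), xi * (1 - eta),
                  (1 - xi) * eta,       xi * eta])
    return float(n @ rho_nodes)

def needs_refinement(rho_nodes, lo=0.05, hi=0.95):
    """Flag a cell whose interpolated density at the centre lies in the
    transition band (neither clearly void nor clearly solid)."""
    rho_c = bilinear_density(rho_nodes, 0.5, 0.5)
    return lo < rho_c < hi

solid_cell = np.array([1.0, 1.0, 1.0, 1.0])
boundary_cell = np.array([1.0, 1.0, 0.0, 0.0])   # straddles the boundary
print(needs_refinement(solid_cell), needs_refinement(boundary_cell))
```

Cells flagged by such an indicator would be split isotropically, and the optimization rerun on the refined mesh, which shrinks the transition region without refining the whole domain.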

2016;():V011T15A022. doi:10.1115/IMECE2016-65599.

Multiobjective multidisciplinary optimization supports the development of mechatronic systems. A suitable approach is required to achieve short calculation times and sufficient results. All aspects of the system model (mechatronic system, cost model, and time-optimal control problem) are incorporated into one nonlinear optimization model, following the all-at-once approach. The dynamic simulation is discretized in time, and optimization variables are introduced for the state at each time step. Formulating the problem in an algebraic modeling language and solving it by the interior point method allows very fast solution times. This enables a fast turnaround during the preliminary design phase of product development. Sensitivities of the objective with respect to model parameters and design constraints are generated by the solver and used to guide the modeling and development process. Using these sensitivities, the model can be improved where necessary while keeping the model complexity low by simplifying less important parts. As an example, an electromechanical actuating system is considered, in which the rotary motion of a motor is converted into a translational movement with a gate-tape gear.
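The all-at-once idea, in which the state at every time step becomes an optimization variable and the discretized dynamics enter the NLP as equality constraints, can be illustrated with a sketch. The double-integrator model and explicit-Euler discretization below are assumptions for demonstration, not the paper's actuator model.

```python
import numpy as np

# Direct transcription sketch: the dynamic simulation is discretized in time,
# the state at every step becomes an optimization variable, and the dynamics
# enter the NLP as "defect" equality constraints. The double-integrator model
# and Euler scheme are illustrative assumptions.

def defects(x, v, u, dt):
    """Residuals that an NLP solver (e.g. an interior-point method) would
    drive to zero: x[k+1]-x[k]-v[k]*dt and v[k+1]-v[k]-u[k]*dt."""
    dx = x[1:] - x[:-1] - v[:-1] * dt
    dv = v[1:] - v[:-1] - u[:-1] * dt
    return np.concatenate([dx, dv])

# A trajectory that exactly satisfies the discretized dynamics (u = const):
dt, n = 0.1, 11
u = np.ones(n)
v = np.array([k * dt for k in range(n)])            # v[k+1] = v[k] + u*dt
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] + v[k] * dt                     # x[k+1] = x[k] + v[k]*dt
print(np.abs(defects(x, v, u, dt)).max())
```

An interior-point solver would then minimize, say, final time or control effort subject to defects(x, v, u, dt) = 0 and the design constraints, with sensitivities available from the solver as the abstract describes.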

Topics: Design, Optimization
2016;():V011T15A023. doi:10.1115/IMECE2016-65848.

Tolerance allocation is a necessary and important step in product design and development. It involves assigning tolerances to different dimensions such that the manufacturing cost is minimized while the tolerance stack-up conditions remain satisfied. Considering the design functional requirements, manufacturing processes, and dimensional and/or geometrical tolerances, the tolerance allocation problem requires intensive computational effort and time. An approach is proposed to reduce the size of the tolerance allocation problem using design of experiments (DOE). Instead of solving the optimization problem for all dimensional tolerances, it is solved for the significant dimensions only, and the insignificant dimensional tolerances are set at lower control levels. A genetic algorithm is developed and employed to optimize the synthesis problem. A set of benchmark problems is used to test the proposed approach, and results are compared with standard problems from the literature.
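A genetic algorithm for tolerance allocation of this kind can be sketched as below. The reciprocal cost-tolerance model, the stack-up limit, and the GA settings are illustrative assumptions, not the paper's formulation.

```python
import random

random.seed(1)

# Minimal genetic algorithm for tolerance allocation. The cost model
# (reciprocal cost-tolerance curve), worst-case stack-up limit, and GA
# settings are all assumed for demonstration, not taken from the paper.
N_DIMS, T_STACK = 4, 0.40        # 4 dimensions, worst-case stack-up limit
A = [1.0, 2.0, 1.5, 0.5]         # cost coefficients: cost_i = A_i / t_i

def cost(tols):
    if sum(tols) > T_STACK:      # infeasible: penalize violated stack-up
        return 1e6
    return sum(a / t for a, t in zip(A, tols))

def random_ind():
    return [random.uniform(0.01, 0.2) for _ in range(N_DIMS)]

pop = [random_ind() for _ in range(40)]
for _ in range(60):
    pop.sort(key=cost)
    survivors = pop[:10]
    children = []
    for _ in range(30):
        a, b = random.sample(survivors, 2)
        # Blend crossover with a small multiplicative mutation:
        child = [(x + y) / 2 * random.uniform(0.9, 1.1) for x, y in zip(a, b)]
        children.append(child)
    pop = survivors + children
best = min(pop, key=cost)
print(best, cost(best))
```

For this cost model the analytic optimum allocates tolerances in proportion to the square root of each cost coefficient, which gives a useful sanity check on the GA result.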

2016;():V011T15A024. doi:10.1115/IMECE2016-66536.

Increasing the flexibility of energy-intensive industry is gaining importance as the energy market shifts toward volatile energy sources. Flexibility can be achieved by adapting the energy supply processes and integrating new components, e.g. energy storage. Another way to increase flexibility is to optimize plant operation to exploit the full potential of the industrial plant. This paper presents a concept for optimal plant control for more flexible operation, using a two-stage optimization approach that combines a quadratic and a nonlinear optimization problem formulation. The optimization concept is demonstrated by means of a simple model plant. The input parameters are energy prices and production schedules with the corresponding heat demands to be satisfied by the plant’s energy supply system. The output is the optimal control trajectory for the considered plant components. Four different scenarios, with varying plant configurations, are simulated, and the results are discussed in terms of the optimization approach and the impact of the different plant configurations.

Topics: Optimization

Systems, Design, and Complexity: Product and Process Design

2016;():V011T15A025. doi:10.1115/IMECE2016-65108.

This article presents a design method aimed at addressing contradictory requirements during the conceptual design activities of new product development. Of several methods aimed at developing a “good design” (not necessarily solving a contradiction), a general formal method was proposed in the Axiomatic Design Theory (ADT) by N.P. Suh [1]. ADT views design as a process that translates a set of functional requirements into a set of design parameters through a design matrix. The goal of axiomatic design is to force a designer to start from scratch and explore the relationship between the functions of the product and its design characteristics. Because the design characteristics in this approach are determined from scratch, contradictions theoretically will be eliminated at a high level, before the design is developed in more detail. The ADT, however, does not offer specific tools to address contradictory requirements.

Hegel’s Logic claims that “there is absolutely nothing whatever in which we cannot and must not point to contradictions” [2]. In this paper, we argue that with the right focus, contradictions can be leveraged to develop a stronger design solution. While contradictory requirements on product characteristics arise in almost every project, they are most often addressed by searching for a useful compromise in a highly iterative procedure. A more efficient approach, presented in this paper, satisfies both sides of a contradictory requirement (at different moments in time, for different parts of the object, or at different sections of its non-linear characteristic). It is shown that in many cases the most important step is reframing the initial problem, which can be done by listing contradictory requirements and indicating to which parts of the object, moments in time, or stages of its life cycle they apply. Once this is done, the solution can often transpire from the reformulated problem statement, or can be generated using a very limited set of separation principles. An additional option, which has not been previously recommended for resolving contradictions, is separation of contradictory requirements in the space of material or object parameters, by selecting non-linear material or device characteristics. For instance, a medical device needs to meet different requirements in different tests: high elasticity (for the kink test) and at the same time high strength (for the burst test). This means that the target material must meet contradictory requirements on a single characteristic, its stress-deformation curve. The contradiction can be resolved using the fact that high elasticity (the kink test) is required at relatively low deformations, whereas high strength (the burst test) is required at large deformations.
Generally, for the selection of a non-linear characteristic, it is proposed to use a morphological table with non-linear characteristics of a material or of similar devices based on different operating principles (such as different I-V curves of a current-limiting device). Several case studies dealing with different subject matter illustrate the proposed method. The case studies include medical devices (a peripherally inserted central catheter, a vena cava filter), aerodynamic tractor-trailer devices, and current-limiting devices. The case studies are based on real-life projects that resulted in patented designs.

Topics: Design
2016;():V011T15A026. doi:10.1115/IMECE2016-65121.

Traditional engineering design is a customer-centric approach that focuses on maximizing performance objectives and minimizing costs under resource constraints. This approach may be effective in meeting the needs of a particular customer but may be detrimental for a larger group of people. A more inclusive human-centered design attempts to deal with a broader base of customers extending beyond geographical boundaries. Inadequacy of even this approach is apparent, as only humans are the center of this design paradigm.

Life Centered Design (LCD) differs from traditional design methodology. It accounts for all forms of life by creating beneficial symbiotic relationships between humans and other living things, leading to sustainability. Nature has solved, in some form, most problems we face, and it continues to inspire humans. In this context, the LCD approach makes perfect sense as a concept. The challenge, however, is to find solutions using the LCD approach. How do engineers, who avoided biology in the first place, identify potential solutions from nature for solving the problems at hand?

This paper proposes the use of morphological charts in the early design phase to generate potential solutions. Specifically, the objective is to develop a structured system that will enable industry innovators to correlate everyday engineering functions with those available in nature. By developing a morphological chart with this correlation, engineers and designers can now identify and create life-friendly designs.
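A morphological chart of this kind reduces naturally to a mapping from engineering functions to candidate solution principles found in nature, from which concept combinations can be enumerated. The entries below are illustrative examples, not the paper's actual chart.

```python
from itertools import product

# Sketch of a morphological chart correlating engineering functions with
# solution principles observed in nature. Entries are illustrative
# assumptions, not the paper's chart.
chart = {
    "adhere to surface":    ["gecko setae", "burdock hooks", "mussel byssus"],
    "regulate temperature": ["termite mound ventilation", "elephant ears"],
    "move through fluid":   ["shark denticles", "humpback fin tubercles"],
}

def concept_combinations(chart):
    """Enumerate design concepts: one solution principle per function."""
    funcs = sorted(chart)
    return [dict(zip(funcs, combo))
            for combo in product(*(chart[f] for f in funcs))]

concepts = concept_combinations(chart)
print(len(concepts))   # 3 * 2 * 2 candidate concepts
```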

Topics: Design
2016;():V011T15A027. doi:10.1115/IMECE2016-66041.

Providing designers with the relationship between functions and structures is an effective way to accelerate the design process. Currently, the function-structure relationship is usually obtained by theoretically analyzing the transformation from function to structure, as in the FBS model. Although these methods provide reasonable explorations of the function-structure relationship, they are complicated and difficult to apply. Since the function-structure relationships in existing products also follow the design patterns described by these studies, extracting the relationship from products rather than from theoretical analysis can likewise convey the essential information about which structures can satisfy a required function. Therefore, this paper presents an estimation approach to obtain a probabilistic description of the function-structure relationship in products. First, a product, its structures, and the functions it contains are described with a product vector, structure vector, and function vector, respectively, and the relationships among them are defined. Then, a statistical strategy is proposed that treats all products as the population and the gathered products as a sample, and defines the function-structure relationship as the conditional probability of the appearance of a structure given a function in the gathered products. Maximum likelihood estimation (MLE) is then employed to estimate this conditional probability. Compared with other methods, the proposed approach replaces theoretical analysis with mining existing products, which avoids the complicated modeling and description of the function-structure relationship. In the case study, several experiments were carried out, and a plug-in tool was developed to apply the extracted function-structure relationship in product design. The results show the feasibility of the proposed approach and demonstrate its practical value in engineering.
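For this kind of count model, the MLE of the conditional probability of a structure given a function reduces to the fraction of function-containing products in the sample that also contain the structure. The tiny product "database" below is a made-up assumption for illustration; the paper's vector encodings and corpus are not reproduced here.

```python
# Count-based maximum-likelihood estimate of P(structure | function) from a
# sample of products. The product data below are illustrative assumptions.
products = [
    {"functions": {"transmit torque"}, "structures": {"gear", "shaft"}},
    {"functions": {"transmit torque"}, "structures": {"belt", "pulley"}},
    {"functions": {"transmit torque"}, "structures": {"gear", "key"}},
    {"functions": {"store energy"},    "structures": {"spring"}},
]

def p_structure_given_function(structure, function, sample):
    """MLE: (# products containing both) / (# products containing function)."""
    with_f = [p for p in sample if function in p["functions"]]
    if not with_f:
        return 0.0
    both = sum(1 for p in with_f if structure in p["structures"])
    return both / len(with_f)

print(p_structure_given_function("gear", "transmit torque", products))
```

Here two of the three torque-transmitting products contain a gear, so the estimate is 2/3; a design tool can rank candidate structures for a required function by this probability.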

2016;():V011T15A028. doi:10.1115/IMECE2016-66497.

A uniquely configured suspension system, manufactured primarily of lightweight composite materials, is required for the University of Johannesburg’s solar powered race vehicle. For this design to reach successful completion, an assessment framework is introduced that scrutinises and analyses the different stages of development. The focus of this paper is on the design and development of a prototype composite vehicle suspension system and on assessing the framework implemented to control the research and development process of composite components.

2016;():V011T15A029. doi:10.1115/IMECE2016-66653.

The absolute gravitational acceleration (g) is generally measured by observing a free-falling test mass in a vacuum chamber using laser interference. Usually the free-falling object's trajectory is obtained by timing the zero-crossings of the interference fringe signal. The traditional way to time the zero-crossings is the electronic counting method, whose resolution is limited in principle. In this paper, a fringe signal processing method with multi-sample zero-crossing detection based on a Digital Signal Processor (DSP) is proposed and realized for application in absolute gravimeters. The principle and design of the fringe signal processing method are introduced. The measuring precision is evaluated both theoretically and through numerical software simulations with MATLAB®, and verified by hardware-simulated free-fall experiments. The results show that the absolute error of the gravity acceleration measurement introduced by the fringe signal processing method is less than 0.5 μGal (1 μGal = 1×10−8 m/s2), and the impact on the standard deviation is about 2 μGal. This method can effectively reduce the systematic error of the traditional electronic counting method and satisfies the requirements for precision and portability, especially for field-ready absolute gravimeters.
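Multi-sample zero-crossing timing can be sketched on a synthetic chirp-like fringe signal: interpolating between the two samples that bracket each crossing refines the crossing time well beyond the sample period. The sample rate, start frequency, and chirp rate below are illustrative assumptions, not gravimeter values.

```python
import numpy as np

# Sketch of multi-sample zero-crossing timing on a chirp-like fringe signal.
# Linear interpolation between the samples bracketing each crossing refines
# the timing beyond a simple counter; signal parameters are assumed.
fs = 1.0e6                         # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
f0, k = 1000.0, 2.0e5              # start frequency and chirp rate (assumed)
s = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Indices where the signal crosses zero going upward:
idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
# Refine each crossing time by interpolating between the two samples:
t_cross = t[idx] - s[idx] * (t[idx + 1] - t[idx]) / (s[idx + 1] - s[idx])
print(len(t_cross), t_cross[:3])
```

In an actual gravimeter, fitting the refined crossing times against fringe number (i.e., fall distance) with a quadratic-in-time trajectory model would then yield g.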

2016;():V011T15A030. doi:10.1115/IMECE2016-66743.

Boom lifts are useful throughout a variety of industries, such as manufacturing, maintenance service, real estate management, and construction. Boom lifts are designed to allow operator mobility at high elevations and are often used as a substitute for traditional ladders, man-baskets on lift trucks and scaffolding. Although boom lifts are very practical and efficient in allowing personnel to work at high elevations and in areas with limited access, several known hazards exist with boom lifts such as falls, machine tipping, crushes, collapse of machine and electrocution.

Although boom lift operator manuals and safety literature discuss the aforementioned hazards, they either omit or only incompletely discuss the hazard of suddenly released stored energy, in which stored energy is rapidly converted from potential energy to kinetic energy through the boom to the operator platform.

One example of rapid conversion of potential energy to kinetic energy involves a boom lift driven over a sudden drop-off such as a curb. A relatively low drop-off can be amplified substantially by the lever arm of the boom; as a result, the operator platform, and the operator(s) within it, rapidly accelerate. A second example is when the operator platform is snagged on an external structure and continued hydraulic movement builds up potential energy within the boom. This built-up potential energy can suddenly and unexpectedly release if the platform springs free from entanglement with the structure. Such a release causes the boom, the platform, and the operator(s) to accelerate rapidly.

During the rapid acceleration experienced in both examples, operators can potentially be and have historically been violently thrown against the railing of the platform, ejected from the platform, and/or crushed by any nearby overhead obstacles.

The purpose of this paper is to address, analytically quantify, and propose engineering solutions to guard against the sudden conversion of potential energy to kinetic energy on boom lifts. This hazard is currently not discussed or incompletely discussed in boom lift operator manuals and safety literature.

Analytical techniques are used to quantify the rapid acceleration experienced by operator platforms and operators upon the sudden conversion of potential to kinetic energy in various scenarios. Further, the principles of safety engineering are utilized to determine methods to eliminate or reduce the frequency and severity of injuries associated with the sudden conversion of potential to kinetic energy on boom lifts.

This engineering and safety engineering analysis demonstrates that the sudden conversion of energy on boom lifts can rapidly accelerate the operator platform and operator(s) within. Further, there are technologically feasible designs that protect operators against the sudden conversion of potential energy to kinetic energy on boom lifts. Such improved, safer designs are more effective at eliminating or reducing the frequency and severity of injuries than simply warning against the hazards.
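The lever-arm amplification in the curb drop-off example can be illustrated with a back-of-envelope calculation: a small vertical drop at the wheels pitches the chassis, and the boom's reach multiplies that pitch into a much larger excursion at the platform. All numbers (wheelbase, boom reach, drop height) are assumed for illustration, not taken from the paper.

```python
import math

# Back-of-envelope sketch of the curb drop-off example: a small drop at the
# wheels is amplified at the platform by the boom's lever arm. All numbers
# below are assumed example values.
wheelbase = 2.5        # m, distance between axles
boom_reach = 18.0      # m, horizontal distance from pivot to platform
drop = 0.15            # m, curb height at the front wheels

# Chassis pitch angle produced by the front wheels dropping off the curb:
pitch = math.atan2(drop, wheelbase)
# Vertical excursion of the platform implied by that pitch, via the lever arm:
platform_drop = boom_reach * math.tan(pitch)
amplification = platform_drop / drop
print(round(platform_drop, 2), round(amplification, 1))
```

With these assumed numbers a 15 cm curb becomes roughly a metre of platform excursion, which is why the platform and operator(s) experience such rapid acceleration.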

Topics: Hazards
2016;():V011T15A031. doi:10.1115/IMECE2016-66910.

Assembly time estimation is a key factor in evaluating the performance of an assembly process. The overall goal of this study is to develop an efficient assembly time estimation method by generating the prediction model from an experimental design. This paper proposes dividing an assembly operation into four actions: a) part movement, b) part installation, c) securing operations, and d) subassembly rotations. The focus of this paper is designing a time estimation model for the securing operation. To model securing times, a design of experiments is applied to collect experimental data from physical assembly experiments performed on products that are representative of common assembly processes. The Box-Behnken design (BBD) is an experimental design supporting response surface methodology, used here to interpret and estimate a prediction model for the securing operations. The goal is to use a quadratic model, which contains squared terms and variable interactions, to study the effects of different engineering parameters on securing time. The experiment is focused on individual-operator assembly operations. Various participants perform the experiment on representative product types, including a chainsaw, a lawn mower engine, and an airplane seat. In order to model the assembly time with its different influence factors, mathematical models were estimated by applying the stepwise regression method in MATLAB. The second-order equations representing the securing time are expressed as functions of six input parameters. The models are trained using all combinations of data required by the BBD method and predict the held-back data within a 95% confidence interval. Overall, the results indicate that the predicted values were in good agreement with experimental data, with an adjusted R-squared value of 0.769 for the estimated securing time. This study also shows that the BBD can be efficiently applied to assembly time modeling, providing an economical way to build an assembly time model with a minimum number of experiments.
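Fitting a second-order response-surface model of the kind produced from Box-Behnken data can be sketched with ordinary least squares. For brevity this sketch uses two coded factors and a synthetic, assumed "securing time" model rather than the paper's six-factor data.

```python
import numpy as np

# Sketch of fitting a second-order response-surface model by least squares.
# The synthetic "securing time" data below are fabricated assumptions; the
# real model in the paper has six factors.
rng = np.random.default_rng(0)
n = 30
x1 = rng.uniform(-1, 1, n)       # coded factor levels, as in a BBD
x2 = rng.uniform(-1, 1, n)
# Assumed true model: t = 5 + 2*x1 - x2 + 0.5*x1*x2 + 1.5*x1^2
t = 5 + 2 * x1 - x2 + 0.5 * x1 * x2 + 1.5 * x1**2

# Design matrix with intercept, linear, interaction, and squared terms:
X = np.column_stack([np.ones(n), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)
print(np.round(coef, 3))
```

Stepwise regression, as used in the study, would additionally drop terms whose coefficients are statistically insignificant (here the x2-squared term).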

2016;():V011T15A032. doi:10.1115/IMECE2016-67292.

The study illustrated in this paper analyzes knowledge management issues related to product development, focusing on the domains in which knowledge-based engineering (KBE) systems and design automation (DA) tools can be adopted. In past studies, many KBE and DA systems have been developed in fields such as automotive, aerospace, energy, materials, and manufacturing; the information treated in these studies concerns data related to specific designs, for example of automotive engine components, aircraft structures, energy plants, advanced materials, and manufacturing or assembly lines. In all of these domains, the organization and formalization of knowledge is a critical issue. Adopting a good strategy for managing data and information about products and processes benefits the product development process. Different methodologies are described in the literature; two of the most widely used are the Object-Oriented (OO) and Ontology Engineering (OE) approaches. The former is among the most common in the industrial domain, with many implementations in recent years. The latter is more commonly used in other fields, such as bio-engineering, for managing experimental data; few implementations in industrial engineering have been reported. The article gives a brief description of the state of the art in Knowledge-Based Engineering and Ontology Engineering. A case study is described, and the benefits and disadvantages of the different methodologies are discussed.


Systems, Design, and Complexity: Systems and Complexity

2016;():V011T15A033. doi:10.1115/IMECE2016-65667.

Although remarkable progress has been made in the field of explicit knowledge, research on tacit knowledge is still scarce. This paper takes up embodied knowledge, such as bicycle riding, as one kind of tacit knowledge. Because embodied knowledge cannot be articulated and verbalized, it has to be transferred to another person through practice. How embodied knowledge can be acquired more effectively through practice, however, remains an open question.

Indeed, there are efforts to help learners acquire embodied knowledge by showing videos or through on-the-job training (OJT). But since the relevant features or control points are not explicit, it is very difficult for a learner to acquire a good sense for the judgments and decisions needed to cope with changing situations.

Although there are many approaches to multivariate analysis, very few provide a holistic perspective. In this sense, a pattern-based approach is better suited than the alternatives.

This paper points out that the pattern-based Recognition Taguchi (RT) approach within the Mahalanobis Taguchi System (MTS) is a promising and versatile tool for helping a learner acquire embodied knowledge, because it accounts for person-to-person differences in body behavior while providing a holistic perspective.
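The Mahalanobis-distance core of the MTS approach can be sketched as follows: a "unit space" of reference observations defines a mean and covariance, and a new feature vector is scored by its Mahalanobis distance from that space. The synthetic three-feature data stand in for reference body-motion measurements and are an assumption for illustration.

```python
import numpy as np

# Sketch of the Mahalanobis-distance scoring at the core of MTS. A "unit
# space" of reference feature vectors defines a mean and covariance; a
# learner's feature vector is scored by its distance from that space.
# The feature data here are synthetic assumptions.
rng = np.random.default_rng(42)
unit_space = rng.normal(0.0, 1.0, size=(200, 3))   # reference observations

mean = unit_space.mean(axis=0)
cov = np.cov(unit_space, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

near = mahalanobis(mean)                 # at the centre of the unit space
far = mahalanobis(mean + np.array([5.0, 5.0, 5.0]))
print(round(near, 3), round(far, 3))
```

A learner whose motion features score close to the expert unit space would be judged to have acquired the skill pattern; large distances highlight which practice situations still deviate.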

2016;():V011T15A034. doi:10.1115/IMECE2016-66669.

Stewart-Gough platforms have been used as the basis for multiaxial test machines in multiple applications. Their stiffness, coupled with the ability to simultaneously create combined loading (tensile/bending/twisting), enables them to excite material parameters in any conceivable coupling. For engineered materials, whose properties are often nonlinear and nonisotropic, such loadings are necessary to understand the as-built material parameters inherent in these designed systems. However, the design of a Stewart-Gough platform is nontrivial, as the locations of its singular configurations are poorly understood. In the proximity of these singular configurations, the loading applied by the system is difficult to control precisely due to large gradients in the generated forces. This work uses a combination of simulation and surrogate modeling to establish a “map” of the singular configurations of the Stewart-Gough platform. As a result, a “home” location where the system applies zero loading to a specimen is found that maximizes the distance from any singular configuration, and a greater understanding of the nature of singularities in parallel robotic structures is obtained.
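One common way to probe singularity proximity is through the platform's (inverse) Jacobian, whose i-th row is the leg's unit vector together with its moment, so that a vanishing determinant flags a singular pose. The anchor angles, radii, and height sweep below are assumed example values, not the paper's machine or its surrogate model.

```python
import numpy as np

# Sketch of mapping singularity proximity for a Stewart-Gough platform. Each
# row of the (inverse) Jacobian is [n_i, p_i x n_i]: the leg's unit vector
# and its moment about the base frame origin; |det J| -> 0 flags a singular
# pose. The geometry and the height sweep are assumed example values.
BASE_DEG = [0, 40, 120, 160, 240, 280]     # base anchor angles (assumed)
PLAT_DEG = [60, 80, 180, 200, 300, 320]    # platform anchor angles (assumed)

def anchors(radius, angles_deg, z):
    a = np.deg2rad(angles_deg)
    return np.column_stack([radius * np.cos(a), radius * np.sin(a),
                            np.full(len(angles_deg), z)])

base = anchors(1.0, BASE_DEG, 0.0)

def singularity_measure(height):
    plat = anchors(0.5, PLAT_DEG, height)
    rows = []
    for b, p in zip(base, plat):
        n = (p - b) / np.linalg.norm(p - b)      # leg unit vector
        rows.append(np.concatenate([n, np.cross(p, n)]))
    return abs(np.linalg.det(np.array(rows)))

heights = np.linspace(0.2, 2.0, 10)
measures = [singularity_measure(h) for h in heights]
# A "home" pose would be chosen where the measure is largest:
home = heights[int(np.argmax(measures))]
print(home, [round(m, 4) for m in measures])
```

A surrogate model fitted to such a measure over the full six-dimensional pose space is what allows the "map" of singular configurations described in the abstract.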

Topics: Design
2016;():V011T15A035. doi:10.1115/IMECE2016-67040.

This paper represents a step toward a more complete framework for safety analysis early in the design process, specifically during functional modeling. Such a framework would be especially useful when designing in a new domain, where many functions have yet to be solved, or for a problem where the functional architecture space is large. To effectively analyze the inherent safety of a design described only by its functions and flows, we require some way to simulate it.

As an already-available function failure reasoning tool, Function Failure Identification and Propagation (FFIP) utilizes two distinct system models: a behavioral model, and a functional model. The behavioral model simulates system component behavior, and FFIP maps specific component behaviors to functions in the functional model. We have created a new function-failure reasoning method which generalizes failure behavior directly to functions, by which the engineer can create functional models to simulate the functional failure propagations a system may experience early in the design process without a separate behavioral model.

We give each basis-defined function-flow element a pre-defined behavior consisting of nominal and failure operational modes, and the resultant effect each mode has on its function's connected flows. Flows are represented by a two-variable object reminiscent of a bond from bond graphs: the state of each flow is represented by an effort variable and a flow-rate variable. The functional model may be thought of as a bond graph in which each functional element is a state machine. Users can quickly describe functional models with consistent behavior by constructing them as Python NetworkX graph objects, so that they may quickly model multiple functional architectures of their proposed system. We are implementing the method in Python to be used in conjunction with other function-failure analysis tools.

We also introduce a new method for the inclusion of time in a state machine model, so that dynamic systems may be modeled as fast-evaluating state machines. State machines have no inherent representation of time, while physics-based models simulate along repetitive time steps. We use a middle-ground pseudo-time approach: state transitions may impose a time delay once all of their connected flow conditions are met. Once the entire system model has reached steady state in a timeless sense, the clock is advanced all at once to the first time at which a reported delay ends. Simulation then resumes in the timeless sense.

We seek to demonstrate this modeling method on an electrical power system functional model used in previous FFIP studies, in order to compare the failure scenario results of an exhaustive fault combination experiment with similar results using the FFIP method.
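The pseudo-time clock advance described above can be sketched as a small state-machine simulation: delayed failure transitions are armed when their conditions are met, the model settles in a timeless sense, and the clock then jumps to the earliest delay expiry. The two-element failure model and its delays are invented for illustration and are not the paper's electrical power system model.

```python
# Sketch of the pseudo-time mechanism: functional elements are state
# machines; transitions whose flow conditions are met impose a time delay,
# and once the model is at steady state the clock jumps to the earliest
# delay expiry. The two-element model below is an assumed illustration.
class Element:
    def __init__(self, name, delay, triggered_by=None):
        self.name, self.delay, self.triggered_by = name, delay, triggered_by
        self.state, self.fail_time = "nominal", None

    def evaluate(self, clock, elements):
        """Arm a delayed failure transition when its condition is met."""
        if self.state != "nominal" or self.fail_time is not None:
            return
        src = elements.get(self.triggered_by) if self.triggered_by else None
        if self.triggered_by is None or (src and src.state == "failed"):
            self.fail_time = clock + self.delay

def simulate(elements, horizon):
    clock = 0.0
    while clock <= horizon:
        for e in elements.values():          # propagate until steady state
            e.evaluate(clock, elements)
        pending = [e.fail_time for e in elements.values()
                   if e.fail_time is not None and e.state == "nominal"]
        if not pending:
            break
        clock = min(pending)                 # jump the clock all at once
        for e in elements.values():
            if e.fail_time is not None and e.fail_time <= clock:
                e.state = "failed"
    return clock, {n: e.state for n, e in elements.items()}

# Assumed scenario: the pump fails after 2.0 s; loss of flow then fails the
# valve 0.5 s later.
model = {"pump": Element("pump", 2.0),
         "valve": Element("valve", 0.5, triggered_by="pump")}
result = simulate(model, horizon=10.0)
print(result)
```

Because the clock only ever jumps between steady states, an exhaustive fault-combination experiment evaluates far faster than stepping a physics-based model through uniform time increments.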

