
ASME Conference Presenter Attendance Policy and Archival Proceedings

2016;():V01AT00A001. doi:10.1115/DETC2016-NS1A.

This online compilation of papers from the ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2016) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference by an author of the paper, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.


36th Computers and Information in Engineering Conference: Advanced Modeling and Simulation (AMS General)

2016;():V01AT02A001. doi:10.1115/DETC2016-59027.

This article proposes the use of polytopes in HV-description to solve tolerance analysis problems. Polytopes are defined by a finite set of half-spaces representing geometric, contact, or functional specifications. However, the list of vertices of the polytopes is useful for computing other operations, such as Minkowski sums. This paper therefore proposes a truncation algorithm to obtain the V-description of polytopes in ℝⁿ from their H-description. It is detailed how intersections of polytopes can be calculated by means of the truncation algorithm. Minkowski sums can also be computed using this algorithm by making use of the duality property of polytopes: a Minkowski sum can be calculated by intersecting half-spaces in the dual space. Finally, the approach based on HV-polytopes is illustrated by the tolerance analysis of a real industrial case using the open-source software PolitoCAT and politopix.
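
As a concrete illustration of the H- and V-descriptions discussed above, the following sketch converts a small H-polytope to its vertex list and forms a Minkowski sum by enumerating pairwise vertex sums. This is a generic SciPy-based illustration, not the PolitoCAT/politopix implementation, and the square polytope is an arbitrary example.

```python
# Minimal sketch: H-description -> V-description, then a Minkowski sum of two V-polytopes.
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

def h_to_v(halfspaces, interior_point):
    """Vertices of the polytope {x : A x + b <= 0}, halfspaces given as rows [A | b]."""
    return HalfspaceIntersection(halfspaces, interior_point).intersections

def minkowski_sum(verts_a, verts_b):
    """Minkowski sum of two V-polytopes: convex hull of all pairwise vertex sums."""
    sums = np.array([va + vb for va in verts_a for vb in verts_b])
    return sums[ConvexHull(sums).vertices]

# Unit square centered at the origin in H-description: x <= 0.5, -x <= 0.5, y <= 0.5, -y <= 0.5
square_h = np.array([[ 1.0,  0.0, -0.5],
                     [-1.0,  0.0, -0.5],
                     [ 0.0,  1.0, -0.5],
                     [ 0.0, -1.0, -0.5]])
verts = h_to_v(square_h, np.array([0.0, 0.0]))
print(minkowski_sum(verts, verts))   # vertices of a square of side 2
```

The duality-based route described in the abstract instead performs the sum by intersecting half-spaces in the dual space, avoiding the pairwise vertex enumeration used in this sketch.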

2016;():V01AT02A002. doi:10.1115/DETC2016-59046.

There are many real-life processes whose smart control requires processing context information. Though the issue of processing varying context information has been addressed in the literature, domain-independent solutions that can support reasoning and decision making according to time-varying process scenarios in multiple application fields are scarce. This paper proposes a method for dynamic context computation concerning spatial and attributive information. Context is interpreted as a body of information dynamically created by a pattern of entities and relationships over a history of situations. Time is conceived as a causative force capable of changing situations and acting on people and objects. The invariant and variant spatial information is captured by a two-dimensional spatial feature representation matrix. The time-dependent changes in the context information are computed based on a dynamic context information management hyper-matrix. This humble but powerful representation lends itself to quasi-real-time computing and is able to provide information about foreseeable happenings over multiple situations. The paper uses the practical case of evacuation of a burning building both as an explorative case for conceptualizing the functionality of the computational mechanism and as a demonstrative and testing application. Our intention is to use the dynamic context computation mechanism as a kernel component of a reasoning platform for informing cyber-physical systems.

Topics: Computation
2016;():V01AT02A003. doi:10.1115/DETC2016-59289.

NASA achieved an important milestone in aircraft design in the past year by flight testing a shape-shifting wing. The design moved the rear region of the wing through large deflections to provide flap operation for takeoff and landing. The next step is in-flight surface modification of the entire wing. Underlying the three-dimensional wing is the two-dimensional airfoil shape that anchors the wing's aerodynamic performance. Many parametric definitions of airfoils have been used for optimizing airfoil and wing aerodynamics, but these analyses were made for fixed wing configurations. For flexible airfoils, it is important to recognize that the lofting of shapes in flight will happen around a parent airfoil. From a practical perspective, it is likely that only a narrow range of shapes will be possible because of limited actuator locations. With this in mind, a new Bézier parameterization scheme is introduced that can reproduce current airfoils with the assurance that the original aerodynamics is maintained, if not improved. Two Bézier curves are used to define the airfoil: one for the top surface and the other for the bottom surface. It is shown that this parameterization lends itself to fixed abscissa placement of control points for all airfoils, identifying possible actuator locations. Bézier curves respond globally to local variations in geometry, so a few points can generate an effective flexible airfoil. Coupling these changes with a simple analysis program can easily generate aerodynamic sensitivity information for physical shape changes based on the changes in a limited set of control points. This will provide the ability to create a shape based on a new aerodynamic demand while in flight. This paper presents the development of the parameterization scheme only.
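
For readers unfamiliar with the parameterization, the sketch below evaluates an airfoil from two Bézier curves (upper and lower surfaces) whose control-point abscissae are held fixed. The control points are hypothetical placeholders, not the scheme or values from the paper.

```python
# Minimal sketch of a two-curve Bezier airfoil with fixed control-point abscissae.
import numpy as np
from math import comb

def bezier(control_pts, n_samples=101):
    """Evaluate a Bezier curve from (m+1) control points of shape (m+1, 2)."""
    m = len(control_pts) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    basis = np.stack([comb(m, i) * t**i * (1 - t)**(m - i) for i in range(m + 1)], axis=1)
    return basis @ np.asarray(control_pts)

# Hypothetical control points: x-locations fixed (candidate actuator stations);
# only the ordinates would be varied to morph the airfoil in flight.
upper = [(0.0, 0.0), (0.0, 0.06), (0.3, 0.10), (0.7, 0.05), (1.0, 0.0)]
lower = [(0.0, 0.0), (0.0, -0.04), (0.3, -0.05), (0.7, -0.01), (1.0, 0.0)]
airfoil = np.vstack([bezier(upper)[::-1], bezier(lower)])   # closed loop TE -> LE -> TE
print(airfoil.shape)
```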

2016;():V01AT02A004. doi:10.1115/DETC2016-59309.

In the past two decades, various CAE technologies and tools have been developed for design, development and specification of the graphical user interface (GUI) of consumer products both in and outside the automotive industry. The growing trend of deploying speech interfaces by automotive manufacturers and the resulting usage of speech requires that the work be extended to speech interface modeling — an area where both technologies and methodologies are lacking.

This paper presents our recent work aimed at developing a speech interface integrated with an existing GUI modeling system. A multi-contour seat was utilized as the testbed for the work. Our prototype allows one to adjust the multi-contour seat with a touchscreen GUI, a steering wheel mounted button coupled with an instrument cluster display, or a speech interface.

The speech interface modeling began with an initial language model, which was developed by interviewing both expert and novice users. The interviews yielded a base corpus and the linguistic information necessary for an initial speech grammar model and dialog strategy. After the module was developed, it was integrated into the existing GUI modeling system in such a way that the human voice is treated as a standard input for the system, similar to a press on the touchscreen. The multimodal prototype was used for two customer clinics. In each clinic, we asked a subject to adjust the multi-contour seat using different modalities, including the touchscreen, steering wheel mounted buttons, and the speech interface. We collected both objective and subjective data, including task completion time and customer feedback. Based on the clinic results, we refined both the language model and the dialogue strategy. Our work has proven effective for developing a speech-centric, multimodal human machine interface.

2016;():V01AT02A005. doi:10.1115/DETC2016-59351.

Motivated by the need for a torpedo to detect near-field targets in the final stage of guidance, a non-coaxial (transmitter and receiver not on the same axis) single-beam scanning detection and ranging system has been designed for use in a torpedo. To study this detection system, this paper proposes a Monte Carlo simulation method for the system. The backscattering signal and target echo signal in seawater are simulated, and the Signal-to-Backscattering-Noise Ratio (SBNR) is then calculated. Furthermore, the relationship between maximum detection distance and system parameters is calculated based on a minimum-SBNR criterion. Finally, the optimal system parameters are determined to obtain the maximum detection range. To verify the correctness of the theoretical models, an underwater laser detection optical simulation system was designed to perform target detection experiments in a basin. Comparative analyses show that the simulation results fit the experimental data well, thus verifying the correctness of the semi-analytical Monte Carlo model. The optimal parameters of the single-beam scanning detection system can be determined according to the simulation and experimental results. The designed underwater laser detection system provides a new method for a torpedo to detect underwater targets in the final stage of guidance.

2016;():V01AT02A006. doi:10.1115/DETC2016-59423.

Many sensing modalities used in robotics collect information in polar coordinates. For mobile robots and autonomous vehicles these modalities include radar, sonar, and laser range finders. In the context of medical robotics, ultrasound imaging and CT both collect information in polar coordinates. Moreover, every sensing modality has associated noise. Therefore, when the position of a point in space is estimated in the reference frame of the sensor, that position is replaced by a probability density expressed in polar coordinates. If the sensor moves from one location to another and the same point is sensed, then the two associated probability densities can be “fused” together to obtain a better estimate than either one individually. Here we derive the equations for this fusion process in polar coordinates. The result involves the computation of integrals of three Bessel functions. We derive new recurrence relations for the efficient computation of these Bessel-function integrals to aid in the information-fusion process.
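
The closed-form treatment in the paper reduces the fusion to Bessel-function integrals; as a purely numerical point of comparison, the sketch below fuses two densities on a polar grid by direct multiplication and renormalization. The noise levels and sensed point are illustrative, not taken from the paper.

```python
# Minimal numerical sketch of fusing two position densities expressed in polar coordinates.
import numpy as np

r = np.linspace(0.01, 10.0, 400)
th = np.linspace(-np.pi, np.pi, 360)
R, TH = np.meshgrid(r, th, indexing="ij")
X, Y = R * np.cos(TH), R * np.sin(TH)

def gaussian_density(x0, y0, sigma):
    # isotropic Gaussian written on the polar grid (unnormalized)
    return np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * sigma**2))

# the same point sensed from two poses with different noise levels (illustrative values)
p1 = gaussian_density(3.0, 1.0, 0.8)
p2 = gaussian_density(3.2, 0.7, 0.5)
fused = p1 * p2
fused /= np.sum(fused * R) * (r[1] - r[0]) * (th[1] - th[0])   # area element r dr dθ
# the fused density is tighter than either input, as expected from Bayesian fusion
print(fused.max())
```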

2016;():V01AT02A007. doi:10.1115/DETC2016-59880.

Emerging applications for shape memory alloys, such as actuation and energy absorption devices, require better understanding of the mechanics and failure behaviors associated with these materials. In this paper we study the inelastic response of a NiTi alloy under combined thermomechanical actuation. In particular, failure due to strains generated by cooling under isobaric and isothermal conditions is investigated. Strain measurements are performed using a new technique known as Direct Strain Imaging, which provides full field strains of both higher accuracy and higher spatial resolution than previously achievable. The experimental data support the conclusion that the type of fracture observed may be attributed to stress redistribution, caused by a phase transformation.

2016;():V01AT02A008. doi:10.1115/DETC2016-60402.

Mesh generation difficulties can be avoided when a background mesh, rather than a mesh that conforms to the geometry, is used for the analysis. The geometry is represented by equations, is independent of the mesh, and is immersed in the background mesh. The solution to boundary value problems is approximated or piecewise interpolated using the background mesh. The main challenge is in applying the boundary conditions, because the boundaries may not have any nodes on them. The implicit boundary method has been used for linear static and dynamic analysis and has been shown to be an effective approach for imposing boundary conditions, but it has never been applied to nonlinear problems. In this paper, this approach is extended to large deformation nonlinear analysis using the Total Lagrangian formulation. The equations are solved using the widely used modified Newton-Raphson method with loads applied over many load steps. Several test examples are studied and compared with traditional finite element analysis software for verification.
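
The load-stepping strategy mentioned above is standard; a minimal scalar sketch of modified Newton-Raphson with incremental loading is shown below, with a toy stiffening-spring model standing in for the implicit-boundary finite element system.

```python
# Minimal sketch of modified Newton-Raphson with loads applied over many load steps.
def internal_force(u):          # nonlinear "stiffening spring", illustrative only
    return 100.0 * u + 50.0 * u**3

def tangent_stiffness(u):
    return 100.0 + 150.0 * u**2

def solve(total_load, n_steps=10, tol=1e-10, modified=True):
    u = 0.0
    for step in range(1, n_steps + 1):
        f_ext = total_load * step / n_steps        # apply the load incrementally
        k = tangent_stiffness(u)                   # modified NR: freeze K within the step
        for _ in range(50):
            residual = f_ext - internal_force(u)
            if abs(residual) < tol:
                break
            if not modified:
                k = tangent_stiffness(u)           # full NR would update K every iteration
            u += residual / k
    return u

print(solve(500.0))
```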

Topics: Deformation, Modeling
2016;():V01AT02A009. doi:10.1115/DETC2016-60508.

There are many instances where creating finite element analysis (FEA) models requires extensive time and effort. Such instances include finite element analysis of tree branches with complex geometries and varying mechanical properties. In this paper, we discuss the development of Immediate-TREE, a program and its associated Graphical User Interface (GUI) that provides researchers with fast and efficient finite element analysis of tree branches. The process automates finite element analysis through computer-generated Python files. Immediate-TREE uses tree branch data (geometry, mechanical properties, etc.) obtained through experiments and generates Python files, which are then run in finite element analysis software (Abaqus) to complete the analysis. Immediate-TREE is approximately 240 times faster than creating the model directly in the FEA software (Abaqus). The process used to develop Immediate-TREE can be applied to finite element analysis of other biological systems such as bone and tooth.


36th Computers and Information in Engineering Conference: AMS: Computational Multiphysics Applications

2016;():V01AT02A010. doi:10.1115/DETC2016-59466.

Hand impairments represent a significant category of injuries, which can be limiting and impeding in the execution of Activities of Daily Living (ADLs). As is widely appreciated in the scientific literature, a great number of solutions have been proposed in recent years for rehabilitating and assisting the patient in both mechanical tasks (e.g., object manipulation) and social tasks (e.g., shaking hands). Among the numerous approaches, robotic Hand Exoskeleton Systems (HES) represent a vast class of solutions to the problem, as they have several advantages. In contrast to functional electrical stimulation techniques, for example, HES devices are less invasive and induce less muscular fatigue.

In the present work, the authors propose the redesign of a HES robotic device developed at the University of Florence by means of Topological Optimization (TO) techniques. Although the existing device is already functional and tested, it is still characterized by high bulk and mass with respect to the functional requirements. The redesign process has been directed toward future production of the final prototype in a titanium alloy by means of an Electron Beam Melting (EBM) 3D printing machine. The entire procedure was carried out starting from a complete kinematic and dynamic study, followed by the application of TO techniques, and was finally validated by Finite Element Method (FEM) analysis. A single-finger mechanism prototype has been fabricated through additive manufacturing (by means of PolyJet technology) to test the ergonomics and aesthetics of the device.

The problem is introduced and contextualized in the Introduction section, the methodology is then explained in detail, and the results are presented. The Conclusion section discusses the process and the results, and briefly outlines possible improvements and developments.

2016;():V01AT02A011. doi:10.1115/DETC2016-59620.

In this paper we first present the derivation of the governing equations that describe the multiphysics behavior of Ionic Polymer Metal Composite (IPMC) plates. This is done in a manner that accounts for their non-linear large-deflection deformation under the influence of mechanical, electrical, thermal, and multicomponent mass transport fields. We subsequently present numerical solutions of this system of equations via the finite element method for the case of a specific rectangular plate. Emphasis is given to identifying the multiphysics-based wrinkling instability behavior that manifests near the corners of these plates due to multiphysics stimuli.

2016;():V01AT02A012. doi:10.1115/DETC2016-59651.

Systems composed of rigid bodies interacting through frictional contact are manifest in several science and engineering problems. The number of contacts can be small, such as in robotics and geared machinery, or large, such as in terramechanics applications, additive manufacturing, farming, the food industry, and the pharmaceutical industry. Currently, there are two popular approaches for handling the frictional contact problem in dynamic systems. The penalty method calculates the frictional contact force based on the kinematics of the interaction, some representative parameters, and an empirical force law. Alternatively, the complementarity method, based on a differential variational inequality (DVI), enforces non-penetration of rigid bodies via a complementarity condition. This contribution concentrates on the latter approach and investigates the impact of an anti-relaxation step that improves the accuracy of the frictional contact solution. We show that the proposed anti-relaxation step incurs a relatively modest cost to improve the quality of a numerical solution strategy that poses the calculation of the frictional contact forces as a cone-complementarity problem.


36th Computers and Information in Engineering Conference: AMS: Inverse Problems in Science and Engineering

2016;():V01AT02A013. doi:10.1115/DETC2016-59306.

A nontraditional approach to the nonlinear inverse boundary value problem is illustrated using multiple examples of the Poisson equation. The solutions belong to a class of analytical solutions defined through Bézier functions. The solution represents a smooth function of high order over the domain. The same procedure can be applied to both the forward and the inverse problem. The solution is obtained as a local minimum of the residuals of the differential equations over many points in the domain. The Dirichlet and Neumann boundary conditions can be incorporated directly into the function definition. The primary disadvantage of the process is that it generates a continuous solution even if continuity and smoothness are not expected of the solution; in this case it will generate an approximate analytical solution to either the forward or the inverse problem. On the other hand, the method does not need transformation or regularization and is simple to apply. The solution is also good at damping perturbations in the measured data driving the inverse problem. In this paper we show that the method is quite robust for linear and nonlinear inverse boundary value problems. We compare the results with a solution to a nonlinear inverse boundary value problem obtained using a traditional approach. The application involves a mixture of symbolic and numeric computations and uses a standard unconstrained numerical optimizer.
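
A minimal one-dimensional sketch of the general idea, residual minimization over a Bézier (Bernstein) representation with the boundary conditions built into the coefficients, is given below for a forward Poisson problem with a known exact solution; a sketch under assumed settings (basis order, collocation points), not the paper's formulation.

```python
# Minimal sketch: solve u'' = f on [0,1], u(0)=u(1)=0, f = -pi^2 sin(pi x),
# by minimizing the squared PDE residual over Bernstein coefficients.
import numpy as np
from math import comb, pi
from scipy.optimize import minimize

M = 8                                   # Bernstein order (assumption)
x = np.linspace(0.0, 1.0, 41)           # residual collocation points
f = -pi**2 * np.sin(pi * x)

def bernstein_matrix(x, m, deriv=0):
    B = np.stack([comb(m, i) * x**i * (1 - x)**(m - i) for i in range(m + 1)], axis=1)
    if deriv == 0:
        return B
    # only the second derivative is needed here; approximate it by finite differences
    h = 1e-5
    return (bernstein_matrix(x + h, m) - 2 * B + bernstein_matrix(x - h, m)) / h**2

B2 = bernstein_matrix(x, M, deriv=2)

def residual(c_free):
    c = np.concatenate(([0.0], c_free, [0.0]))   # end coefficients enforce u(0)=u(1)=0
    return np.sum((B2 @ c - f)**2)

c_opt = minimize(residual, np.zeros(M - 1)).x
u = bernstein_matrix(x, M) @ np.concatenate(([0.0], c_opt, [0.0]))
print(np.max(np.abs(u - np.sin(pi * x))))        # small error vs. the exact solution
```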

2016;():V01AT02A014. doi:10.1115/DETC2016-59890.

Forward and inverse modeling of the friction stir welding (FSW) process is an important endeavor, which can be used to optimize process parameters that play a significant role in achieving the specification requirements of manufactured parts. In this work a Reduced Order Modeling (ROM) approach is presented with the aim of reducing the cost of the forward-step evaluation in an optimization loop that implements the solution of the relevant inverse problem, thereby reducing the overall computational time required. The feasibility and efficiency of this approach are demonstrated through its implementation and a comparison between the full and the reduced order model implementations.

Topics: Friction, Welding, Modeling
2016;():V01AT02A015. doi:10.1115/DETC2016-59972.

Thermal barrier coatings are used to reduce base metal temperature and can be found on many engine components such as turbine blades and exhausts. The presented work is part of a broader effort focused on improving the thermal properties of candidate thermal barrier coating materials while maintaining their mechanical properties. Specifically, this effort investigates new and novel processing techniques to improve thermal properties while maintaining sufficient mechanical properties so that coatings do not fail due to the loads inherent to normal operation of the component. Processing methods have been investigated that create new microstructures by the inclusion of spherical, micron-sized pores to reflect radiation (i.e., heat) at high temperatures, providing additional thermal protection while maintaining strength. This paper computationally examines the size, distribution, and structure of pores that develop during bulk processing of a model material, yttria-stabilized zirconia (YSZ), to aid in the formulation of an optimized process. Heat transfer and stress-displacement analyses are performed to determine effective bulk material properties. Two-dimensional microstructures are the first step towards understanding the impact of pores, voids, and microcracks on thermal and mechanical characteristics. In this work, two-dimensional microstructures are computer generated to determine the influence of variations in pore number, pore size, and the relative percentages of pores and cracks. Comparisons are made to experimental measurements where appropriate.


36th Computers and Information in Engineering Conference: AMS: Simulation in Advanced Manufacturing

2016;():V01AT02A016. doi:10.1115/DETC2016-59341.

In recent years, industrial nations around the globe have invested heavily in new technologies, software, and services to advance digital design and engineering analysis using the digital thread, data analytics, and high performance computing. Many of these initiatives, such as Cloud-Based Design and Engineering Analysis (CBDEA), fall under the umbrella of what has become known as Industry 4.0 or the Industrial Internet. While an increasing number of companies are developing or already offering commercial cloud-based software packages and services for digital design and engineering analysis, little work has been reported on analyzing and documenting the related state of the art or on identifying potentially critical research gaps to be addressed in advancing this rapidly growing field. The objective of this paper is to present a state-of-the-art review of digital design and engineering analysis software and services that are currently available on the cloud. The main focus of this paper is on assessing the extent to which design and engineering analysis can already be performed based on the software and services accessed through the cloud. In addition, the key capabilities and benefits of these software packages and services are discussed. Based on the assessment of the core features of commercial CBDEA software and service packages, results suggest that almost all phases of a typical design and engineering analysis process can indeed already be conducted through cloud-based software tools and services.

2016;():V01AT02A017. doi:10.1115/DETC2016-59631.

Despite significant efforts examining the suitability of the proper form of the heat transfer partial differential equation (PDE) as a function of the time scale of interest (e.g., seconds, picoseconds, femtoseconds), very little work has been done to investigate the millisecond-microsecond regime. This paper examines the differences between the parabolic and one of the hyperbolic forms of the heat conduction PDE that govern thermal energy conservation on these intermediate timescales. Emphasis is given to the types of problems where relatively fast heat flux deposition is realized. Specifically, the classical parabolic form is contrasted against the lesser known Cattaneo-Vernotte hyperbolic form. A comparative study of the behavior of these forms is presented for various pulsed conditions applied at the center of a rectangular plate. Further emphasis is given to the variability of the solutions subject to constant or temperature-dependent thermal properties. Additionally, two materials with widely varying thermal properties, Al-6061 and refractory Nb1Zr, were investigated.

2016;():V01AT02A018. doi:10.1115/DETC2016-59695.

There is considerable geometric variability in raw castings and weldments before any machining of the surfaces that assemble with other components. Consequently, considerable time is often spent identifying successful set-up adjustments at the machining fixtures for such parts in a way that ensures every machined surface will be complete. The proposed Set-Up-Map© (S-Map) is a point-space subset of ℝ⁶ where each of the six orthogonal coordinates corresponds to one of the rigid-body displacements in three-dimensional space: three translations and three rotations. Any point within the S-Map corresponds to a small body displacement (SBD) of the part that satisfies the condition that each feature will lie within its associated tolerance zone after machining. S-Maps are derived from previous work on Tolerance Maps© (T-Maps), which represent feature deviations allowed by a given tolerance zone. Each raw casting or weldment is scanned, and the point-cloud data are fitted to individual features to determine how much each to-be-machined (TBM) feature deviates from nominal specifications. Each local T-Map is formed from a library and then shifted to be centered on its corresponding scanned feature on the casting; it becomes a local S-Map primitive. Each of these local S-Maps is then transformed to a single global reference frame. The intersection of these S-Map primitives in the global frame gives the allowable small body displacements that satisfy the positioning requirements for all TBM features. Since T-Maps are convex objects, a half-space intersection method is used to generate an S-Map. Any point within the S-Map represents a viable small body displacement specific to the global coordinate system established on the part. If the as-cast or as-welded features deviate beyond what is acceptable, the S-Map will be the empty set. Consequently, in addition to reducing the time for set-up in a fixture, S-Maps can serve as a valuable diagnostic for determining whether a part should be scrapped or reworked.
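
The half-space intersection at the core of the S-Map can be probed with standard tools; the sketch below checks feasibility of a toy two-dimensional displacement space by computing a Chebyshev center with a linear program. The constraint values are illustrative, not derived from a real casting, and the full S-Map lives in six dimensions rather than two.

```python
# Minimal sketch: is the intersection of tolerance-derived half-spaces a.x <= b non-empty,
# and what is its deepest interior point (Chebyshev center)?
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 1.0,  0.0], [-1.0,  0.0],    # |dx| <= 0.2 mm
              [ 0.0,  1.0], [ 0.0, -1.0],    # |dy| <= 0.1 mm
              [ 1.0,  1.0]])                 # a coupled feature constraint
b = np.array([0.2, 0.2, 0.1, 0.1, 0.25])

norms = np.linalg.norm(A, axis=1, keepdims=True)
# maximize the inscribed radius r subject to A x + r ||a_i|| <= b  (minimize -r)
res = linprog(c=[0.0, 0.0, -1.0],
              A_ub=np.hstack([A, norms]), b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
if res.success and res.x[2] > 0:
    print("feasible set-up displacement:", res.x[:2], "margin:", res.x[2])
else:
    print("S-Map is empty: rework or scrap the casting")
```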

2016;():V01AT02A019. doi:10.1115/DETC2016-60378.

MTConnect is becoming a widely accepted Internet communication standard for factory automation. It is being used for collecting and analyzing data from manufacturing machines and monitoring them over the Internet. This paper describes an implementation of the MTConnect RESTful protocol for monitoring networked open-source RepRap-based 3D printers in a Cyber Physical Manufacturing Cloud. Open-source RepRap-based 3D printers have gained popularity over the past few years because of their lower price, ease of use, and self-replicating features. The implementation includes establishing communication with the 3D printer, collecting and transmitting status data using an MTConnect Adapter running on a Raspberry Pi, and publishing the data in a standard XML format via an MTConnect Agent over the Internet. We also describe the detailed design of appropriate MTConnect standard data items and discuss how to use them to implement the probe, current, and sample operations of the MTConnect Internet communication protocol for factory automation. The performance of the implemented system is evaluated, and the results demonstrate the feasibility of real-time manufacturing process monitoring of networked open-source 3D printers with MTConnect over the Internet in a Cyber Physical Manufacturing Cloud.

Topics: Manufacturing
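
For orientation, an MTConnect adapter conventionally streams pipe-delimited SHDR lines over TCP to the Agent, which then serves the XML. The sketch below shows that adapter loop in miniature; the data-item names and the printer-status stub are hypothetical placeholders rather than the paper's schema.

```python
# Minimal sketch of an MTConnect adapter loop on the conventional adapter port 7878.
import socket
import time
from datetime import datetime, timezone

def read_printer_status():
    # placeholder: the paper's adapter would query the RepRap firmware (e.g. over serial)
    return {"avail": "AVAILABLE", "execution": "ACTIVE", "bed_temp": 60.2}

def shdr_line(values):
    # SHDR: timestamp|key1|value1|key2|value2|...
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return ts + "|" + "|".join(f"{k}|{v}" for k, v in values.items()) + "\n"

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 7878))
server.listen(1)
conn, _ = server.accept()              # the MTConnect Agent connects to the adapter
while True:
    conn.sendall(shdr_line(read_printer_status()).encode("ascii"))
    time.sleep(1.0)                    # sample period
```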
2016;():V01AT02A020. doi:10.1115/DETC2016-60506.

Additive manufacturing (AM) is a new and disruptive technology that comes with a set of unique challenges. One of them is the lack of understanding of the complex relationships between the numerous physical phenomena occurring in these processes. Metamodels can be used to provide a simplified mathematical framework for capturing the behavior of such complex systems. At the same time, they offer a reusable and composable paradigm to study, analyze, diagnose, forecast, and design AM parts and process plans. Training a metamodel requires a large number of experiments, even more so in AM due to the various process parameters involved. To address this challenge, this work analyzes and prescribes metamodeling techniques to select optimal sample points, construct and update metamodels, and test them for specific and isolated physical phenomena. A simplified case study of two different laser welding process experiments is presented to illustrate the potential use of these concepts. We conclude with a discussion on potential future directions, such as data and model integration while also accounting for sources of uncertainty.
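
A minimal sketch of the sample-construct-test loop described above is given below, using Latin hypercube sampling and a Gaussian process surrogate; the "process model" is a stand-in analytic function, not an AM or laser-welding simulation, and the variable ranges are arbitrary.

```python
# Minimal sketch: choose sample points, evaluate the (expensive) model, fit and test a metamodel.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def process_model(x):               # stand-in for e.g. a melt-pool depth prediction
    power, speed = x[:, 0], x[:, 1]
    return 0.8 * power / np.sqrt(speed) + 0.05 * power * np.exp(-speed)

sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = qmc.scale(sampler.random(30), [150, 0.5], [400, 2.0])   # power [W], speed [m/s]
y_train = process_model(X_train)

gp = GaussianProcessRegressor(ConstantKernel() * RBF([50.0, 0.5]),
                              normalize_y=True).fit(X_train, y_train)

X_test = qmc.scale(sampler.random(200), [150, 0.5], [400, 2.0])
y_pred, y_std = gp.predict(X_test, return_std=True)
print("max abs error:", np.max(np.abs(y_pred - process_model(X_test))))
```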


36th Computers and Information in Engineering Conference: AMS: Uncertainty Quantification in Simulation and Model Verification and Validation

2016;():V01AT02A021. doi:10.1115/DETC2016-59260.

When component dependence is ignored, a system reliability model may have large model (epistemic) uncertainty with wide reliability bounds. This makes decision making difficult during the system design. Component dependence exists due to the shared environment and operating conditions. It is difficult for system designers to model component dependence because they may not have access to component design details if the components are designed and manufactured by outside suppliers. This research intends to reduce the system reliability model uncertainty by providing a new way for system designers to consider the component dependence implicitly and automatically without knowing component design details. The proposed method is applicable for a wide range of applications where the time-dependent system stochastic load is shared by components of the system. Simulation is used to obtain the extreme value of the system load for a given period of time, and optimization is employed to estimate the system reliability interval. As a result, the epistemic uncertainty in system reliability can be reduced.
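
The mechanism by which a shared time-dependent load induces component dependence can be seen in a small Monte Carlo sketch: simulating the common extreme load yields a single system estimate that falls inside the wide bounds obtained when dependence is ignored. All distributions below are illustrative assumptions, not from the paper.

```python
# Minimal sketch: shared extreme load vs. classical series-system bounds.
import numpy as np

rng = np.random.default_rng(0)
n_mc, n_time = 50_000, 100

# time-dependent stochastic load; every component sees the same history
load_history = rng.normal(100.0, 15.0, size=(n_mc, n_time))
extreme_load = load_history.max(axis=1)            # extreme value over the time period

capacity_1 = rng.normal(150.0, 10.0, n_mc)         # component resistances (independent)
capacity_2 = rng.normal(148.0, 12.0, n_mc)

fail_1 = extreme_load > capacity_1
fail_2 = extreme_load > capacity_2
p1, p2 = fail_1.mean(), fail_2.mean()

# classical series-system bounds when component dependence is unknown
print("bounds ignoring dependence:", max(p1, p2), min(1.0, p1 + p2))
# shared-load simulation resolves the dependence and gives a single estimate
print("series-system failure prob. with shared load:", (fail_1 | fail_2).mean())
```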

2016;():V01AT02A022. doi:10.1115/DETC2016-59431.

In molecular dynamics (MD) simulation, the two main sources of uncertainty are the interatomic potential functions and thermal fluctuation. The accuracy of the interatomic potential functions plays a vital role toward the reliability of MD simulation prediction. Reliable molecular dynamics (R-MD) is an interval-based MD simulation platform, where atomistic positions and velocities are represented as Kaucher (or generalized) intervals to capture the uncertainty associated with the inter-atomic potentials. The advantage of this uncertainty quantification (UQ) approach is that the uncertainty effect can be assessed on-the-fly with only one run of simulation, and thus the computational time for UQ is significantly reduced. In this paper, an extended interval statistical ensemble is introduced to quantify the uncertainty associated with the system control variables, such as temperature and pressure at each time-step. This method allows for quantifying and propagating the uncertainty in the system as MD simulation advances. An example of interval isothermal-isobaric (NPT) ensemble is implemented to demonstrate the feasibility of applying the intrusive UQ technique toward MD simulation.

2016;():V01AT02A023. doi:10.1115/DETC2016-59671.

In a Bayesian network, how a node of interest is affected by the observation of another node is of interest in both forward propagation and backward inference. The proposed global sensitivity analysis (GSA) for Bayesian network aims to calculate the Sobol’ sensitivity index of a node with respect to the node of interest. The desired GSA for Bayesian network confronts two challenges. First, the computation of the Sobol’ index requires a deterministic function while the Bayesian network is a stochastic model. Second, the computation of the Sobol’ index can be expensive, especially if the model inputs are correlated, which is common in a Bayesian network.

To solve the first challenge, this paper uses the auxiliary variable method to convert the path between two nodes in the Bayesian network to a deterministic function, thus making the Sobol’ index computation feasible in a Bayesian network. To solve the second challenge, this paper proposes an efficient algorithm to directly estimate the first-order Sobol’ index from Monte Carlo samples of the prior distribution of the Bayesian network, so that the proposed GSA for Bayesian network is computationally affordable. Before the updating, the proposed algorithm can predict the uncertainty reduction of the node of interest purely using the prior distribution samples, thus providing quantitative guidance for effective observation and updating.
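
One generic way to estimate a first-order Sobol' index directly from existing Monte Carlo samples, with no additional model runs, is the binning estimator sketched below; this is a standard estimator shown for illustration, not necessarily the paper's algorithm, and the toy model stands in for a Bayesian-network path.

```python
# Minimal sketch: first-order Sobol' index S_i ~= Var(E[Y|X_i]) / Var(Y) from given samples.
import numpy as np

def first_order_sobol(x, y, n_bins=50):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    bins = [b for b in range(n_bins) if np.any(idx == b)]
    cond_means = np.array([y[idx == b].mean() for b in bins])
    weights = np.array([np.mean(idx == b) for b in bins])
    overall = np.sum(weights * cond_means)
    return np.sum(weights * (cond_means - overall)**2) / y.var()

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=100_000), rng.normal(size=100_000)
y = x1 + 0.5 * x2 + 0.1 * rng.normal(size=100_000)        # toy "node of interest"
print(first_order_sobol(x1, y), first_order_sobol(x2, y))  # ~0.79 and ~0.20 for this model
```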

2016;():V01AT02A024. doi:10.1115/DETC2016-59948.

Current design strategies for multi-physics systems seek to exploit synergistic interactions among disciplines in the system. However, when dealing with a multidisciplinary system with multiple physics represented, the use of high-fidelity computational models is often prohibitive. In this situation, recourse is often made to lower fidelity models that have potentially significant uncertainty associated with them. We present here a novel approach to quantifying the discipline-level uncertainty in coupled multi-physics models, so that these individual models may later be used in isolation or coupled within other systems. Our approach is based on a Gibbs sampling strategy and the identification of a necessary detailed balance condition that constrains the possible characteristics of individual model discrepancy distributions. We demonstrate our methodology on both a linear and a nonlinear example problem.

Topics: Physics
2016;():V01AT02A025. doi:10.1115/DETC2016-59995.

The proposed study develops a framework that accurately captures and models input and output variables for multidisciplinary systems in order to mitigate the computational cost when uncertainties are involved. Under this framework, the dimension of the random input variables is reduced depending on the degree of correlation calculated by an entropy-based correlation coefficient (e). According to the obtained value of e, the dimension is truncated by one of two methods. First, feature extraction methods, namely Principal Component Analysis and the Auto-Encoder algorithm, are utilized when the input variables are highly correlated. In contrast, the Independent Features Test is implemented as the feature selection method if the correlation is too low, to select a critical subset of model features. An Artificial Neural Network, including a Probabilistic Neural Network, is integrated into the framework to correctly capture the complex response behavior of the multidisciplinary system with low computational cost. The efficacy of the proposed method is demonstrated with electro-mechanical engineering examples, including a solder joint and a stretchable patch antenna.

2016;():V01AT02A026. doi:10.1115/DETC2016-60071.

Reliability-based design optimization (RBDO) has been widely used to design engineering products with minimum cost function while meeting defined reliability constraints. Although uncertainties, such as aleatory uncertainty and epistemic uncertainty, have been well considered in RBDO, they are mainly considered for model input parameters. Model uncertainty, i.e., the uncertainty of model bias which indicates the inherent model inadequacy for representing the real physical system, is typically overlooked in RBDO. This paper addresses model uncertainty characterization in a defined product design space and further integrates the model uncertainty into RBDO. In particular, a copula-based bias correction approach is proposed and results are demonstrated by two vehicle design case studies.

2016;():V01AT02A027. doi:10.1115/DETC2016-60195.

Computational models for numerically simulating physical systems are increasingly being used to support decision-making processes in engineering. Processes such as design decisions, policy level analyses, and experimental design settings are often guided by information gained from computational modeling capabilities. To ensure effective application of results obtained through numerical simulation of computational models, uncertainty in model inputs must be propagated to uncertainty in model outputs. For expensive computational models, the many thousands of model evaluations required for traditional Monte Carlo based techniques for uncertainty propagation can be prohibitive. This paper presents a novel methodology for constructing surrogate representations of computational models via compressed sensing. Our approach exploits the approximate additivity inherent in many engineering computational modeling capabilities. We demonstrate our methodology on an analytical function and a cooled gas turbine blade application. The results of these applications reveal substantial computational savings over traditional Monte Carlo simulation with negligible loss of accuracy.

Topics: Uncertainty
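
A minimal sketch of the compressed-sensing idea, exploiting approximate additivity by expanding the response in per-input polynomial terms and recovering a sparse coefficient vector with an L1-regularized fit from relatively few evaluations, is shown below; the "expensive model" is a stand-in analytic function, not the turbine-blade application.

```python
# Minimal sketch: sparse additive surrogate recovered from fewer samples than basis terms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_inputs, degree, n_samples = 10, 3, 25            # 30 basis terms, only 25 model runs

def expensive_model(X):                 # nearly additive stand-in for the true model
    return 2 * X[:, 0] + X[:, 3]**2 - 0.5 * X[:, 7]**3 + 0.01 * X[:, 1] * X[:, 2]

def additive_basis(X):                  # columns: x_i, x_i^2, x_i^3 for every input
    return np.hstack([X**d for d in range(1, degree + 1)])

X_train = rng.uniform(-1, 1, size=(n_samples, n_inputs))
surrogate = Lasso(alpha=1e-3, max_iter=50_000).fit(additive_basis(X_train),
                                                   expensive_model(X_train))

X_test = rng.uniform(-1, 1, size=(1000, n_inputs))
err = surrogate.predict(additive_basis(X_test)) - expensive_model(X_test)
print("nonzero terms:", np.count_nonzero(surrogate.coef_),
      "RMS error:", float(np.sqrt(np.mean(err**2))))
```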
2016;():V01AT02A028. doi:10.1115/DETC2016-60222.

In this paper, different approaches to parameter calibration and model validation are compared to understand their accuracy and robustness, especially when only a small number of data points are available. Conventional one-point calibration, two-point calibration, sensitivity-based calibration, and discrepancy-based calibration methods are compared when fewer than three data points are available. An analytical example as well as a cantilever beam model are used to demonstrate the performance and accuracy of the different methods. The numerical examples indicate that the conventional calibration method, which does not account for the discrepancy function, may lead to biased parameter and prediction models. It can also be seen that accurate parameters can be identified only when the form of the discrepancy function is accurate.


36th Computers and Information in Engineering Conference: AMS/SEIKM/CAPPD: Design, Simulation and Optimization for Additive Manufacturing

2016;():V01AT02A029. doi:10.1115/DETC2016-59408.

This paper focuses on topology optimization of structures subject to a compressive load in a thermal environment. Such problems are important, for example, in aerospace, where structures are prone to thermally induced buckling.

Popular strategies for thermo-elastic topology optimization include Solid Isotropic Material with Penalization (SIMP) and Rational Approximation of Material Properties (RAMP). However, since both methods fundamentally rely on material parameterization, they are often challenged by: (1) pseudo buckling modes in low-density regions, and (2) ill-conditioned stiffness matrices.

To overcome these, we consider here an alternate level-set approach that relies on discrete topological sensitivity. Buckling sensitivity analysis is carried out via direct and adjoint formulations. An augmented Lagrangian method is then used to solve a buckling-constrained compliance minimization problem. Finally, 3D numerical experiments illustrate the efficiency of the proposed method.

2016;():V01AT02A030. doi:10.1115/DETC2016-59595.

Conductive polymer nanocomposites (CPNCs) have gained considerable attention from researchers in recent times due to their diverse technological applications in different domains. This paper discusses the additive manufacturing of conductive polymer nanocomposites. Maghemite-multiwalled carbon nanotubes were synthesized and later dispersed in an acrylate resin, followed by curing with a UV DLP 3D printer in the presence of an external magnetic field. The maghemite-multiwalled carbon nanotubes showed superior magnetic properties compared to plain multiwalled carbon nanotubes, leading to improvements in the preferential alignment of the filler material in the polymer matrix. Initial experimental results show preferential alignment of the maghemite-multiwalled carbon nanotubes in the polymer matrix under the influence of an external uniform magnetic field at an intensity of 120 Gauss.

2016;():V01AT02A031. doi:10.1115/DETC2016-59627.

Design problems are complex and not well defined in the early stages of projects. To gain insight into these problems, designers envision a space of alternative solutions and explore various performance trade-offs, often manually. To assist designers with rapidly generating and exploring a design space, researchers introduced the concept of design synthesis methods. These methods promote innovative thinking and provide solutions that can augment a designer's ability to solve problems. Recent advances in technology push the boundaries of design synthesis methods in various ways: a vast number of novel solutions can be generated in a timely manner using high-performance computing, complex geometries can be fabricated using additive manufacturing, and integrated sensors can provide feedback for the next design generation using the Internet of Things (IoT). Therefore, new synthesis methods should be able to provide designs that improve over time based on the feedback they receive from the use of the products. To this end, the objective of this study is to demonstrate a design synthesis approach that, based on high-level design requirements gathered from sensor data, generates numerous alternative solutions targeted for additive manufacturing. To demonstrate this method, we present a case study of design iteration on a car chassis. First, we installed various sensors on the chassis and measured the forces applied during various maneuvers. Second, we used these data to define a high-level engineering problem as a collection of design requirements and constraints. Third, using an ensemble of topology and beam-based optimization techniques, we created a number of novel solutions. Finally, we selected one of the design solutions and, because of manufacturability constraints, 3D-printed a prototype for the next design generation at one-third scale. The results show that designs generated with the proposed method were up to 28% lighter than the existing design. This paper also presents various lessons learned to help engineers and designers better understand the challenges of applying new technologies in this research.

2016;():V01AT02A032. doi:10.1115/DETC2016-59634.

Physically accurate modeling of powder-based additive manufacturing (AM) processes can play an enabling role for both the certification and qualification as well as the functional tailoring of materials produced by these processes. In an effort to address these needs in a computationally efficient and physically realistic manner, this paper presents the initial efforts towards the development of a methodology for simulating polydisperse particle-based AM processes by the use of the Multiphysics Discrete Element Method (MDEM). We discuss the formulation of a DEM framework for addressing the unique multiphysics behavior of AM materials and processes. In particular, we focus on coupled thermo-mechanical effects that result in residual strains and deformation. The MDEM approach is demonstrated on several test problems involving laser sintering of metal powders. The paper concludes with a discussion on how this approach may be generalized to broader classes of AM systems, and details are given regarding future work that must be accomplished in order to further develop the present methodology.

2016;():V01AT02A033. doi:10.1115/DETC2016-59638.

One crucial component of the additive manufacturing software toolchain is a class of geometric algorithms known as “slicers.” The purpose of the slicer is to compute a parametric toolpath defined at the mesoscale and the associated g-code commands, which direct an additive manufacturing system to produce a physical realization of a three-dimensional input model. Existing slicing algorithms operate by applying geometric transformations to the input geometry in order to produce the toolpath. In this paper we introduce an implicit slicing algorithm that computes mesoscale toolpaths from the level sets of heuristics-based or physics-based fields defined over the input geometry. This enables computationally efficient slicing of arbitrarily complex geometries in a straightforward fashion. The calculation of component “infill” is explored as a process control parameter, due to its strong influence on the produced component's functional performance. Several examples of the application of the proposed implicit slicer are presented. It is demonstrated experimentally that the implicit slicer can produce a mesoscale structure leading to objects of superior functional performance, such as greatly increased stiffness and ultimate strength without an increase in mass. We conclude with remarks regarding the strengths of the implicit approach relative to existing explicit approaches, and discuss future work required to extend the methodology.
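
To make the level-set idea concrete, the sketch below extracts infill toolpaths as iso-contours of a scalar field defined over one layer; the sinusoidal field and the level spacing are arbitrary stand-ins for the heuristics-based or physics-based fields used in the paper.

```python
# Minimal sketch: infill toolpaths as level sets of a scalar field over a 40 mm x 20 mm layer.
import numpy as np
from skimage import measure

nx, ny = 400, 200
xs, ys = np.linspace(0, 40, nx), np.linspace(0, 20, ny)
X, Y = np.meshgrid(xs, ys)                           # field array has shape (ny, nx)
field = np.sin(0.8 * X) * np.sin(0.8 * Y)            # heuristic infill field (assumption)

dx, dy = xs[1] - xs[0], ys[1] - ys[0]
toolpaths = []
for level in np.linspace(-0.8, 0.8, 9):              # one family of iso-contours per level
    for c in measure.find_contours(field, level):    # c is an (N, 2) array of (row, col)
        toolpaths.append(np.column_stack([c[:, 1] * dx, c[:, 0] * dy]))  # machine x, y [mm]
print(len(toolpaths), "toolpath polylines for this layer")
```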

2016;():V01AT02A034. doi:10.1115/DETC2016-59644.

In order to predict the effects of energy and material deposition via laser and powder-jet based additive manufacturing methods, it is necessary to model a number of key process phenomena. In addition to solving the classical transient heat equation subject to a moving heat source, it is also critical that local, transient changes in domain geometry and properties be addressed in order to approach the as-built geometry and its associated functional behavior. Furthermore, the melting/solidification behavior of the deposited material may also need to be addressed due to its implications for local temperature-time histories. Finally, incorporating process parameters into a comprehensive simulation is also essential in providing accurate, high-fidelity predictions. This work presents efforts at incorporating all of the above-mentioned phenomena via a finite element-based simulation framework to lay the groundwork for full-scale, fully coupled simulations of entire parts. A comparison of predictions including and omitting phase transformation effects along with mass conservation is also presented in the context of assessing the accuracy gained versus the requisite computational expense.

2016;():V01AT02A035. doi:10.1115/DETC2016-59723.

Developing cohesive finite element simulation models of the pull-up process in bottom-up stereolithography (SLA) systems can significantly increase the reliability and throughput of the bottom-up SLA process. Pull-up process modeling investigates the relation between the motion profile and crack initiation and propagation during the separation process. However, finite element (FE) simulation of the pull-up process is computationally very expensive and time-consuming. This paper outlines a method to quickly predict the separation stress distribution based on 2D shape grid mapping and a neural network. Sixteen cohesive FE models with various cross-section shapes form our database. A specific 2D shape grid mapping is utilized to describe each shape by generating a sorted binary vector. A backpropagation (BP) neural network is then trained using the binary vectors, material properties, and FE-simulated pull-up separation stress distributions. Given material properties, the trained model can then be used to predict the pull-up separation stress distribution of a new shape. The results demonstrate that the proposed data-driven method can drastically reduce computing costs. The comparison between the values predicted by the data-driven approach and the simulated FE models verifies the validity of the proposed method.

2016;():V01AT02A036. doi:10.1115/DETC2016-59730.

This paper presents a homogenization approach for estimating the mechanical properties of periodic lattice structures; it is based on semi-rigid joint frame elements, and it takes into account joint stiffening effects. Geometrical degradation that arises from the additive manufacturing process is considered and integrated into a homogenization procedure that uses structural element parameters derived from a frame element with semi-rigid joints. The effective values of the structural parameters are calculated from a novel as-fabricated model describing an additively fabricated strut and are incorporated into proposed semi-rigid frame elements. The semi-rigid joint frame element is integrated into a discrete homogenization process. This paper reports the results of parametric studies that compare the effective structural parameters and the estimated mechanical properties to study the impacts of joints and geometric degradation due to additive manufacturing. The results of the proposed approach are compared with previous homogenization approaches, and show that the proposed method provides more accurate estimates.

2016;():V01AT02A037. doi:10.1115/DETC2016-59738.

Parts with complex geometry can be produced by additive manufacturing (AM) processes without a significant increase in fabrication time and cost. One application of AM technologies is the fabrication of customized lattice-skin structures, which can enhance the functional performance of products with less material and less weight. In this paper, a brief comparison between different types of lattice structures and their related design methods is made. The result shows that conformal lattice structures may perform better than other types of lattice for some design cases due to their unique configuration. However, most existing design methods for conformal lattices have limitations in dealing with complex external geometry. To solve this issue and fully utilize conformal lattice structures, a general design method for conformal lattice-skin structures is proposed. This design method consists of two major design stages. In the first design stage, conformal surfaces are selected based on proposed general design guidelines. Two different lattice frame generation methods are then provided to generate a conformal lattice that fits the selected conformal surfaces. A comparison between these two methods is made to help designers select a suitable method for their design cases. In the second design stage, the thickness of each lattice strut is calculated based on a defined mapping function. This mapping function considers two important factors from the result of topology optimization: the optimal relative density distribution and its related principal stress direction. Based on the calculated strut thicknesses, the geometry model of the heterogeneous conformal lattice can be generated. At the end of the design process, skin structures can be added onto the generated heterogeneous conformal lattice. To further illustrate and validate the proposed design method, a design case of a handle connector is provided. The result of this case study shows that the method provides an efficient tool for designers to generate conformal lattice-skin structures for complex external shapes.

2016;():V01AT02A038. doi:10.1115/DETC2016-59945.

Selective laser sintering (SLS) printers have been used for rapid prototyping, and prototypes of part assemblies have been reported to expand or shrink over time. This paper examines the hygroscopic swelling behavior of 3D printed parts from SLS printers. A total of 10 hexahedron samples were produced using nylon-12, a common prototyping material. Half of the samples were exposed to a high temperature to reduce their moisture content, and the rest were left at room temperature. In the meantime, 13 dimensions of each sample were measured periodically, along with local weather records including relative humidity, in order to track the hygroscopic swelling behavior of the samples. The results showed that deformation mostly occurred in the dimensions parallel to the sintering layers. Also, changes in these dimensions were found to have a high correlation with relative humidity regardless of temperature conditions. These results imply that changes in environmental conditions such as relative humidity lead to deformation of 3D printed parts after production. The high correlation between dimensional change and relative humidity also indicates that the layup orientation is a decisive factor in predicting the deformation of 3D printed parts. Thus, unexpected deformation of 3D printed parts can be avoided by optimizing the part design with the layup orientation in mind and by controlling the environmental conditions.

2016;():V01AT02A039. doi:10.1115/DETC2016-59969.

The advent of additive manufacturing (AM) processes has greatly broadened the available machining methods. Compared to conventional manufacturing methods, process planning for AM is totally different. It should avoid process-induced defects such as warpage of overhang features. Process planning for AM should also generate the necessary support structure, not only to support overhang features but also to minimize thermal warpage and residual stress. To this end, a general process planning approach for AM is put forward in this paper. Given a specific part, the first step is the determination of the build orientation. The choice of build orientation is one of the critical factors in AM, since the build time, the material consumption, the removal of support structure, the deformation within final parts, the mechanical performance, and the surface roughness are all related to the build orientation. This paper utilizes a genetic algorithm to optimize the build orientation by considering the minimum volume of the support structure and the minimum strain energy of the part under specific working conditions. First, a general and feasible process planning approach for AM is proposed. Then, detailed process planning for the optimization of the build orientation is developed. The volume of the support structure and the strain energy are first considered independently, and the corresponding optimal build orientations are obtained through the genetic algorithm. A single weighted-aggregate objective function is then constructed to optimize the volume of the support structure and the strain energy simultaneously. Finally, a bracket is used to verify the feasibility of the proposed method.
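
A minimal sketch of the weighted-aggregate formulation is shown below: a small genetic algorithm searches the two build-orientation angles that minimize w1·(support volume) + w2·(strain energy). Both objective terms here are smooth hypothetical stand-ins; in the paper they come from support generation and FEA.

```python
# Minimal sketch of a genetic algorithm over build-orientation angles (rx, ry).
import numpy as np

rng = np.random.default_rng(0)

def support_volume(theta):            # stand-in: overhang-driven support estimate
    return 1.0 + 0.8 * np.sin(theta[0])**2 + 0.5 * np.cos(theta[1] / 2)**2

def strain_energy(theta):             # stand-in: load-path quality of the printed part
    return 2.0 + np.cos(theta[0] - 0.7)**2 + 0.3 * np.sin(theta[1])**2

def fitness(theta, w=(0.5, 0.5)):     # single weighted-aggregate objective
    return w[0] * support_volume(theta) + w[1] * strain_energy(theta)

pop = rng.uniform(0, 2 * np.pi, size=(40, 2))            # population of orientation angles
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:20]]                # truncation selection
    children = 0.5 * (parents[rng.integers(0, 20, 20)]    # blend crossover
                      + parents[rng.integers(0, 20, 20)])
    children += rng.normal(0, 0.1, children.shape)        # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(p) for p in pop])]
print("best build orientation (rad):", best)
```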

2016;():V01AT02A040. doi:10.1115/DETC2016-60012.

With rapid developments and advances in additive manufacturing technology, lattice structures have gained considerable attention. Lattice structures can provide parts with a high strength-to-weight ratio. Most work done to reduce computational complexity is concerned with determining the optimal size of each strut within the lattice unit cells, but not with the size of the unit cell itself. The objective of this paper is to develop a method to determine the optimal unit-cell size for homogeneous periodic and conformal lattice structures based on the strain energy of a given structure. The method utilizes solid-body finite element analysis (FEA) of a solid counterpart with a shape similar to that of the desired lattice structure. The displacement vector of the lattice structure is then matched to the solid-body FEA displacement results to predict the structure's strain energy. This process significantly reduces the computational cost of determining the optimal unit-cell size, since it eliminates FEA on the actual lattice structure. Furthermore, the method can provide a measure of the relative performance of different types of unit cells. The developed examples clearly demonstrate how the optimal unit-cell size can be determined based on the strain energy. Moreover, the computational efficiency is clearly demonstrated through comparison between full FEA and the proposed method.

Topics: Shapes
2016;():V01AT02A041. doi:10.1115/DETC2016-60140.

Additive manufacturing, also known as 3D printing, enables production of complex customized shapes without specialized tooling and fixtures, so mass customization can be realized with broader adoption. The slicing procedure is one of the fundamental tasks in 3D printing, and the slicing resolution has to be very high for fine fabrication, especially in the recently developed Continuous Liquid Interface Production (CLIP) process. The slicing procedure is therefore becoming the bottleneck in the pre-fabrication process and can take hours for one model. This becomes even more significant in mass customization, where hundreds or thousands of models have to be fabricated. We observe that customized products generally belong to the same homogeneous class of shapes with small variations. Our study finds that the slicing information of one model can be reused for other models in the same homogeneous group under a properly defined parameterization. Experimental results show that reusing slicing information yields up to a 50-fold speedup, and the slicing procedure's share of the pre-fabrication process drops from more than 90% to less than 50%.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A042. doi:10.1115/DETC2016-60181.

In this paper a digital material design framework is presented to compute multi-material distributions in a three-dimensional (3D) model based on given user requirements for additive manufacturing (AM) processes. Directly optimizing the digital material composition is challenging due to the extremely large design space. The presented material design framework consists of three stages. In the first stage, a continuous material property distribution in the geometric model is computed to achieve the desired user requirements. In the second stage, a material dithering method is developed to convert the continuous material property distribution into a 3D-printable digital material distribution. A tile-based material patterning method and a correspondingly constructed material library are presented to efficiently perform material dithering in the given 3D model. Finite element analysis (FEA) is used to evaluate the performance of the computed digital material distributions. To mimic the layer-based AM process, cubic meshes are chosen to define the geometric shape in the digital material design stage, and their resolution is set based on the capability of the selected AM process. In the third stage, slicing data are generated from the cubic mesh model and can be used in 3D printing processes. Three test cases are presented to demonstrate the capability of the digital material design framework. Both FEA-based simulation and physical experiments are performed, and their results are compared to verify the tile-based material pattern library and the related material dithering method.
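
To illustrate the dithering step in isolation, the sketch below converts a continuous two-material volume-fraction field on one voxel layer into a discrete assignment using Floyd-Steinberg error diffusion. This is a simplified stand-in for the paper's tile-based patterning and material library, which are not reproduced here.

```python
import numpy as np

def dither_layer(fraction, levels=(0.0, 1.0)):
    """Floyd-Steinberg error diffusion on one voxel layer.
    fraction: 2D array of target material-A volume fraction in [0, 1].
    levels: printable material states (here, pure material B or pure material A)."""
    f = fraction.astype(float).copy()
    out = np.zeros_like(f)
    rows, cols = f.shape
    for y in range(rows):
        for x in range(cols):
            new = min(levels, key=lambda v: abs(v - f[y, x]))  # nearest printable state
            out[y, x] = new
            err = f[y, x] - new                                # quantization error
            # diffuse the error to unvisited neighbors (standard FS weights)
            if x + 1 < cols:                  f[y, x + 1]     += err * 7 / 16
            if y + 1 < rows and x > 0:        f[y + 1, x - 1] += err * 3 / 16
            if y + 1 < rows:                  f[y + 1, x]     += err * 5 / 16
            if y + 1 < rows and x + 1 < cols: f[y + 1, x + 1] += err * 1 / 16
    return out

# Usage: a smooth gradient of material fraction becomes a printable binary pattern.
layer = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
print(dither_layer(layer).astype(int))
```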

Topics: Design
Commentary by Dr. Valentin Fuster
2016;():V01AT02A043. doi:10.1115/DETC2016-60233.

Additive manufacturing (AM) is a promising technology that is expected to revolutionize industry by allowing the production of almost any shape directly from a 3D model. In metal-based AM, numerous process parameters are highly interconnected, and their interconnections are not yet understood. Understanding this interconnectivity is the first step in building process control models that help make the process more repeatable and reliable. Metamodels can be used to conceptualize models of complex AM processes and capture diverse parameters to provide a graphical view using common terminology and modeling protocols. In this paper we consider different process models (laser and thermal) for metal-based AM and develop an AM Process Ontology from first principles. We discuss and demonstrate its implementation in Protégé.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A044. doi:10.1115/DETC2016-60245.

The hybrid stereolithography (SLA) process combines the laser-scanning-based SLA system and the mask-projection-based SLA system. It adopts a laser as the energy source for scanning the border of a 2D pattern, whereas a mask image is used to solidify the interior area. By integrating the merits of the two subsystems, the hybrid SLA process can achieve relatively high surface quality without sacrificing productivity. For the hybrid system, closed polygon contours are required to direct the laser scanning, and a binary image is also needed for the mask projection. We propose a novel image-based slicing method. This method converts the 3D model directly into a series of binary images, each corresponding to the cross-section of the model at a specific height. Starting from the resulting binary image, we use image-processing operations to gradually shrink the image. The contours of the shrunk image are traced and then restored as polygons to direct the laser spot movement. The final shrunk image serves as the input for mask projection. Experimental results for several test cases demonstrate that the proposed method is substantially more time-efficient than traditional approaches.
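
The shrink-and-trace step can be prototyped with standard image-processing primitives, as in the sketch below, which uses binary erosion to shrink each cross-section and contour extraction to recover the laser scan polygons. The spot radius, the number of passes, and the choice of scipy/scikit-image routines are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def shrink_and_trace(layer, spot_radius_px=2, passes=3):
    """layer: binary 2D array, the cross-section image at one height.
    Returns (polygons, interior): polygons direct the laser border scan,
    interior is the final shrunk mask used for mask projection."""
    polygons = []
    current = layer.astype(bool)
    structure = ndimage.generate_binary_structure(2, 1)
    for _ in range(passes):
        # trace the boundary of the current region as laser scan paths
        polygons += measure.find_contours(current.astype(float), 0.5)
        # shrink the region by roughly one laser-spot radius
        current = ndimage.binary_erosion(current, structure,
                                         iterations=spot_radius_px)
        if not current.any():
            break
    return polygons, current

# Usage: one circular cross-section.
yy, xx = np.mgrid[:64, :64]
layer = (xx - 32) ** 2 + (yy - 32) ** 2 < 25 ** 2
paths, interior = shrink_and_trace(layer)
print(len(paths), int(interior.sum()))
```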

Commentary by Dr. Valentin Fuster
2016;():V01AT02A045. doi:10.1115/DETC2016-60320.

Functionally graded materials (FGM) have attracted considerable research attention in the wake of the recent prominence of additive manufacturing (AM) technology. The continuously varying spatial composition profile of two or more materials allows an FGM object to simultaneously possess the desirable properties of multiple different materials. Additionally, emerging technologies in the AM domain enable one to make complex shapes with customized multifunctional material properties in an additive fashion, where an object is created by laying down successive layers of material. In this paper, we provide an overview of research at the intersection of AM techniques and FGM objects. We specifically discuss FGM modeling representation schemes and outline a system for classifying existing FGM representation methods. We also highlight key aspects such as the part orientation, slicing, and path-planning processes that are essential for fabricating a quality FGM object through the use of multi-material AM techniques.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A046. doi:10.1115/DETC2016-60473.

The application of additive manufacturing technologies in industry is growing fast. This leads to an increasing need for reliable modeling techniques in the field of additive manufacturing. A methodology is proposed to systematically assess the influence of process parameters on the final characteristics of additively manufactured parts. The current study presents a theoretical framework dedicated to modeling additive manufacturing technology. More specifically, the framework is used here to plan and optimize the experimental process so as to minimize the number of experiments required to populate the model. The framework is based on the Dimensional Analysis Conceptual Modelling (DACM) framework, an approach that supports the production of models by constructing networks representing a system's architecture and behavior, in a manner that shares similarities with neural networks. Based on the proposed approach, it is possible to detect where supplementary experimental data have to be collected to complete the model generated by DACM. The modeling of the Direct Material Deposition process is conducted as an illustrative case study. The scope of the approach is broad and is supported by validated scientific methods combined to form the core of the DACM method. The DACM framework extracts information step by step from a description of the system architecture to semi-automatically create a model that can be simulated and used for multiple types of analyses, associated for example with innovation and design improvement. The current paper focuses on the use of the DACM framework, recently developed in a research project, in the field of additive manufacturing.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A047. doi:10.1115/DETC2016-60507.

Advancements in the capabilities of additive manufacturing (AM) have increased its use as an appropriate manufacturing process, particularly when the number of parts in an assembly can be significantly reduced, production volumes are low, or the geometric complexity is difficult, if not impossible, to obtain through conventional subtractive processes. However, there are many situations in which it is best not to design a given part around AM technology. The choice of conventional versus AM manufacturing must be made as early as possible in the design process, as this choice can substantially affect how the product is designed. Making the wrong decision leads to wasted design time, increased time to market, a functionally inferior design, and/or a costlier product. To address this critical manufacturing decision, we introduce a usable template and a decision-making method for manufacturing process selection that is integrated early into the design process (DS-SAM). This work can serve as the logical foundation for a potential holistic and more mathematically rigorous formulation of a decision-making method that could infer design evaluations based on designer inputs. This approach improves early design efficiency and effectiveness by methodically focusing on the key design-process elements to optimally compare alternatives earlier in a design process. The benefits and potential cost savings of using the DS-SAM approach are demonstrated by a pair of case studies, and the results are discussed.
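
Purely as an illustration of how an early process-selection comparison can be structured, consider the toy weighted-scoring sketch below. The criteria, weights, and scores are hypothetical and this is not the DS-SAM method itself, which is not described in enough detail in the abstract to reproduce.

```python
# Hypothetical criteria weights and 1-5 scores for the two process families.
criteria = {"part_count_reduction": 0.30, "production_volume": 0.30,
            "geometric_complexity": 0.25, "unit_cost": 0.15}
scores = {
    "additive":    {"part_count_reduction": 5, "production_volume": 2,
                    "geometric_complexity": 5, "unit_cost": 2},
    "subtractive": {"part_count_reduction": 2, "production_volume": 5,
                    "geometric_complexity": 2, "unit_cost": 4},
}

def weighted_score(process):
    """Aggregate a process alternative's scores with the criterion weights."""
    return sum(w * scores[process][c] for c, w in criteria.items())

best = max(scores, key=weighted_score)
print({p: round(weighted_score(p), 2) for p in scores}, "->", best)
```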

Commentary by Dr. Valentin Fuster

36th Computers and Information in Engineering Conference: CAPPD: Emotional Engineering

2016;():V01AT02A048. doi:10.1115/DETC2016-59257.

This paper points out that we are now moving into the age of self-actualization, which Maslow proposed as the highest of human needs. Therefore, we have to consider how we can engage our customers in our engineering. Just as web technology moved from Web 1.0 to Web 2.0, we have to consider moving from Engineering 1.0 to Engineering 2.0. Although User Experience opened our eyes to the importance of the value of processes, we should not look at our customers just as product users. They are not passive users; they are very active and would like to externalize their intrinsic motivations. That is why they are called customers. If we develop a new industry framework that allows the interchange of modules/parts across products and industries, we can customize their experience and they can feel the joy of self-determination, self-growth, achievement and fulfillment, just as we do when we play with Lego.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A049. doi:10.1115/DETC2016-59500.

What is User Experience (UX) and how does it relate to Usability? The understanding of the mutual relationship between UX and product usability has a strong impact on product design methodologies and on the design outcomes’ success.

This paper investigates this relationship through a set of experiments on the design of interactive products, without claiming completeness of findings.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A050. doi:10.1115/DETC2016-60021.

This paper discusses human action modeling and its application to a control system that uses the Mahalanobis-Taguchi System (MTS). In this study, we define embodied knowledge as being included in tacit knowledge. We also define it as a set of skills based on experience and intuition, as seen in creating art or performing a sport, craft, or other skilled task. Embodied knowledge is difficult to express explicitly. Our goals are to analyze embodied knowledge acquisition for human action modeling and to apply it to a control system using MTS.

Analyses of embodied knowledge that use sensing devices and pattern-recognition techniques to recognize non-explicit knowledge are being developed, owing to recent improvements in technology. The recognition element of embodied knowledge acquisition can be treated as a pattern-recognition problem. In this paper, we confirm that MTS is an adaptive method for recognizing patterns in human action modeling.

We set up a control model that includes a controller using MTS, modeled after an internal model of the cerebellum. We apply the controller, based on the Recognition Taguchi (RT) method, to the control of an inverted pendulum. The results indicate that the controller is capable of detecting disturbances.
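
The core of MTS is the scaled Mahalanobis distance computed against a reference ("unit") space of normal patterns. A minimal sketch of that computation is given below; the feature dimensionality and data are synthetic, and the RT-method controller itself is not reproduced.

```python
import numpy as np

def mahalanobis_unit_space(reference):
    """Build the Mahalanobis 'unit space' from reference (normal) samples and
    return a function that scores new samples against it."""
    mean = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    k = reference.shape[1]                      # number of features
    def distance(x):
        d = x - mean
        return float(d @ cov_inv @ d) / k       # scaled Mahalanobis distance (MD)
    return distance

# Usage: fit on normal motion-feature patterns; samples with MD well above 1
# fall outside the unit space and can be flagged as abnormal (e.g., disturbance).
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 4))
md = mahalanobis_unit_space(normal)
print(md(normal[0]), md(np.array([4.0, 4.0, 4.0, 4.0])))
```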

Commentary by Dr. Valentin Fuster
2016;():V01AT02A051. doi:10.1115/DETC2016-60452.

Here, we investigate and discuss the effect of imitation accuracy on brain activity during skill improvement. To improve a skill, learners combine and accumulate information about the skill through practice. We therefore used near-infrared spectroscopy (NIRS) to investigate brain activity during the process of improvement. Evaluating the level of knowledge acquisition by monitoring brain activity can serve as an indicator of the learner’s degree of skill progression.

Our final goal is therefore to construct a new learning model based on brain-activity monitoring and to improve learning efficiency. Following a previous study, we conducted experiments on an assembly operation learned by imitation, modeled on work in the manufacturing industries. As a result, we showed the possibility of a shift in brain activity with improvement of the skill. In this article, we focused on task accuracy and investigated whether the brain-activity shift is caused by progress in task accuracy, by the act of practice, or by some other factor. The results show the possibility that the trend shift in the left and right dorsolateral prefrontal areas and the frontal pole was caused not by the simple improvement in task accuracy but by the act of practice, which helped subjects store the information.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A052. doi:10.1115/DETC2016-60525.

Product designers are increasingly addressing the design of product experience, in addition to more traditional product design, with the focus of design practice on the interaction between the product and its users. The traditional methods and tools used so far for the design of products are not suitable for the design of experience. Among the emerging methods, experience prototyping appears effective and well suited to the new requirements of experience design. The paper describes the emerging technologies enabling experience prototyping and provides some examples where this new methodology for experience design has been applied.

Topics: Design
Commentary by Dr. Valentin Fuster
2016;():V01AT02A053. doi:10.1115/DETC2016-60541.

Every year more than one million people die on the world’s roads. Human factors are the largest contributors to traffic crashes and fatalities, and recent research has identified drivers’ cognitive state as the major cause of human error in 80% of crash events. Thus, the development of countermeasures to manage drivers’ cognitive state is an important challenge to address. Driver-Assistance Systems have been developed and integrated into vehicles to acquire data about the environment and the driver, and to communicate information to the driver, usually via the senses of vision and hearing. Unfortunately, these senses are already subjected to high demands, and the visual and auditory stimuli can be underestimated or perceived as annoying. However, other sensory channels could be used to act on the drivers’ cognitive state. In particular, smell can affect various aspects of a person’s psychological state, such as attention level, and can induce activation states.

The research presented in this paper aims at investigating whether olfactory stimuli, instead of auditory ones, can be used to influence the cognitive aspect of the drivers. For this purpose, an experimental framework has been set up and experimental testing sessions have been performed. The experimental framework is a multisensory environment consisting of an active stereo-projector and a screen used for displaying a video that reproduces a very monotonous car trip, a seating-buck for simulating the car environment, a wearable Olfactory Display, in-ear earphones and the BioGraph Infiniti system for acquiring the subjects’ physiological data.

The analysis of the data collected in the testing sessions shows that, in comparison to the relaxation state, olfactory stimuli are effective in increasing subjects’ attention level more than the auditory ones.

Topics: Vehicles
Commentary by Dr. Valentin Fuster
2016;():V01AT02A054. doi:10.1115/DETC2016-60542.

Today’s world is facing numerous problems due to the uncontrolled waste of energy and of primary resources in general. To manage this, designers are asked on one side to improve the efficiency of products; on the other side, users must be trained toward a more sustainable lifestyle. Some researchers are exploring the idea of changing users’ behavior while they interact with products in order to make it more sustainable. This trend is known as “design for sustainable behavior” applied to energy/resource consumption issues. Our idea is to stimulate users to change their behavior by introducing a multisensory communication with the product. This communication is not meant as warning messages informing users about wrong habits/actions or the like; instead, it should consist of sensory stimuli able to naturally guide users toward the right actions. However, before designing these stimuli, it is fundamental to highlight the aspects and conditions that prevent users from behaving in a sustainable way when interacting with products. In this paper we discuss the aspects worth exploring in order to derive the specifications that drive both the design and prototyping phases, so that the effectiveness of the feedback can be faithfully tested with end users.

Commentary by Dr. Valentin Fuster

36th Computers and Information in Engineering Conference: CAPPD: Human-Centric Design

2016;():V01AT02A055. doi:10.1115/DETC2016-59561.

In this paper, we present a formal and efficient method for computing structural performance variation over a shape population. Each shape in the population is represented as discrete points. These shapes are aligned together and principal component analysis is conducted to obtain the shape variation, which is represented as a sum of variations in multiple principal modes. Finite element analysis is conducted on the mean shape. For each shape specified by the shape parameters, we then invoke a thin-plate-deformation-based scheme to automatically deform the mesh nodes. The performance of the shapes is approximated via Taylor series expansion of the FE solution of the mean shape. A numerical study illustrates the accuracy and efficiency of this method.
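
The shape-variation part of such a pipeline can be sketched as follows: aligned point sets are stacked, and an SVD yields the principal modes from which new shapes are synthesized as the mean plus a weighted sum of modes. The synthetic data and mode count are placeholders; the alignment, thin-plate mesh deformation, and Taylor-series performance approximation steps are not shown.

```python
import numpy as np

def shape_pca(shapes, n_modes=3):
    """shapes: (n_samples, n_points, dim) array of already-aligned discrete shapes."""
    n, p, d = shapes.shape
    X = shapes.reshape(n, p * d)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                    # principal shape-variation modes
    stdevs = S[:n_modes] / np.sqrt(n - 1)   # spread of the population along each mode
    return mean.reshape(p, d), modes.reshape(n_modes, p, d), stdevs

def synthesize(mean, modes, stdevs, b):
    """Reconstruct a shape from mode coefficients b (in units of standard deviation)."""
    return mean + np.tensordot(np.asarray(b) * stdevs, modes, axes=1)

# Usage with synthetic data: 30 aligned shapes of 100 points each.
rng = np.random.default_rng(0)
population = rng.normal(size=(30, 100, 3))
mean, modes, sd = shape_pca(population)
new_shape = synthesize(mean, modes, sd, [1.0, -0.5, 0.0])
print(new_shape.shape)
```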

Topics: Shapes
Commentary by Dr. Valentin Fuster
2016;():V01AT02A056. doi:10.1115/DETC2016-59740.

Mechanical assembly is the last stage of mechanical product manufacturing and usually involves many manual operations. Virtual assembly can be used to simulate a real product’s assembly process and to assess the feasibility of a product’s assembly scenario during the simulation. Virtual human operation simulation is an important part of virtual assembly simulation. In order to improve the fidelity and increase the accuracy of posture simulation for virtual human assembly operations, a multi-objective motion posture optimization scheme for virtual human operation is proposed in this paper. Since the real human body is a complex movement system, the virtual human is modeled as a simplified multi-rigid-body model to decrease complexity. According to ergonomics knowledge and requirements, three elements, namely joint angle, joint moment, and operational field of vision, are selected as the criteria to evaluate the virtual human’s motion. These elements are normalized and used to set the optimization objectives for assessing human-body assembly operations. Objective functions with variable constraints for the posture optimization problem are constructed in mathematical form. As there are many rigid-body joint variables, it is difficult to solve the optimization model directly. The optimization model is therefore decomposed according to the different joint chains and the operating characteristics of the human body. The multi-objective NSGA-II algorithm is introduced to solve the optimization model, which finally generates a complete and continuous motion solution for the virtual human assembly operation. A case study of an engine connecting-rod cap assembly is performed to demonstrate the effectiveness of the proposed optimization method.
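
The sketch below illustrates only the normalization of the three ergonomic criteria and the Pareto-dominance test that underlies NSGA-II selection; the criterion ranges and posture values are hypothetical, and the full evolutionary algorithm and joint-chain decomposition are not reproduced.

```python
import numpy as np

def normalize(values, lo, hi):
    """Map each ergonomic criterion onto [0, 1] using its assumed feasible range."""
    return (np.asarray(values, dtype=float) - lo) / (hi - lo)

def dominates(a, b):
    """Pareto dominance for minimization, as used in NSGA-II selection."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

# objectives per posture: [joint-angle deviation, joint moment, field-of-view penalty]
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([90.0, 40.0, 1.0])
posture_A = normalize([30.0, 12.0, 0.4], lo, hi)
posture_B = normalize([55.0, 10.0, 0.6], lo, hi)
# Neither posture dominates the other, which is exactly why a Pareto-based
# multi-objective method is needed rather than a single-criterion ranking.
print(dominates(posture_A, posture_B), dominates(posture_B, posture_A))
```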

Commentary by Dr. Valentin Fuster
2016;():V01AT02A057. doi:10.1115/DETC2016-60202.

CAD systems have powerful features for creative technical design, yet these features are exposed through highly restrictive user interfaces. We argue that CAD users would be more productive and creative if they had greater control over their interface configuration. We propose and specify a feature set for a reconfigurable CAD user interface system. We review our prototype implementation of the proposed system and several use cases where a reconfigurable user interface would be beneficial. We present insights from our experience with popular CAD systems, various reconfigurable text editors, and our prototype CAD system. This work focuses on enhancing the utility of mice and keyboards but can be extended to any input device. Planned user studies are presented.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A058. doi:10.1115/DETC2016-60346.

A smart choice of contact forces between robotic grasping devices and objects is important for achieving a balanced grasp. Too little applied force may cause an object to slip or be dropped, and too much applied force may damage delicate objects. Prior grasping-force optimization methods in the literature have mostly assumed grasping only at the fingertips and have rarely considered how the whole-hand grasps more common to anthropomorphic hands affect the optimization of grasping forces. Further, although numerical examples of grasping-force optimization methods are routinely provided, it is often difficult to compare the performance of separate methods when they are evaluated using different parameters, such as the type of grasping device, the object grasped, and the contact model, among other factors. This paper presents three optimization approaches (linear, nonlinear, and nonlinear with linear matrix inequality (LMI) friction constraints), which are compared for an anthropomorphic hand. Numerical examples are provided for three types of grasp commonly performed by the human hand (cylindrical grasp, tip grasp, and tripod grasp) using both soft-finger contact and point-contact-with-friction models. Contact points between the hand and the object are predetermined. Results are compared based on their accuracy, computational efficiency, and the various benefits and drawbacks unique to each method. Future work will extend the problem of grasping-force optimization to include consideration of variable forces and object manipulation.
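
As a minimal illustration of a linear formulation, the sketch below solves a planar two-finger grasp with linearized (pyramidal) friction cones as a linear program: the total normal force is minimized subject to force balance and |t_i| <= mu * n_i at each contact. The geometry, object weight, and friction coefficient are illustrative and far simpler than the paper's anthropomorphic-hand cases.

```python
import numpy as np
from scipy.optimize import linprog

mu, W = 0.5, 10.0  # friction coefficient and object weight (N), illustrative
# Unknowns x = [n1, t1, n2, t2]: normal and tangential force at each of two
# opposing side contacts; friction (tangential) forces carry the weight.
c = [1.0, 0.0, 1.0, 0.0]                        # minimize total normal force
A_eq = [[1.0, 0.0, -1.0, 0.0],                  # horizontal force balance
        [0.0, 1.0,  0.0, 1.0]]                  # vertical force balance
b_eq = [0.0, W]
A_ub = [[-mu,  1.0, 0.0,  0.0],                 #  t1 <= mu * n1
        [-mu, -1.0, 0.0,  0.0],                 # -t1 <= mu * n1
        [0.0,  0.0, -mu,  1.0],                 #  t2 <= mu * n2
        [0.0,  0.0, -mu, -1.0]]                 # -t2 <= mu * n2
bounds = [(0, None), (None, None), (0, None), (None, None)]
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(4), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
# Expected optimum: n1 = n2 = W / (2 * mu), t1 = t2 = W / 2.
print(res.x, res.fun)
```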

Topics: Optimization
Commentary by Dr. Valentin Fuster

36th Computers and Information in Engineering Conference: CAPPD/AMS: Human Modeling: Methods and Applications in Engineering

2016;():V01AT02A059. doi:10.1115/DETC2016-59070.

One common use of captured human body motion in automotive applications is creating swept volumes of the body surfaces based on the motion trajectories. Recent developments in depth sensors enable fast and natural motion capture without attaching markers to subjects’ bodies. The Microsoft Kinect is one of the most widely used depth sensors. It can track whole-body motion and output a skeleton model.

A new method is developed to create the swept volumes from the motion captured by Kinect using an open-source graphics system. The skeleton motion is recorded in a file format that flexibly retains the skeleton’s structure and is acceptable to various graphics systems. The motion is then bound to a surface manikin model in the graphics system, where the swept volumes are generated. This method is more flexible and portable than using a commercial digital manikin, and potentially provides more accurate results.

Topics: Sensors
Commentary by Dr. Valentin Fuster
2016;():V01AT02A060. doi:10.1115/DETC2016-59095.

Parkinson’s disease (PD) is difficult to detect before the onset of symptoms; further, PD symptoms share characteristics with the symptoms of other diseases, making diagnosis of PD a challenging task. Without proper diagnosis and treatment, PD symptoms, including tremor, bradykinesia, and cognitive problems, deteriorate quickly into patients’ late life. Among them, the most distinguishable manifestations of PD are rest and postural tremor. Tremor is defined as an involuntary shaking or quivering movement of the hands or feet. The Unified Parkinson’s Disease Rating Scale (UPDRS) and the Hoehn and Yahr (H&Y) scale are the most common rating scales that quantify the severity of PD. Due to the lack of consistency in these diagnostic tests, researchers are looking for quantification and detection devices that can provide more objective PD motor assessments. Additionally, since there is currently no cure for PD, temporary suppression of PD symptoms is an active research area for improving patients’ quality of life. In this survey, the current state of research on Parkinson’s disease hand tremor quantification, detection, and suppression is discussed, with a particular focus on electromechanical devices. The future direction of research on these devices is also considered.

Topics: Engines, Motors, Diseases
Commentary by Dr. Valentin Fuster
2016;():V01AT02A061. doi:10.1115/DETC2016-59107.

Cyclic human gait is simulated in this work by using a 2D musculoskeletal model with 12 degrees of freedom (DOF). Eight muscle groups are modeled on each leg. A predictive dynamics approach is used to predict the walking motion. In this process, the model predicts joint dynamics and muscle forces simultaneously using optimization schemes and task-based physical constraints. The results indicate that the model can realistically match human motion, ground reaction force (GRF), and muscle force data during the walking task. The proposed optimization algorithm is robust, and the optimal solution is obtained in seconds. This can be used in the human health domain, for example in leg prosthesis design.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A062. doi:10.1115/DETC2016-59108.

The objective of this study was to develop a discomfort function for a walking model that includes a high-DOF upper body. A multi-objective optimization (MOO) method was formulated by minimizing dynamic effort and the discomfort function simultaneously. The discomfort function is defined as the sum of the squares of the deviations of the joint angles from their neutral positions. The dynamic effort is the sum of the squared joint torques. To investigate the efficacy of the proposed MOO method, a backward-walking simulation was conducted. By minimizing both dynamic effort and the discomfort function, a 3D whole-body walking model with a high-DOF upper body was demonstrated successfully.
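
The two objectives named in the abstract translate directly into code, as sketched below; the weighted combination shown at the end is only one illustrative way to scalarize them and is not necessarily the scheme used in the paper.

```python
import numpy as np

def discomfort(theta, theta_neutral):
    """Sum of squared deviations of joint angles from their neutral positions."""
    return float(np.sum((np.asarray(theta) - np.asarray(theta_neutral)) ** 2))

def dynamic_effort(torques):
    """Sum of squared joint torques."""
    return float(np.sum(np.asarray(torques) ** 2))

def combined(theta, theta_neutral, torques, w=0.5):
    """Illustrative weighted scalarization of the two objectives (w is assumed)."""
    return w * dynamic_effort(torques) + (1.0 - w) * discomfort(theta, theta_neutral)

# Usage with placeholder joint data (radians and N*m).
print(combined([0.2, -0.1, 0.4], [0.0, 0.0, 0.3], [12.0, 5.0, 8.0]))
```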

Commentary by Dr. Valentin Fuster
2016;():V01AT02A063. doi:10.1115/DETC2016-59168.

A 3D equipment-interaction module for human motion simulation is developed in this paper. A predictive dynamics method is used to simulate human motion, and a helmet is modeled as the piece of equipment attached to the human body. We implement this method using the predictive dynamics task of walking. A mass-spring-damper system attached at the top of the head serves as the helmet model. The equations of motion for the helmet are derived in a recursive Lagrangian formulation within the same inertial reference frame as the human model’s. The total number of degrees of freedom for the human model is 55: 6 degrees of freedom for global translation and rotation, and 49 degrees of freedom for the body. The helmet has 7 degrees of freedom, 6 of which are dependent on the human model. The movement of the helmet resulting from the human motion is analyzed, and the reaction force between the human body and the equipment is calculated. Once the reaction force is obtained, it is applied to the human body as an external force in the predictive dynamics optimization process. Results include the motion of the equipment, the force acting on the body at the attachment point, the joint torque profiles, and the ground reaction force profiles at the foot contact points.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A064. doi:10.1115/DETC2016-59170.

Elderly and disabled individuals must be able to access indoor and outdoor environments in an easy and safe manner. Accessibility must be assessed not only in terms of physical friendliness for users, but also in terms of cognitive friendliness, such as the ease of wayfinding. To ensure ease of wayfinding, signage information available at every key decision point is essential, because it enables people to find their way in unfamiliar environments. The aim of the present study is to develop a virtual accessibility evaluation system that evaluates environment accessibility from the cognitive-friendliness aspect, such as the ease of wayfinding, by combining realistic human behavior simulation using a digital human model (DHM) with as-is environment models. To realize this system, we develop a vision-based wayfinding simulation algorithm for the DHM in textured three-dimensional (3D) as-is environment models. The as-is environment models are constructed with the structure-from-motion (SfM) technique. During the wayfinding simulation, the visibility and legibility of each sign are evaluated on the basis of the visual perception of the DHM and its visibility catchment area (VCA). The DHM walking trajectory is dynamically generated depending on the perceived sign. When a disorientation place is detected, where the DHM cannot find any sign indicating its destination, plans for rearranging the signs are proposed by the simulation user and then examined using the developed virtual eyesight simulator (VES). The VES enables the user to check the DHM’s eyesight virtually during the wayfinding simulation through a head-mounted display. To mimic visual impairments, visual impairment filters are introduced into the VES. In this paper, we demonstrate the process of detecting a disorientation place and planning and evaluating the rearranged signage.

Topics: Simulation
Commentary by Dr. Valentin Fuster
2016;():V01AT02A065. doi:10.1115/DETC2016-59675.

According to the Bureau of Labor Statistics, in 2014 nursing and residential care facilities had the highest incidence rate of total nonfatal occupational injury cases in the U.S. Manual patient-handling tasks result in high lumbar loads (Jager et al., 2013), and most work-related back disorders in nurses are related to patient transfers. The present pilot study seeks to determine whether there are significant differences in the motion of experienced nurses and novice nurses while performing the same patient-repositioning tasks. A motion capture experiment was conducted in a laboratory setting on 14 female nurses performing two patient-repositioning tasks (moving a patient toward the head of the bed; transferring a patient from bed to a wheelchair). Of the nurses selected, 7 were experienced nurses (more than 5 years of nursing experience) and 7 were novice nurses (between 0 and 2 years of nursing experience). The motion capture data were post-processed using Cortex and Visual3D software. Average and maximum joint angles for the spine, knees, elbows, and shoulders for each task were compared between the novice and experienced nurses using a Wilcoxon rank sum test to determine whether there were significant differences in motion for the same patient-repositioning tasks. Although significant differences were not found for average or maximum joint angles between the novice and experienced groups, there was a significant difference in variances between the two groups for some angles in the wheelchair task.
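
For reference, a comparison of this kind can be run in a few lines, as sketched below with synthetic joint-angle data (the study's actual measurements are not reproduced). The rank-sum test compares the groups' locations, and a Levene test is shown as one possible way to compare variances, since the abstract does not name the specific variance test used.

```python
import numpy as np
from scipy.stats import ranksums, levene

rng = np.random.default_rng(1)
# Hypothetical maximum trunk-flexion angles (degrees), 7 nurses per group.
novice = rng.normal(45.0, 12.0, size=7)
experienced = rng.normal(43.0, 5.0, size=7)

stat, p_location = ranksums(novice, experienced)  # Wilcoxon rank-sum on group location
_, p_variance = levene(novice, experienced)       # spread comparison (illustrative)
print(f"rank-sum p = {p_location:.3f}, variance-test p = {p_variance:.3f}")
```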

Commentary by Dr. Valentin Fuster
2016;():V01AT02A066. doi:10.1115/DETC2016-59891.

Human-like motion prediction and simulation is an important task with many applications in fields such as occupational biomechanics, ergonomics in industrial engineering, the study of biomechanical systems, the prevention of musculoskeletal disorders, computer-graphics animation of articulated figures, prosthesis and exoskeleton design, and the design and control of humanoid robots, among others.

In an effort to gain biomechanical insight into many human movements, extensive work has been conducted over the last decades on human-motion prediction for tasks such as walking, running, jumping, standing up from a chair, reaching, and lifting. This literature review is focused on the STS motion and the LLM. STS is defined as the process of rising from a chair to a standing position without losing stability; it is the most ubiquitous and torque-demanding daily activity and is closely related to other capabilities of the human body. LLM is defined as the activity of raising a load, generally a box, from a low to a higher position while stability is maintained; this task produces a high incidence of low-back pain and injuries in many industrial and domestic activities.

In order to predict STS and LLM, two methods have been identified: these are the OBMG method and the CBMG method.

Commentary by Dr. Valentin Fuster
2016;():V01AT02A067. doi:10.1115/DETC2016-60095.

The widespread use of 3D acquisition devices with high-performance processing tools has facilitated the rapid generation of digital twin models of large production plants and factories for optimizing work-cell layouts and improving human operator effectiveness, safety, and ergonomics. Although recent advances in digital simulation tools have enabled users to analyze the workspace using virtual human and environment models, these tools are still highly dependent on user input to configure the simulation environment, such as specifying how humans pick up and move different objects during manufacturing. As a step towards alleviating user involvement in such analyses, we introduce a data-driven approach for estimating natural grasp-point locations on objects that humans interact with in industrial applications. The proposed system takes a CAD model as input and outputs a list of candidate natural grasping-point locations. We start with the generation of a crowdsourced grasping database that consists of CAD models and corresponding grasping-point locations labeled as natural or not. Next, we employ a Bayesian network classifier to learn a mapping between object geometry and natural grasping locations using a set of geometrical features. Then, for a novel object, we create a list of candidate grasping positions and select a subset of these possible locations as natural grasping contacts using our machine-learning model. We evaluate the advantages and limitations of our method by investigating the ergonomics of the resulting grasp postures.
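
A stripped-down sketch of the classification step is shown below. It uses Gaussian naive Bayes, the simplest Bayesian-network structure, as a stand-in for the paper's classifier, with synthetic geometric features and placeholder labels; the crowdsourced database, the actual feature set, and the learned network structure are not reproduced.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Hypothetical geometric features per candidate point, e.g.
# [local curvature, distance to centroid, patch width, height above base].
X = rng.normal(size=(500, 4))
y = (X[:, 2] > 0).astype(int)  # placeholder "natural grasp" labels for the sketch

clf = GaussianNB().fit(X, y)   # naive Bayes as a simplified Bayesian-network classifier

# Score candidate points on a novel object and keep the most probable naturals.
candidates = rng.normal(size=(20, 4))
natural = candidates[clf.predict_proba(candidates)[:, 1] > 0.7]
print(len(natural), "candidate points kept as natural grasp locations")
```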

Commentary by Dr. Valentin Fuster
2016;():V01AT02A068. doi:10.1115/DETC2016-60131.

This paper presents a methodology and tools to improve the design of lower-limb prostheses through the measurement and analysis of pressure at the residual limb-socket interface. The steps of the methodology and the design tools are presented using a case study focused on a male transfemoral (above-knee) amputee. The experimental setup, based on the F-Socket Tekscan pressure system, is described, together with the results of some static loading tests. Pressure data are visualized as a colour pressure map over the 3D model of the residual limb, acquired using a low-cost optical scanner based on MS Kinect. This methodology is useful for evaluating a physical prototype; in order to also improve the conceptual design, Finite Element (FE) Analysis has been carried out and the results reached so far have been compared with the experimental tests. The pressure distributions are comparable, even if some discrepancies have been highlighted due to sensor placement and the implemented FE model. Future developments have been identified in order to improve the accuracy of the numerical simulations.

Topics: Pressure , Design , Prostheses
Commentary by Dr. Valentin Fuster
