
ASME Conference Presenter Attendance Policy and Archival Proceedings

2014;():V01AT00A001. doi:10.1115/DETC2014-NS1A.

This online compilation of papers from the ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2014) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.

34th Computers and Information in Engineering Conference: Advanced Modeling and Simulation (General)

2014;():V01AT02A001. doi:10.1115/DETC2014-34061.

In this paper, we provide a numerical framework to calculate the relative viscosity of a suspension of rigid particles. A high-resolution background grid is used to solve the flow around the particles. To represent an effectively infinite number of particles in the suspension, a particle is placed at the center of a cubic cell and periodic boundary conditions are imposed in two directions. The flow around the particle is solved using the second-order accurate curvilinear immersed boundary (CURVIB) method [1]. The particle is discretized with triangular elements and is treated as a sharp-interface immersed boundary by reconstructing the velocities on the fluid nodes adjacent to the interface using a quadratic interpolation method. The hydrodynamic torque on the particle is calculated to solve the particle's equation of motion and obtain its angular velocity. Finally, the relative viscosity of the suspension is calculated using two different methods: (1) the rate of energy dissipation and (2) the bulk stress-bulk strain method. The framework has been validated by simulating a suspension of spheres and comparing the numerical results with the corresponding analytical ones; very good agreement has been observed between the analytical and computed relative viscosities using both methods. The framework is then used to model a suspension of particles with arbitrarily complex shapes, which demonstrates the effect of particle shape on the effective viscosity.
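
For orientation, the two estimators can be written compactly. These are standard forms assumed here for illustration (the paper's exact definitions may differ): for a suspension of solids volume fraction φ sheared at bulk rate γ̇ in a cell of volume V,

```latex
% Illustrative standard definitions, not necessarily the authors' exact expressions:
\mu_r = \frac{\mu_{\mathrm{eff}}}{\mu_f}, \qquad
\mu_{\mathrm{eff}}^{(1)} = \frac{\Phi}{V\,\dot{\gamma}^{2}} \quad \text{(rate of viscous energy dissipation } \Phi\text{)}, \qquad
\mu_{\mathrm{eff}}^{(2)} = \frac{\langle \sigma_{12} \rangle}{\dot{\gamma}} \quad \text{(bulk shear stress over bulk shear rate)},
```

with Einstein's dilute limit, μ_r → 1 + 2.5φ for rigid spheres, available as the analytical check mentioned above.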

2014;():V01AT02A002. doi:10.1115/DETC2014-34141.

Wind energy production has increased steadily in the past decade. The majority of wind power is generated by horizontal axis wind turbines (HAWTs). We investigate modeling of the HAWT using the vortex panel method, which is an inviscid, steady, computationally inexpensive method. Pressure coefficient profiles calculated by the vortex panel method were compared to NREL Phase VI wind turbine experiments under two different flow conditions. We show that if the flow is not separated over the blade, the vortex panel method can capture the pressure profile on the blade. Furthermore, the panel method has been extended to handle unsteady inviscid flows in order to investigate the effect of blade oscillations on power generation, which has not previously been quantified. The unsteady behavior is modeled by accounting for the time rate of change of circulation. Unsteady effects due to heaving and pitching motion were quantified for different blade oscillation frequencies. It is estimated that, provided the flow does not separate, the mean thrust coefficient with heaving and pitching motion can in some cases exceed that without blade motion.
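
The unsteady extension can be summarized by the unsteady Bernoulli equation: the pressure coefficient acquires a term in the time rate of change of the velocity potential φ. This is a standard result, shown here for orientation rather than as the paper's exact formulation:

```latex
% Unsteady pressure coefficient (standard potential-flow form):
C_p(t) = 1 - \frac{\lvert \mathbf{V} \rvert^{2}}{V_{\infty}^{2}}
           - \frac{2}{V_{\infty}^{2}} \frac{\partial \phi}{\partial t},
```

where the ∂φ/∂t term carries the effect of the changing circulation produced by heaving and pitching.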

2014;():V01AT02A003. doi:10.1115/DETC2014-34500.

A structured background mesh with regular, uniformly shaped elements is easy to generate automatically. Analysis using such a mesh is possible with an independent CAD model representing the geometry by using the implicit boundary method. This avoids mesh generation difficulties and enables new types of elements that use B-spline approximations instead of the traditional Lagrange interpolations. However, the background mesh should ideally have higher resolution in areas where the solution has large gradients. In this paper, T-spline basis functions are used to locally refine a structured mesh. T-splines allow construction of elements with T-junctions while maintaining the continuity of the field variable across element boundaries, and hence ensure that compatibility conditions are satisfied. T-spline elements are constructed for linear and cubic bases. Essential boundary conditions are applied using the implicit boundary method, which allows boundary conditions to be imposed even when there are no nodes on the boundary. T-spline elements are shown to represent constant and linear solutions exactly. The mesh refinement technique is evaluated using a two-dimensional elasticity problem involving stress concentration.
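
For orientation, the sketch below evaluates a one-dimensional B-spline basis via the Cox-de Boor recursion; this is the building block that T-splines generalize by giving each basis function its own local knot vector. It is a minimal illustration, not the paper's element implementation.

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        left = (x - knots[i]) / d1 * bspline_basis(i, p - 1, knots, x)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        right = (knots[i + p + 1] - x) / d2 * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

# Cubic basis on a clamped knot vector: 9 knots, degree 3 -> 5 basis functions.
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
vals = [bspline_basis(i, 3, knots, 0.3) for i in range(5)]
print(vals, sum(vals))  # the values form a partition of unity
```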

Topics: Splines
2014;():V01AT02A004. doi:10.1115/DETC2014-34537.

The 3D mass evacuation simulation of an airplane accident is experimentally verified. Evacuee motion has been experimentally investigated by building a test field that emulates the interior of an actual regional airliner with a capacity of approximately 100 passengers. The experiment results indicate that the evacuation time tends to be affected by the number of passengers and the evacuee guidance at the emergency exit. The results also indicate that any evacuation delay in exiting by individual passengers only slightly affects the total evacuation time because of evacuee congestion in the aisles. Moreover, the importance of evacuation guidance notification was investigated based on the evacuation-order variance. Finally, the experimental results were compared to the corresponding simulation results. Simulations using appropriate evacuee walking speeds can provide valid evacuation times, which are the most important factor in designing evacuation drills. Consequently, these results should be applied to existing 3D simulations using precise KDH models for more accurate mass evacuation/rescue simulations.

2014;():V01AT02A005. doi:10.1115/DETC2014-34811.

Sustainability evaluation is an important activity for product designers when making "green" decisions and applying the rules of "Design for Sustainability". It gives designers a quantifiable measure of the sustainability of their designs in real time. Successful evaluation mechanisms require close and complete information integration between the product design methodology and the sustainability evaluation techniques. In this paper, we propose an integration framework that brings environmental impact assessment into the early product design stages for design sustainability analysis. An information model, based on current CAD systems, has been developed to link the essential components in the framework for building a successful green product design system. A case study that simulates a part design scenario is given to demonstrate the use of the proposed information model.

2014;():V01AT02A006. doi:10.1115/DETC2014-35097.

The primary objective of this paper is to design an aerodynamically efficient delta kite that can balance a given load when subjected to aerodynamic loads at a specific angle of attack (α) and altitude (h). Aerodynamic forces calculated from potential theory heavily underestimate the loads because of the formation of leading-edge vortices on the delta-shaped kite. Leading-edge vortices arise from flow separation at the sharp leading edge even at very small angles of attack; they remain attached to the leading edge while the Kutta condition is still satisfied at the trailing edge.

Vortex lift and the induced drag due to the leading-edge vortices are calculated using the Polhamus method. Aerodynamic loads (L and D) have been computed by determining the proportionality constants (Kp, Kv) using the standard Multhopp lifting surface theory (MLST). The variation of CL and ΔCD for delta kites with aspect ratios A = 1, 1.5 and 2 has been examined and validated by comparing the computed CL and ΔCD values to experimental measurements. A set of vertical and horizontal loads (WT, HT) that can be balanced by the delta kite at operational heights h = 140, 150 and 160 m and α = 8° has been successfully determined using this method. The method will be applied to the design and control of a delta kite that will be used to harvest wind energy at high altitudes.
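
A minimal sketch of the Polhamus leading-edge suction analogy as it is commonly stated; the Kp and Kv values below are hypothetical placeholders, whereas the paper obtains them from the Multhopp lifting surface theory for each aspect ratio.

```python
import numpy as np

# Polhamus suction analogy (standard form):
#   CL  = Kp*sin(a)*cos(a)**2 + Kv*cos(a)*sin(a)**2
#   dCD = CL*tan(a)   (drag due to lift with full loss of leading-edge suction)
Kp, Kv = 2.0, 3.0          # hypothetical proportionality constants
alpha = np.radians(8.0)    # operational angle of attack from the abstract

CL = Kp * np.sin(alpha) * np.cos(alpha) ** 2 + Kv * np.cos(alpha) * np.sin(alpha) ** 2
dCD = CL * np.tan(alpha)
print(f"CL = {CL:.4f}, dCD = {dCD:.4f}")
```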

Topics: Suction, Design
2014;():V01AT02A007. doi:10.1115/DETC2014-35100.

Modal analysis is widely used for linear dynamic analysis of structures. The finite element method is used to numerically compute the stiffness and mass matrices, and the corresponding eigenvalue problem is solved to determine the natural frequencies and mode shapes of vibration. The implicit boundary method was developed to use equations of the boundary to apply boundary conditions and loads, so that a background mesh can be used for analysis. A background mesh is easier to generate because the elements do not have to conform to the given geometry, and therefore uniform, regular-shaped elements can be used. In this paper, we show that this approach is suitable for modal analysis and modal superposition techniques as well. Furthermore, the implicit boundary method also allows higher-order elements that use B-spline approximations. Several test examples are studied for verification.
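
Once K and M are computed on the background mesh, the modal problem is a generalized symmetric eigenvalue problem. The sketch below shows this step for a toy 2-DOF system, assuming SciPy is available; the implicit boundary method changes how K and M are assembled, not this solve.

```python
import numpy as np
from scipy.linalg import eigh

# Generalized eigenproblem K x = w^2 M x for a 2-DOF spring-mass chain.
k1, k2, m1, m2 = 100.0, 50.0, 1.0, 0.5
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
M = np.diag([m1, m2])

w2, modes = eigh(K, M)           # eigenvalues are squared circular frequencies
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(freqs_hz)                  # natural frequencies, ascending
print(modes)                     # mass-orthonormal mode shapes (columns)
```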

2014;():V01AT02A008. doi:10.1115/DETC2014-35140.

Currently, a variety of approaches are used to match stress concentration factors with corrosion pit geometry. The majority of these approaches use standardized stress concentration factors, such as those for circles or ellipses, to estimate the maximum stress values along the pit front. These factors are based on regular geometric shapes. Pits that form in a microstructure, however, are influenced by the individual grains surrounding the pit and often do not have simple shapes, so standardized geometric factors do not capture their geometric complexity. Rather than a single parameter, such as the aspect ratio of an ellipse, multiple parameters may be required to define the extent and variation of localized curvature along a pit front within a microstructure. Maximum depth and curvature are just two candidate metrics. In addition, the authors looked to the medical field for potential metrics to adequately describe the convoluted nature of the pit front: several methods have been developed to mathematically define the serpentine twists in diseased retinal blood vessels. In this work, the authors present a methodology for determining characteristics, including tortuosity, of computationally predicted pit shapes embedded in microstructures. Ultimately, it is hoped that maximum curvature, pit tortuosity and other geometry-based metrics can be combined to predict the maximum rise in stress associated with a pit embedded in a microstructure.
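
As one concrete example of such a metric, the arc-chord definition of tortuosity used in the retinal-vessel literature divides the traversed length of the front by its end-to-end chord; the authors may use a different variant, so treat this as an illustrative sketch.

```python
import numpy as np

def tortuosity(points):
    """Arc length / chord length of a polyline; 1.0 for a straight front."""
    pts = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord

# A gently wavy pit front is only slightly tortuous:
x = np.linspace(0.0, 1.0, 200)
front = np.column_stack([x, 0.05 * np.sin(8 * np.pi * x)])
print(tortuosity(front))
```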

Topics: Shapes
2014;():V01AT02A009. doi:10.1115/DETC2014-35148.

A high-order continuous solution is obtained for partial differential equations on non-rectangular and non-continuous domains using Bézier functions. This is a mesh-free alternative to the finite element or finite difference methods that are normally used to solve such problems. The problem is handled without any transformation, and the setup is direct and simple: it involves minimizing the error in the residuals of the differential equations along with the error in the boundary conditions over the domain. The solution can be expressed in polynomial form, and the effort is the same for linear and nonlinear partial differential equations. The procedure is developed as a combination of symbolic and numeric calculation. The solution is obtained through the application of standard unconstrained optimization; a constrained approach is also developed for nonlinear partial differential equations. Examples include linear and nonlinear partial differential equations, and the solutions for the linear cases are compared to finite element solutions from COMSOL.
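
A minimal sketch of the idea, under our own simplifying assumptions (a single 1D Poisson-type equation, a degree-4 Bernstein/Bézier representation, penalized boundary conditions), using standard unconstrained optimization:

```python
import numpy as np
from math import comb
from scipy.optimize import minimize

def B(i, n, x):
    """Bernstein basis B_{i,n}(x); zero outside 0 <= i <= n."""
    if i < 0 or i > n:
        return np.zeros_like(x)
    return comb(n, i) * x**i * (1 - x)**(n - i)

def B2(i, n, x):
    """Second derivative of B_{i,n} via the degree-reduction identity."""
    return n * (n - 1) * (B(i - 2, n - 2, x) - 2 * B(i - 1, n - 2, x) + B(i, n - 2, x))

n = 4
xc = np.linspace(0.0, 1.0, 25)                 # collocation points

def objective(c):
    # Squared residual of u'' + 2 = 0 plus penalized boundary errors u(0) = u(1) = 0.
    upp = sum(c[i] * B2(i, n, xc) for i in range(n + 1))
    ub = sum(c[i] * B(i, n, np.array([0.0, 1.0])) for i in range(n + 1))
    return np.sum((upp + 2.0) ** 2) + 1e4 * np.sum(ub ** 2)

res = minimize(objective, np.zeros(n + 1), method="BFGS")
u = sum(res.x[i] * B(i, n, xc) for i in range(n + 1))
print(np.max(np.abs(u - xc * (1 - xc))))       # exact solution is x(1 - x)
```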

2014;():V01AT02A010. doi:10.1115/DETC2014-35230.

Laminate composites are widely used in automotive, aerospace, medical, and increasingly in consumer industries, due to their reduced weight, superior structural properties and cost-effectiveness. However, structural analysis of complex laminate structures remains challenging. 2D finite element methods based on plate and shell theories may be accurate and efficient, but they generally do not apply to the whole structure, and require identification and preprocessing (dimensional reduction) of the regions where the underlying assumptions hold. Differences in and limitations of theories for thin/thick plates and shells further complicate modeling and simulation of composites. Fully automated structural analysis using 3D elements with sufficiently high order basis functions is possible in principle, but is rarely practiced due to the significant increase in computational integration cost in the presence of a large number of laminate plies.

We propose to replace the actual layup of the laminate structure by a simplified material model, allowing for a substantial reduction of the computational cost of 3D FEA. The reduced model, under the usual assumptions made in lamination theory, has the same constitutive relationship as the corresponding 2D plate model of the original laminate, but requires only a small fraction of computational integration costs in 3D FEA. We describe implementation of 3D FEA using the reduced material model in a meshfree system using second order B-spline basis functions. Finally, we demonstrate its validity by showing agreement between computed and known results for standard problems.
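
The constitutive target of the reduction is fixed by classical lamination theory; a minimal sketch of the A, B, D integrals for a ply stack is given below, with illustrative stiffness numbers rather than the paper's data.

```python
import numpy as np

def abd_matrices(Qbars, z):
    """Classical lamination theory: A, B, D from ply stiffnesses Qbar_k and
    through-thickness ply boundaries z[0..n] (midplane at z = 0)."""
    A = np.zeros((3, 3)); Bm = np.zeros((3, 3)); D = np.zeros((3, 3))
    for k, Q in enumerate(Qbars):
        A  += Q * (z[k + 1] - z[k])
        Bm += Q * (z[k + 1] ** 2 - z[k] ** 2) / 2.0
        D  += Q * (z[k + 1] ** 3 - z[k] ** 3) / 3.0
    return A, Bm, D

# Hypothetical two-ply laminate with identical in-plane stiffness per ply:
Q = np.array([[155.7, 3.02, 0.0],
              [3.02, 12.16, 0.0],
              [0.0,   0.0,  4.4]]) * 1e9        # Pa, illustrative values
A, Bmat, D = abd_matrices([Q, Q], z=[-0.5e-3, 0.0, 0.5e-3])
print(np.allclose(Bmat, 0.0))                   # symmetric layup -> B vanishes
```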

2014;():V01AT02A011. doi:10.1115/DETC2014-35432.

Heat transfer augmentation techniques have gained great importance in engineering applications that must deal with thermal management issues. In this work, a numerical investigation was carried out to assess the heat transfer enhancement of a modified surface compared to a smooth surface. In the first case, spherical dimple arrays were applied to the bottom wall of a channel carrying a laminar airflow; the effects of a 21×7 staggered array and a 19×4 inline array were investigated. In the second case, heat exchange enhancement in a rectangular channel using longitudinal vortex generators (LVG) in laminar flow was considered. In both cases, a 3D steady viscous computational fluid dynamics package with an unstructured grid was used to compute the flow and temperature fields. The heat transfer characteristics were studied as a function of the Reynolds number based on the hydraulic diameter of the channel, and the heat transfer was quantified by computing the surface-averaged Nusselt number. The pressure drop and flow characteristics were also calculated. The Nusselt number was compared with that of a smooth channel without surface modification to assess the level of heat transfer enhancement.
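
The reported quantities follow from standard definitions; the sketch below assembles a surface-averaged Nusselt number from illustrative (made-up) simulation outputs.

```python
# Surface-averaged Nusselt number from channel-flow outputs (illustrative values).
q_w   = 250.0    # W/m^2, average wall heat flux
T_w   = 330.0    # K, area-averaged wall temperature
T_b   = 300.0    # K, bulk (mixed-mean) air temperature
k_air = 0.026    # W/(m K), thermal conductivity of air

a, b = 0.04, 0.01                    # channel cross-section, m
D_h = 4 * (a * b) / (2 * (a + b))    # hydraulic diameter = 4*Area/Perimeter

h  = q_w / (T_w - T_b)               # convective heat transfer coefficient
Nu = h * D_h / k_air
print(f"D_h = {D_h * 1000:.1f} mm, Nu = {Nu:.2f}")
```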

2014;():V01AT02A012. doi:10.1115/DETC2014-35476.

The right ventricle (RV) pumps de-oxygenated blood to the lungs for oxygen absorption. Characterizing the RV geometry, its motion, and the ventricular flow is critical in assessing the heart's health and provides important clinical diagnostic and prognostic information. However, RV flow has not been observed as closely as the flow in the left side of the heart. Current imaging techniques are limited in their ability to characterize the three-dimensional flow of blood through the heart, and no single experimental technique available today is capable of comprehensively quantifying the 3D flow pattern of blood in the heart. As a result, there is a need for computer simulations in order to understand this complex 3D flow pattern. In this paper, the sharp-interface immersed boundary method was used to carry out simulations of the flow in a simplified RV model. The reconstructed geometry of the RV was approximated to have a crescent-shaped cross-section. In contrast to previous work, in which the atrium was ignored, the atrium was added as an approximately spherical chamber attached above the RV. The RV motion was prescribed based on a model that produces physiologic flow waveforms for the RV. The simulations show a complex swirling flow pattern in the RV and the formation of a vortex ring during diastole.

2014;():V01AT02A013. doi:10.1115/DETC2014-35658.

In many European countries, the aging population and the consequent increase in the incidence of chronic disease are straining health care systems. One solution to avoid a collapse of these systems is to conduct patient care in home environments with informal caregivers. However, constantly caring for and monitoring patients may place heavy physical and emotional burdens on those informal caregivers. To cope with this problem, this research presents a homecare platform to partially relieve those burdens. Using a community-based co-design method, the requirements of the platform are generated first, addressing compatibility, portability, modularity, accessibility, usability, security, affordability and scalability. Based on those requirements, an architecture of the platform is constructed in which a pervasive computing homecare environment and a web-based service form the core of the platform. In the proposed pervasive computing homecare environment, activities and locations of the patient are recorded as events via a wireless sensor network. Those events are then sent to and stored in a web database. Possible critical situations are identified based on the analysis of those recorded events. If any critical situation is detected, the platform pushes an alarm to the mobile devices of the responsible caregivers for possible intervention. To verify the effectiveness and efficiency of the proposed platform, an experiment was conducted to test different technical functionalities and the usability of the platform. Limitations of the proposed platform and future research directions are discussed as well.

34th Computers and Information in Engineering Conference: AMS: Computational Multiphysics Applications

2014;():V01AT02A014. doi:10.1115/DETC2014-34086.

High-speed aerostatic spindles operating at speeds up to 200,000 r/min are complex products with a multi-physics nature resulting from coupled mechanical, thermal, fluidic and electromagnetic fields. A comprehensive analysis of the multi-physics interactions within a high-speed aerostatic spindle is much needed, and is essential for the design of spindles working at ever higher speeds and accuracy under increasingly stringent engineering conditions. This paper presents a multi-physics integrated modelling approach for the design and analysis of high-speed aerostatic spindles, including thermal, electromagnetic, mechanical and fluidic analysis models. The heat sources, heat transfer mechanisms and heat sinks of the spindle system are comprehensively investigated. Furthermore, the air film pressure distribution is studied, leading to optimal design and analysis of the loading capacity and stiffness of the aerostatic bearings. The multi-physics modelling is implemented using a CFD-FEA integrated approach and validated experimentally. It is shown that the multi-physics integrated modelling is able to simulate the performance characteristics of the spindle system accurately.

Topics: Physics, Simulation, Design
2014;():V01AT02A015. doi:10.1115/DETC2014-34385.

The focus of this paper is on thermo-elastic topology optimization where the structure is subject to both mechanical and thermal loads. Such problems are of significant importance, for example, in the aircraft industry where structures subject to aerodynamic forces and thermal-gradients must be optimized.

A popular strategy for solving such problems is Solid Isotropic Material with Penalization (SIMP) where pseudo-densities serve as optimization parameters. Yet another strategy is the Rational Approximation of Material Properties (RAMP) that overcomes some of the deficiencies of SIMP. Both methods fundamentally rely on parameterization of the material properties as a function of the pseudo-densities.
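
For reference, the two interpolation schemes can be stated in a few lines; the exponent p and the parameter q below are typical values, not those of any specific study.

```python
import numpy as np

def simp(rho, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP: power-law interpolation of Young's modulus in the pseudo-density."""
    return Emin + rho ** p * (E0 - Emin)

def ramp(rho, E0=1.0, Emin=1e-9, q=8.0):
    """RAMP: rational interpolation with a nonzero slope at rho = 0."""
    return Emin + rho / (1.0 + q * (1.0 - rho)) * (E0 - Emin)

rho = np.linspace(0.0, 1.0, 5)
print(simp(rho))
print(ramp(rho))
```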

Here we consider an alternate level-set approach that relies on the concept of topological sensitivity. The advantages of the proposed method over SIMP and RAMP are that: (1) ad hoc material parameterization is not required; (2) the stresses are well-defined at all points within the evolving topology; and (3) the underlying stiffness matrices are always well-conditioned. The proposed method is illustrated through numerical experiments.

2014;():V01AT02A016. doi:10.1115/DETC2014-35002.

In order to assess the feasibility and performance of a minimal multiphysics model for representing the spatiotemporal evolution of the biofouling process, we selected the coupled diffusive generalization of the Lotka-Volterra PDEs to govern the spatiotemporal evolution of the population densities of predator-prey colonies in a computational domain. The finite element solution of the system was implemented and the associated numerical solution was obtained. An analysis was performed that highlights certain choices of the control parameters of the model and their effect on the spatiotemporal behavior of the system. Potential extensions of the model are presented to incorporate a biofouling-inhibiting agent and study its effect. An evaluation of the features of the model concludes the paper.
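
A minimal sketch of such a coupled diffusive Lotka-Volterra system, discretized here with explicit finite differences on a 1D periodic domain purely for illustration (the paper uses a finite element formulation, and all parameter values below are placeholders):

```python
import numpy as np

# Diffusive Lotka-Volterra system:
#   u_t = Du*u_xx + u*(a - b*v)     (prey / substrate density)
#   v_t = Dv*v_xx + v*(-c + d*u)    (predator / colony density)
n, L = 200, 10.0
dx = L / n
Du, Dv = 0.1, 0.05
dt = 0.2 * dx ** 2 / max(Du, Dv)    # below the explicit diffusive stability bound
a, b, c, d = 1.0, 1.0, 1.0, 1.0

rng = np.random.default_rng(0)
u = 1.0 + 0.1 * rng.random(n)       # perturbation of the coexistence state
v = 1.0 + 0.1 * rng.random(n)

def lap(f):
    """Periodic 1D Laplacian."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx ** 2

for _ in range(5000):
    u, v = (u + dt * (Du * lap(u) + u * (a - b * v)),
            v + dt * (Dv * lap(v) + v * (-c + d * u)))
print(u.mean(), v.mean())
```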

Topics: Biofouling
2014;():V01AT02A017. doi:10.1115/DETC2014-35180.

In an effort to validate a recently developed multifield and multiscale rough contact theory, we apply it to a particular experiment. The experiment involves contact between two hollow cylinders with an annular disk in between them. The contact surfaces are rough, and the entire stack is exposed to a compressive mechanical load and a high electric current pulse. Solving the necessary multi-physics partial differential equations establishes the spatiotemporal distribution of the relevant fields and identifies the contact resistance as a function of mechanical pressure and current. In addition to providing typical results for all selected fields present during the experiment and the simulation, we compare the experimentally acquired resistance histories with the numerically derived ones to address validation aspects of the general multiphysics contact theory.

2014;():V01AT02A018. doi:10.1115/DETC2014-35401.

Biofouling is a process of major concern for naval vessels because it considerably affects their performance, maintenance and operational costs: it induces increased hydrodynamic drag, which leads to higher fuel consumption and in turn demands expensive cleaning procedures. A possible antibiofouling system can be designed by enhancing an existing impressed current cathodic protection system and taking advantage of the chlorine oxidants produced during its operation. In this work we present a design methodology for such a system, together with the associated multiphysics formulation framework based on a coupled model of chemical reactions, electric currents, species mass transport and electromigration. This framework predicts the spatio-temporal distributions of the chlorine species concentrations that tend to inhibit biofouling formation. We also demonstrate the applicability of the computational framework on a number of platforms, ranging from simple panels up to a full-scale boat. The computational results are compared with actual field experiments.

34th Computers and Information in Engineering Conference: AMS: High Performance Computing, With Emphasis on Advanced Computing Architectures and Cloud Computing

2014;():V01AT02A019. doi:10.1115/DETC2014-34069.

In this paper, a periodic boundary condition is implemented in a 3D unsteady finite volume solver for the incompressible Navier-Stokes equations on curvilinear structured grids containing moving immersed boundaries of arbitrary geometric complexity. The governing equations are discretized with second-order accuracy on a hybrid staggered/non-staggered grid layout, and the discrete equations are integrated in time via a second-order fractional step method. To resolve all the relevant scales in the flow accurately, a high-resolution curvilinear mesh is required, i.e., the simulations are computationally expensive. Therefore, high-performance parallel computing is essential to obtain results within reasonable time for practical applications. The main challenge in the parallel implementation of the periodic boundary condition is updating information at ghost nodes residing on different processors, and an efficient parallel algorithm is implemented for this ghost-node update. The parallel implementation is tested by comparing the results with analytical solutions, which are found to be in excellent agreement. The parallel performance of the solver with the periodic boundary condition is also investigated for different cases.
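
The serial analogue of the ghost-node update is a simple copy between the two ends of the periodic direction; in the parallel solver the same copy becomes a message exchange between the processors owning those ends. A minimal sketch:

```python
import numpy as np

# Interior cells are phi[1:-1]; phi[0] and phi[-1] are one-deep ghost layers.
n = 8
phi = np.empty(n + 2)
phi[1:-1] = np.arange(n, dtype=float)   # interior values

def update_periodic_ghosts(phi):
    phi[0]  = phi[-2]   # low ghost mirrors the last interior cell
    phi[-1] = phi[1]    # high ghost mirrors the first interior cell

update_periodic_ghosts(phi)
print(phi)
# In the distributed-memory solver this copy is replaced by a send/receive
# pair between the processors that own the two ends of the periodic direction.
```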

2014;():V01AT02A020. doi:10.1115/DETC2014-34387.

In this paper, we develop a fast time-stepping strategy for the Newmark-beta method; the latter is used extensively in structural dynamics. In particular, we speed up the repeated inversion of the linear systems in the Newmark-beta method by implementing and merging four distinct but complementary concepts: (1) voxelization, (2) assembly-free finite element analysis, (3) deflated conjugate gradient, and (4) adaptive local refinement. The resulting assembly-free deflated conjugate gradient (AF-DCG) version of the Newmark-beta is well-suited for large-scale problems, and can be easily ported to multi-core architectures. Numerical experiments demonstrate the efficacy of the proposed method.
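
For context, a plain dense-algebra Newmark-beta stepper is sketched below, using the standard average-acceleration parameters; the paper's contribution replaces the direct solve of the effective system with an assembly-free deflated conjugate gradient on voxelized, adaptively refined models.

```python
import numpy as np

def newmark_beta(M, C, K, F, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Implicit Newmark-beta for M a + C v + K u = F(t)."""
    a0, a1, a2 = 1 / (beta * dt**2), gamma / (beta * dt), 1 / (beta * dt)
    a3, a4, a5 = 1 / (2 * beta) - 1, gamma / beta - 1, dt / 2 * (gamma / beta - 2)
    u, v = u0.copy(), v0.copy()
    acc = np.linalg.solve(M, F(0.0) - C @ v - K @ u)
    Keff = K + a0 * M + a1 * C        # effective stiffness, factored repeatedly here;
    hist = [u.copy()]                  # the paper solves these systems with AF-DCG
    for n in range(1, nsteps + 1):
        rhs = (F(n * dt) + M @ (a0 * u + a2 * v + a3 * acc)
                         + C @ (a1 * u + a4 * v + a5 * acc))
        u_new = np.linalg.solve(Keff, rhs)
        acc_new = a0 * (u_new - u) - a2 * v - a3 * acc
        v = v + dt * ((1 - gamma) * acc + gamma * acc_new)
        u, acc = u_new, acc_new
        hist.append(u.copy())
    return np.array(hist)

# 1-DOF check: undamped unit oscillator, exact period 2*pi.
M = np.eye(1); C = np.zeros((1, 1)); K = np.eye(1)
traj = newmark_beta(M, C, K, lambda t: np.zeros(1),
                    np.array([1.0]), np.zeros(1), dt=0.01, nsteps=628)
print(traj[-1])   # close to the initial displacement after one period
```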

2014;():V01AT02A021. doi:10.1115/DETC2014-34393.

The primary computational bottleneck in finite element based solid mechanics is the solution of the underlying linear systems of equations. Among the various numerical methods for solving such linear systems, the deflated conjugate gradient (DCG) method is well-known for its efficiency and simplicity.

The first contribution of this paper is an extension of the "rigid-body agglomeration" concept used in DCG to a "curvature-sensitive agglomeration". The latter exploits classic Kirchhoff-Love theory for plates and Euler-Bernoulli theory for beams. We demonstrate that curvature-sensitive agglomeration is much more efficient for the highly ill-conditioned problems arising from thin structures.
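
To make the agglomeration idea concrete: in rigid-body agglomeration, each group of nodes contributes its rigid-body modes as columns of the deflation basis W. The sketch below builds the three 2D modes for one group (our illustration; the curvature-sensitive variant augments such bases with plate/beam bending-type modes):

```python
import numpy as np

def rigid_body_basis_2d(coords):
    """Deflation basis W for one agglomerated group of 2D nodes:
    columns span x-translation, y-translation and in-plane rotation."""
    n = coords.shape[0]
    W = np.zeros((2 * n, 3))
    W[0::2, 0] = 1.0                  # translate in x
    W[1::2, 1] = 1.0                  # translate in y
    W[0::2, 2] = -coords[:, 1]        # small rotation about the origin:
    W[1::2, 2] =  coords[:, 0]        #   (dx, dy) = (-y, x)
    return W

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
W = rigid_body_basis_2d(nodes)
print(W.shape)   # (8, 3): one block column per rigid-body mode
```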

The second contribution is a strategy for a limited-memory assembly-free DCG where neither the stiffness matrix nor the deflation matrix is assembled. The resulting implementation is particularly well suited for large-scale problems, and can be easily ported to multi-core architectures. For example, we show that one can solve a 50-million-degree-of-freedom system on a single GPU card equipped with 3 GB of memory.

Topics: Solid mechanics
2014;():V01AT02A022. doi:10.1115/DETC2014-34937.

Full-field measurement methods require digital image processing algorithms to identify the centroids of features in the image of a deforming structure and track them through subsequent video frames in order to establish displacement and strain measurements. Unfortunately, these image processing algorithms are the most computationally expensive tasks performed in such methods. In this work we present a set of new algorithms for identifying the centroids of image features that are shown to be orders of magnitude faster than conventional algorithms. These algorithms are based on efficient data structures and algorithmic flows tailored to fit optimally on shared-memory parallel architectures.
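
A conventional baseline for the centroid step, using SciPy's labeling and center-of-mass routines on a synthetic frame, is sketched below; the paper's algorithms are much faster replacements for this stage.

```python
import numpy as np
from scipy import ndimage

# Threshold, label connected components, then take intensity-weighted centroids.
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0          # two synthetic speckle features
frame[40:45, 30:34] = 1.0

mask = frame > 0.5
labels, nfeat = ndimage.label(mask)
centroids = ndimage.center_of_mass(frame, labels, range(1, nfeat + 1))
print(nfeat, centroids)            # these centroids are tracked across frames
```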

2014;():V01AT02A023. doi:10.1115/DETC2014-35676.

As processor clock speeds appear to have plateaued, parallelization has become an important way to speed up simulation. This paper discusses the problems of parallelizing, at the equation level, the task graphs that represent system-level physical models. Methods for estimating the runtimes of the equation-solving tasks are presented. The focus is on an interactive tool designed to parallelize the task graph semi-automatically and to generate the parallel simulation code automatically; in further research, this tool can be used to test automatic parallelizing algorithms. Several example models are parallelized, and the generated parallel programs are tested experimentally. The final results suggest that parallelization at the equation level is effective and that better performance can be expected for larger systems.
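
One common way to expose equation-level parallelism is to group the task graph into dependency levels, so that tasks within a level can be solved concurrently. The sketch below shows Kahn-style topological leveling on a hypothetical task graph (illustrative only; the paper's tool also weighs estimated task runtimes):

```python
from collections import defaultdict, deque

def level_schedule(deps):
    """Group tasks of a DAG into levels; tasks within a level share no
    dependencies and can run in parallel. deps maps task -> prerequisites."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    tasks = set(deps)
    for t, prereqs in deps.items():
        for d in prereqs:
            succ[d].append(t)
            indeg[t] += 1
            tasks.add(d)
    frontier = deque(t for t in tasks if indeg[t] == 0)
    levels = []
    while frontier:
        levels.append(sorted(frontier))
        nxt = deque()
        for t in frontier:
            for s in succ[t]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    nxt.append(s)
        frontier = nxt
    return levels

# Hypothetical equation-solving tasks and their data dependencies:
print(level_schedule({"f2": ["f1"], "f3": ["f1"], "f4": ["f2", "f3"]}))
# [['f1'], ['f2', 'f3'], ['f4']] -> f2 and f3 can be solved concurrently
```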

Topics: Simulation
34th Computers and Information in Engineering Conference: AMS: Inverse Problems in Science and Engineering

2014;():V01AT02A024. doi:10.1115/DETC2014-34265.

Deformation control is an important design problem in the stiffness design of structures, and it also enables a structure to be given a desired function. This paper proposes a non-parametric, or node-based, shape optimization method based on the variational method for controlling the static deformation of spatial frame structures. As the objective functional, we introduce the sum of squared error norms relative to the desired displacements on specified members. Under the assumption that each member varies in the out-of-plane direction to the centroidal axis, the shape gradient function and the optimality conditions are theoretically derived. The shape gradient function is applied to a gradient method in a function space with a Laplacian smoother. With this method, an optimal free-form frame structure with smoothness can be identified for a desired static deformation. The validity and effectiveness were verified through design examples.

Topics: Deformation , Shapes
2014;():V01AT02A025. doi:10.1115/DETC2014-34572.

The present paper describes a method for finding bead shapes in shell structures that increase stiffness, using a solution to a shape optimization problem. Variation of the shell structure in the out-of-plane direction is chosen as a non-parametric design variable. To create beads, the out-of-plane variation is restricted by using the sigmoid function. Mean compliance is used as the objective function, and the main problem is defined as a linear elastic problem for the shell structure. The Fréchet derivative of the mean compliance with respect to the out-of-plane variation is evaluated using the solutions of the main problem and an adjoint problem, the latter derived theoretically by the adjoint variable method. To solve the bead design problem, an iterative algorithm based on the H1 gradient method is used. Numerical results show the effectiveness of the method.

2014;():V01AT02A026. doi:10.1115/DETC2014-35274.

Optimization-based solutions to inverse problems involve coupling an analysis model, such as a finite element model, with a numerical optimization method. The goal is to determine a set of parameters that minimize an objective function evaluated by solving the analysis model. In this paper, we present an approach that dramatically reduces the computational cost of solving inverse problems in this way by replacing the original full order finite element model (FOM) with a reduced order model (ROM) that is both accurate and quick to compute. The reduced order model is constructed with basis functions generated using proper orthogonal decomposition of a set of solutions from the FOM, and a discrete Galerkin method is used to project the differential equation onto the basis functions. This approach allows us to transform the linear full order finite element model into an equivalent discrete ROM with far fewer unknowns. The method is applied to a parameter estimation problem in heat transfer. Specifically, we determine the parameters governing the magnitude and distribution of an unknown surface heat flux moving at a constant velocity across the surface of a solid bar of material. A finite element model was implemented in the commercial package COMSOL and a corresponding ROM was constructed. The ROM was coupled with an optimization algorithm to determine the parameter values that minimized the distance between the computed and target surface temperatures, where the target surface temperatures were generated as simulated measurements from the full order finite element model. Several optimization methods were used. The results show the approach can recover the parameters with high accuracy using twenty-seven FOM runs.
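
A minimal sketch of the POD/Galerkin mechanics for a linear system, with a stand-in diagonal operator instead of a real FOM (illustrative assumptions throughout):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = np.diag(np.arange(1.0, n + 1))        # stand-in full-order operator

# 1) Snapshots: FOM solutions for a few training right-hand sides.
snapshots = np.column_stack([np.linalg.solve(A, rng.random(n)) for _ in range(10)])

# 2) POD basis from the SVD of the snapshot matrix (99.99% energy criterion).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), 0.9999) + 1
Phi = U[:, :r]

# 3) Galerkin projection: an r x r system replaces the n x n one.
Ar = Phi.T @ A @ Phi
f = A @ (snapshots @ rng.random(10))      # rhs whose solution lies in the span
u_rom = Phi @ np.linalg.solve(Ar, Phi.T @ f)
u_fom = np.linalg.solve(A, f)
print(r, np.linalg.norm(u_rom - u_fom) / np.linalg.norm(u_fom))
```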

Topics: Heat flux
2014;():V01AT02A027. doi:10.1115/DETC2014-35365.

In order to reduce the demanding computational requirements of the numerical solution of heat transfer problems involving moving heat source deposition, we present an approach utilizing reduced order models based on proper orthogonal decomposition and the associated Galerkin projection. We describe the finite element implementation of the solution methodology for both the full order and the reduced order models, as well as the respective computational implementation details. Using this methodology, we performed a sensitivity analysis for a moving heat source problem to investigate the performance characteristics of the relevant reduced order model size, and we present the efficiency of the approach. We also demonstrate the efficiency of the reduced models for performing inverse analysis.

Topics: Heat
2014;():V01AT02A028. doi:10.1115/DETC2014-35637.

Existing friction laws for rubber-like materials are tuned on available experimental data. Once their parameters are identified, a sensitivity analysis is carried out in order to check their extrapolation and prediction capabilities. Although several friction laws at the micro scale are available in the literature, none is able to correctly predict the friction law at the macro scale for all tire working conditions. In the present paper, a thorough review of the most advanced local friction models, i.e., the Persson and Klüppel models, is carried out. Persson's model is then integrated with a limiting criterion, and an adhesive contribution is added to improve the prediction of the friction law at the macro scale.

34th Computers and Information in Engineering Conference: AMS: Modeling and Simulation of Humans in Engineering

2014;():V01AT02A029. doi:10.1115/DETC2014-34192.

Metabolic energy expenditure (MEE) is commonly used to characterize human motion. In this study, a general joint-space dynamic model of MEE is developed by integrating the principles of thermodynamics and multibody system dynamics in a joint-space model that enables the evaluation of MEE without the limitations inherent in experimental measurements or muscle-space models. Muscle-space energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained optimization algorithm is used to estimate the model parameters from experimental walking data. The joint-space parameters estimated directly from active subjects provide reliable estimates of the trend of the cost of transport at different walking speeds. The quantities predicted by this model, such as cost of transport, can be used as strong complements to experimental methods to increase the reliability of results and yield unique insights for various applications.

Topics: Dynamic models
2014;():V01AT02A030. doi:10.1115/DETC2014-34232.

Posture prediction is a key component in digital human modeling and simulation, and deterministic optimization-based posture prediction formulations have been proposed. However, there exist uncertainties in human anthropometric parameters (height, link lengths, segment centers of mass, moments of inertia, etc.) and environment parameters (location, interaction force) that affect the predicted posture. This paper studies the effect of such uncertainty on predicted posture. The single-loop reliability-based design optimization (RBDO) method is adapted to predict posture under uncertainty, with all random parameters assumed to be normally distributed. A 24-degree-of-freedom (DOF) upper body model is used. In this pilot study, one link length and one joint angle are treated as random parameters; the other design variables and parameters are deterministic. Using the empirical rule, three cases are investigated for posture prediction. The SNOPT solver is employed to solve the optimization problem. Comparison with the deterministic optimization result and experimental data shows that the reliability index affects the predicted posture to some extent.

Topics: Uncertainty
2014;():V01AT02A031. doi:10.1115/DETC2014-34252.

Successful execution of man-machine systems requires consistent human operations and reliable machine performance. Compared with the abundant resources on machine reliability improvement, human operational uncertainty, which has a direct impact on man-machine systems, has received little attention. Most studies and formal documentation only provide suggestions to alleviate human uncertainty instead of specific methods to ensure operation accuracy in real time. In this paper we present a general framework for a reliable system that compensates for human operating uncertainty in real time. This system learns the responses of an operator, constructs the user's behavior pattern, and then develops new compensated instructions to ensure completion of the desired tasks, thereby improving the reliability of the entire man-machine system. The effectiveness of the proposed framework is demonstrated via the development of an intelligent vehicle parking assist. Existing parking assist systems do not account for driver error, nor do they consider realistic urban parking spaces with obstacles. Our proposed system computes a theoretical path once a parking space is identified. Audio commands are then sent to the driver with real-time compensation for small deviations from the path. When an operation deviates too far from the desired path to be compensated, a new set of instructions is recomputed using the collected uncertainty. Various real-world urban parking scenarios are used to demonstrate the effectiveness of the proposed method. Our system is able to park a vehicle in a space as small as 1.07 times the vehicle length with up to 30% operator uncertainty. Results also show that the compensation scheme not only allows more diverse operators to achieve a desired goal, but also ensures a higher reliability of meeting such goals.

2014;():V01AT02A032. doi:10.1115/DETC2014-34324.

Human stair ascent and descent are simulated in this work using a skeletal digital human model with 55 degrees of freedom (DOFs). A hybrid predictive dynamics approach is used to predict the stair-climbing motion with weapons and backpacks. In this process, the model predicts joint dynamics using optimization schemes and task-based physical constraints. The results indicate that the model can realistically match human motion and ground reaction force data during stair-climbing tasks. This capability can be applied in the human health domain, for example in leg prosthesis design.

2014;():V01AT02A033. doi:10.1115/DETC2014-35427.

The aim of this study was to measure changing pressures during Tuohy epidural needle insertions in obstetric parturients of various BMIs. This has identified correlations between BMI and epidural pressure. We also investigated links between BMI and the thicknesses and depths of ligaments and the epidural space, as measured from MRI and ultrasound scans. To date there have been no studies relating epidural pressure and ligament thickness changes to varying body mass indices (BMI).

A further goal, following measurement of pressure differences among patients of various BMIs, was to enable development of a patient-specific epidural simulator, which has not been achieved before. The trial also assessed the suitability of our in-house developed wireless pressure measurement device for use in vivo. Previously, we conducted a porcine needle insertion trial to validate the measurement system.

Results showed that, for each group, average pressures during insertion decrease as BMI increases. Pressure measurements obtained from the patients were matched to tissue thickness measurements from MRI and ultrasound scans. The mean loss of resistance (LOR) pressure in each group reduces as BMI increases. Variation in the shape of the pressure graphs was noticed between the two epiduralists performing the procedure, suggesting each anaesthetist may have a signature graph shape; this is a new finding which offers potential use in epidural training and assessment. Insertions performed by the first epiduralist had a higher pressure range than insertions performed by the second.

34th Computers and Information in Engineering Conference: AMS: Simulation in Manufacturing

2014;():V01AT02A034. doi:10.1115/DETC2014-34759.

Harmonic drives can exhibit highly nonlinear dynamic behavior. In order to find a way to simulate the operating process of a harmonic drive and evaluate its performance, this paper reviews different methods of tooth contact analysis (TCA) and finds that explicit dynamics, as a newly applied tool for TCA, is the most suitable one: the harmonic drive's high contact ratio and uncertain contact boundary match explicit dynamics' explicit time-integration algorithm and trajectory-based contact detection. A harmonic drive with a new tooth profile has been modeled in Ansys Workbench and solved with the explicit dynamics solver. The deformation of the simulation model has been compared with theoretical calculation and experimental observation to confirm that the model reflects the harmonic drive's elastic behavior correctly, and the nonlinear behavior of the harmonic drive, including the high contact ratio and the output's hysteresis effect, can be predicted by explicit dynamics. Explicit dynamics therefore offers a new way to simulate the harmonic drive's working process, and as a general approach to gear drives, this method could be widely adopted by the gear industry in the future. Furthermore, the contact ratio and root fillet stress results show that the new tooth profile can significantly reduce stress concentration and thereby increase the fatigue life of the harmonic drive.

2014;():V01AT02A035. doi:10.1115/DETC2014-34803.

At a time when major technological advancements are happening at incredible rates and demands for next-generation systems are constantly growing, failure analysis methods must advance as well. Performance and safety are always top concerns for high-risk complex systems; it is therefore important to explore new failure analysis methods that can provide more useful and comprehensive failure information as early as possible, particularly during early design phases when detailed models might not yet exist. This paper proposes a qualitative, function-based failure analysis method for early design phases that is capable of analyzing potential failure modes not only for physical components, but also for any manufacturing processes that might cause failures. The proposed method is first described in general and then applied in a case study of a proposed design for a nanochannel DNA sequencing device. Lastly, the paper discusses how more advanced and detailed analyses can be incorporated into this approach during later design phases, when more failure information becomes available.

2014;():V01AT02A036. doi:10.1115/DETC2014-34873.

Manufacturing systems, such as the flexible manufacturing system (FMS), have recently changed to accommodate the production of various types and volumes of products. In the FMS, however, it is difficult to change the layout of the factory and solve problems that arise. Therefore, the importance of automated guided vehicles (AGVs) is increasing, because they can respond flexibly to changes in facilities and factory layouts. However, no previous studies have taken into account the many indefinite and accidental elements of AGV systems. Applying knowledge from one domain to a different domain has been drawing much attention; such activity is called a mimetic solution. We investigated applying traffic engineering knowledge about passenger transport to the conveyance of AGVs, and we propose an autonomous conveyance system for AGVs based on taxi transportation strategies to solve indefinite and accidental problems. The system focuses on applying traffic engineering knowledge regarding the flexible taxi system: a taxi is a transport unit in a traffic system with high flexibility in traveling routes and arrival/departure points.

We also applied the waiting mode of taxis at stations where AGVs pick up and drop off products (P/Ds) as AGV rules in our system, and investigated the system's effectiveness and adaptability to schedule changes in factories. To adopt a waiting mode as an AGV rule, we determine the arrival/departure points at which AGVs wait while changing the number of AGVs and the product intervals. In addition, collisions between AGVs must be considered: if a collision occurs, the factory schedule has to be changed, so we took AGV collisions into consideration. We estimated the matching time, conveyance efficiency, and number of approaches as assessment functions. The matching time is the period between when a load is generated and when it is received; conveyance efficiency is the ratio of total distance to distance traveled while empty; and the number of approaching AGVs denotes the risk of collision. Specifically, we discuss the effect of the number of AGVs on these parameters by considering the ratio of the number of AGVs to the number of P/Ds. We demonstrated that, with a suitable number of AGVs, our system achieves high conveyance efficiency and reduces the number of approaches without decreasing the matching time.

2014;():V01AT02A037. doi:10.1115/DETC2014-34986.

The dimensional accuracy of finished ceramic components depends upon the precise control of the unfired ceramic body prior to sintering. One approach for creating precise geometries is the fugitive phase approach. In the fugitive phase approach, the fugitive phase is a sacrificial material that can be removed to form channels in the finished ceramic component. In this paper, the authors computationally examine the fugitive phase approach; in particular, the lamination step of the fugitive phase approach is modeled. In the lamination step the unfired ceramic phases are combined with the fugitive phases through the application of pressure. For this research, the unfired ceramic phase consists of tape cast mullite and the fugitive phase is paper. These phases are laminated together in a die press to form a multilayer material. The compression of the die press causes pressure gradients, viscoelastic deformation, and rebounding of the unfired ceramic phases. In addition, the die press can cause movement of the fugitive phase pieces leaving unfilled voids. Three dimensional modeling is necessary to accurately capture the movement of the fugitive phase pieces. In this work the authors examine the viscoelastic deformation of the unfired ceramic phase, movement of the fugitive phase, the creation and filling of voids, pressure gradients, and the rebounding that occurs when the unfired ceramic body is removed from the die press. The information obtained from computational simulations will be used to help direct experimental investigations of the fugitive phase approach for fabrication of complex ceramic components.

2014;():V01AT02A038. doi:10.1115/DETC2014-35136.

To achieve a precise and controlled laser process, a thorough analysis of the thermal behavior of the material is necessary. Knowledge of the thermal cycles is important for ascertaining suitable processing parameters, thus improving surface properties when the alloys are laser irradiated. In the present paper, a numerical simulation of the laser hardening process has been developed using the finite element (FE) code ABAQUS™ to solve the heat transfer equation inside the treated material (AISI 4140 steel). The thermal analysis is based on Jaeger's classical moving heat source method, in which the laser beam is considered a moving plane (band/disc) heat source and the target material a semi-infinite solid. However, the FE model used to solve the governing equation does not directly accommodate the moving nature of the heat source. A reasonable approximation is to divide the laser travel over the substrate into many small time/load steps and apply variable flux and boundary conditions in each step, which approximates the quasi-steady-state phenomena over the series of steps that make up the complete laser travel. This paper investigates the effects of the choice of time/load steps on the temperature evolution as well as on the computing times of the process.
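
A 1D explicit sketch of the stepped moving-source approximation is given below: the Gaussian flux is frozen at the current source position for each time/load step, mimicking the stepped laser travel. All values are order-of-magnitude placeholders, and the paper's ABAQUS model is a full 3D analysis.

```python
import numpy as np

L, n = 0.05, 251                    # bar length (m) and grid points
dx = L / (n - 1)
alpha, k = 1.2e-5, 42.0             # diffusivity (m^2/s), conductivity (W/(m K))
dt = 0.4 * dx ** 2 / alpha          # below the explicit stability limit
v, q0, rbeam = 0.01, 2e6, 2e-3      # travel speed (m/s), peak flux (W/m^2), beam radius (m)

T = np.full(n, 293.0)
x = np.linspace(0.0, L, n)
t = 0.0
while t < L / v:
    xs = v * t                                        # current source position
    q = q0 * np.exp(-((x - xs) / rbeam) ** 2)         # flux frozen for this step
    lapT = np.zeros(n)
    lapT[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    T += dt * (alpha * lapT + alpha / k * q / dx)     # flux absorbed in a dx-thick layer
    t += dt
print(T.max())
```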

34th Computers and Information in Engineering Conference: AMS: Uncertainty Quantification in Simulation and Model Verification and Validation

2014;():V01AT02A039. doi:10.1115/DETC2014-34051.

This paper introduces a novel approach for reliability assessment with dependent variables. In this work, the boundary of the failure domain for a computational problem with expensive function evaluations is approximated using a support vector machine and an adaptive sampling scheme. The approximation is sequentially refined using a new adaptive sampling scheme referred to as generalized "max-min", which efficiently targets high probability density regions of the random space. This is achieved by modifying an adaptive sampling scheme originally tailored for deterministic spaces (explicit space design decomposition). In particular, the approach can handle any joint probability density function, even when the variables are dependent; in the latter case, the joint distribution may be obtained from a copula. In addition, the uncertainty in the probability of failure estimate is quantified using bootstrapping: a bootstrapped coefficient of variation of the probability of failure serves as an estimate of the true error to determine convergence. The proposed method is applied to analytical examples and a beam bending reliability assessment using copulas.
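
The convergence check can be illustrated in a few lines: resample the fail/safe indicators with replacement and form the coefficient of variation of the resampled failure probabilities. The labels below are stand-ins; in the method they come from the SVM approximation of the failure boundary.

```python
import numpy as np

rng = np.random.default_rng(42)
fails = rng.random(20_000) < 0.012      # stand-in fail/safe labels (True = fail)

pf_hat = fails.mean()
boot = np.array([rng.choice(fails, size=fails.size, replace=True).mean()
                 for _ in range(200)])
cov = boot.std(ddof=1) / boot.mean()
print(f"pf = {pf_hat:.4f}, bootstrap CoV = {cov:.3f}")  # small CoV -> converged
```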

Topics: Reliability
2014;():V01AT02A040. doi:10.1115/DETC2014-34543.

The Kalman filter has been widely applied for state identification in controllable systems. As a special case of the hidden Markov model, it is based on the assumptions of linear dependency relationships and Gaussian noise. The classical Kalman filter does not differentiate systematic error from the random error associated with observations. In this paper, we propose an extended Kalman filtering mechanism based on generalized interval probability, where systematic error is represented by intervals, state and observable variables are random intervals, and interval-valued Gaussian distributions model the noise. The prediction and update procedures in the new mechanism are derived. A case study of auto-body side frame assembly is used to illustrate the developed mechanism.
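
As a point of departure, one classical (point-valued) Kalman predict/update cycle is sketched below; the paper's mechanism generalizes the state, covariance and noise terms to interval-valued quantities.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One classical Kalman predict/update cycle for x' = A x + w, z = H x + v."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    Kg = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + Kg @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - Kg @ H) @ P_pred
    return x_new, P_new

# Scalar random walk observed with noise:
x, P = np.zeros(1), np.eye(1)
A = H = np.eye(1); Q = 1e-4 * np.eye(1); R = 0.1 * np.eye(1)
for z in [0.9, 1.1, 1.0, 0.95]:
    x, P = kalman_step(x, P, np.array([z]), A, H, Q, R)
print(x, P)
```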

2014;():V01AT02A041. doi:10.1115/DETC2014-35010.

To date, model and parameter uncertainties have generally been overlooked by the majority of researchers in the field of battery study. As a consequence, the accuracy of state-of-charge (SOC) estimation is dominated by the model fidelity and may vary from cell to cell. This paper proposes a systematic framework with associated methodologies to quantify battery model and parameter uncertainties for more effective battery SOC estimation. The framework is also generally applicable to estimating other battery performance measures of interest (e.g., capacity and power capability). There are two major benefits of the proposed framework: (i) consideration of cell-to-cell variability, and (ii) improvement of the low-fidelity model's accuracy to a level comparable to the high-fidelity model's without sacrificing computational efficiency. A case study is used to demonstrate the effectiveness of the proposed framework.

2014;():V01AT02A042. doi:10.1115/DETC2014-35126.

Reliable simulation protocols supporting integrated computational materials engineering require uncertainty to be quantified. In general, two types of uncertainty are recognized: aleatory uncertainty is inherent randomness, whereas epistemic uncertainty is due to lack of knowledge. Aleatory and epistemic uncertainties need to be differentiated in validating multiscale models, where measurement data for unconventionally small or large systems are scarce, or vary greatly in form and quality (i.e., sources of epistemic uncertainty). In this paper, a recently proposed generalized hidden Markov model is used for cross-scale and cross-domain information fusion under the two types of uncertainty. The dependency relationships among the observable and hidden state variables at multiple scales and in multiple physical domains are captured using generalized interval probability. The update of imprecise credence and model validation are based on a generalized interval Bayes' rule. The application to molecular dynamics simulation of the irradiation of Fe is demonstrated.

34th Computers and Information in Engineering Conference: AMS/SEIKM: Design and Simulation for Additive Manufacturing

2014;():V01AT02A043. doi:10.1115/DETC2014-34067.

Additive manufacturing processes are capable of printing parts of any shape and complexity, and the fabricated parts require minimal human intervention. Process planning decisions play an important role in making sure the fabricated parts meet the desired specifications, including build time and cost. A quick and unified approach to quantifying manufacturing build time, accuracy, and cost in real time has been lacking. In the present research, a generic and near real-time framework for unified additive manufacturing process planning is presented. We have developed computational geometric solutions to estimate tight upper bounds for manufacturing process planning decisions that can be analyzed in almost real time. Results of the developed computational approach are compared against optimized process plans to ensure its applicability. Case studies comprising numerous parts with varying shapes and application areas are also outlined.

2014;():V01AT02A044. doi:10.1115/DETC2014-34222.

Additive manufacturing, or 3D printing, is the process of building three-dimensional solid shapes by accumulating material laid out in sectional layers. Additive manufacturing has been recognized for enabling production of complex custom parts that are difficult to manufacture otherwise. However, the dependence on build orientation and the physical limitations of printing processes invariably lead to geometric deviations between manufactured and designed shapes that are usually evaluated only after manufacture. In this paper, we formalize the measurement of such deviations in terms of a printability map that simulates the printing process and partitions each printed layer into disjoint regions with distinct local measures of size. We show that manufacturing capabilities such as printing resolution, and material-specific design recommendations such as minimal feature sizes, may be coupled in the printability map to evaluate expected deviations before manufacture. Furthermore, we demonstrate how partitions with size measures below the required resolution may be modified using properties of the medial axis transform, and we use the corrected printability map to construct a representation of the manufactured model. We conclude by discussing several applications of the printability map for additive manufacturing.
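
One simple way to flag sub-resolution regions in a printed layer, in the spirit of the printability map, is a morphological opening with a structuring element of the printable radius: features that vanish under the opening are thinner than the printable feature size. This is an illustrative check, not the paper's exact construction.

```python
import numpy as np
from scipy import ndimage

layer = np.zeros((60, 60), dtype=bool)
layer[10:50, 10:50] = True
layer[28:31, 50:58] = True             # a 3-pixel-wide tab

r = 3                                   # assumed printable radius in pixels
y, x = np.ogrid[-r:r + 1, -r:r + 1]
disk = x ** 2 + y ** 2 <= r ** 2        # disk structuring element

printable = ndimage.binary_opening(layer, structure=disk)
too_thin = layer & ~printable           # regions that fall below the feature size
print(too_thin.sum(), "pixels fall below the printable feature size")
```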

Commentary by Dr. Valentin Fuster
2014;():V01AT02A045. doi:10.1115/DETC2014-34383.

A salient feature of additive manufacturing is that the cost of fabrication is, to a large extent, independent of geometric complexity. This opens new opportunities for custom-designing parts at both the macro- and micro-level. An elegant and powerful method of designing custom parts is topology optimization.

While the theory of topology optimization is well understood, current methods can be extraordinarily expensive. The focus of this paper is on efficient microstructural topology optimization for 3D printing. In particular, the computational bottlenecks in microstructural topology optimization are identified. Then a framework is developed that not only eliminates these bottlenecks, but also incorporates other significant improvements. The framework is demonstrated through numerical experiments involving microstructures with millions of degrees of freedom, using multi-core CPUs and NVIDIA GPUs.
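
For context, here is a minimal sketch of the classic optimality-criteria density update at the heart of SIMP-style topology optimization. The expensive finite element solve that supplies the compliance sensitivities dc, which is where the bottlenecks discussed above arise, is assumed and not shown; placeholder values stand in for it.

import numpy as np

def oc_update(x, dc, volfrac, move=0.2, eta=0.5):
    """x: element densities in (0, 1]; dc: compliance sensitivities (<= 0).
    Bisect the Lagrange multiplier until the volume constraint is met."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        x_new = np.clip(x * (-dc / lmid) ** eta,
                        np.maximum(1e-3, x - move),   # move limits and bounds
                        np.minimum(1.0, x + move))
        if x_new.mean() > volfrac:
            l1 = lmid
        else:
            l2 = lmid
    return x_new

x = np.full(1000, 0.5)
dc = -np.random.rand(1000)               # placeholder sensitivities (no FE solve)
print(oc_update(x, dc, volfrac=0.5).mean())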

Commentary by Dr. Valentin Fuster
2014;():V01AT02A046. doi:10.1115/DETC2014-34463.

The left ventricle (LV) of the human heart receives oxygenated blood from the lungs and pumps it throughout the body via the aortic valve. Characterizing the LV geometry, its motion, and the ventricular flow is critical in assessing the heart’s health. An automated method has been developed in this work to generate a three-dimensional (3D) model of the LV from multiple-axis echocardiography (echo). Image data from three long-axis sections and a basal section are processed to compute spatial nodes on the LV surface. The generated surfaces are output in a standard format such that they can be imported into the curvilinear immersed boundary (CURVIB) framework for numerical simulation of the flow inside the LV. The 3D LV model can be used to better understand ventricular motion, and the simulation framework provides a powerful tool for studying left ventricular flows on a patient-specific basis. Future work will incorporate data from additional cross-sectional images.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A047. doi:10.1115/DETC2014-34566.

Recent advances in additive manufacturing technology enable the development of complicated structural systems. For instance, meso-scale lattice structures (MSLS) have significant potential in lightweight applications. Determining appropriate designs for these truss-like cellular structures can be challenging due to their geometric complexity and the prohibitive computational costs of the design process. In this research, a new design method, namely the relative density mapping (RDM) method, is developed that utilizes the material density information obtained from topology optimization of continuum structures. The proposed method uses this by-product information from the topology optimization results to design cellular structures after the topology optimization has decided the layout of the structure. Since the proposed method does not require additional optimization or black-box simulation procedures, it can drastically reduce the computational cost of designing the cellular structures. Moreover, the developed method is applicable to arbitrarily complicated shapes of cellular structures. The efficacy of the developed method is compared with that of existing methods. The presented results demonstrate significant potential for designing truss-like cellular structures based on by-product information from the topology optimization procedure.
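
A toy sketch of a relative-density-to-strut-size mapping in the spirit of RDM; the cell-averaging rule and the linear sizing formula are assumptions for illustration, not the authors' exact method.

import numpy as np

def strut_radii(element_density, cells_shape, r_min=0.2, r_max=1.0):
    """Average continuum densities over each lattice cell, then size the
    cell's struts so relative density is preserved (linear rule assumed)."""
    ny, nx = cells_shape
    rho = element_density.reshape(ny, element_density.shape[0] // ny,
                                  nx, element_density.shape[1] // nx).mean(axis=(1, 3))
    return r_min + (r_max - r_min) * np.clip(rho, 0.0, 1.0)

densities = np.random.rand(40, 40)        # stand-in for topology optimization output
print(strut_radii(densities, cells_shape=(8, 8)).shape)   # one radius per cell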

Commentary by Dr. Valentin Fuster
2014;():V01AT02A048. doi:10.1115/DETC2014-34683.

In this paper, a two-step homogenization method is proposed and implemented for evaluating the effective mechanical properties of lattice-structured material fabricated by the material extrusion additive manufacturing process. To account for the characteristics of the additive manufacturing process in the estimation procedure, the levels of scale for homogenization are divided into three stages: the layer deposition, structural element, and lattice structure levels. The method consists of two transformations between stages. In the first step, the transformation between the layer deposition and structural element levels is used to find the effective geometric and material properties of the structural elements in the lattice structure. In the second step, a method to estimate the effective mechanical properties of the lattice material is presented, which uses a unit cell and is based on the discretized homogenization method for periodic structures. The method is implemented for a cubic lattice structure and compared to experimental results for validation purposes.
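
For reference, the second step presumably builds on the standard periodic homogenization result for a unit cell $Y$, in which the effective stiffness is

$$C^{H}_{ijkl} = \frac{1}{|Y|}\int_{Y} C_{pqrs}\left(\varepsilon^{0(ij)}_{pq}-\varepsilon_{pq}(\chi^{ij})\right)\left(\varepsilon^{0(kl)}_{rs}-\varepsilon_{rs}(\chi^{kl})\right)\, dY,$$

where $\varepsilon^{0(ij)}$ are unit test strains and $\chi^{ij}$ the periodic fluctuation fields. The paper's contribution is the two-step coupling of this with the layer-deposition level, which the standard formula above does not capture.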

Commentary by Dr. Valentin Fuster
2014;():V01AT02A049. doi:10.1115/DETC2014-34908.

Polymerization shrinkage and thermal cooling effects have been identified as two major factors leading to curl distortion in the Stereolithography (SLA) process. In this paper, the curing temperatures of built layers in the mask-image-projection-based Stereolithography (MIP-SL) process are investigated using a high-resolution infrared (IR) camera. The curing temperatures of built layers under different exposure strategies, including varying exposure time, grayscale levels, and mask image patterns, have been studied. The curl distortions of a test case under the various exposure strategies have been measured and analyzed. It is shown that, by decreasing the curing temperature of built layers, the exposure strategies using grayscale levels and mask image patterns can effectively reduce curl distortion. In addition to curl distortion control, the curing temperature study also provides a basis for curl distortion simulation in the MIP-SL process.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A050. doi:10.1115/DETC2014-35170.

To enable on-demand process control of additive manufacturing processes that achieves component performance by design, we introduce, from a modeling and simulation perspective, a method for identifying the relevant modeling and simulation challenges, with the aim of motivating research that addresses them. We first present the abstraction of the multiscale modeling processes connecting process control with functional performance, from both the forward and inverse perspectives. We then introduce a brief ontology describing the ordering of dependency and membership of all components of a model in order to isolate the potential areas where challenges can be exposed, and select some features that are usually ignored by the community during modeling. In particular, we use a simple mass and heat transfer problem relevant to layered additive manufacturing to demonstrate the implications and dangers of ignoring the process's dependence on deposition path history.
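
A toy 1D illustration of the path-history dependence discussed above (not the paper's model): explicit finite differences with a moving heat source show that two deposition paths visiting the same cells in opposite orders leave different final temperature fields. All parameters are illustrative.

import numpy as np

def deposit(path, n=50, steps_per_cell=40, alpha=0.2, q=5.0):
    """March a heat source along `path`, diffusing between deposits."""
    T = np.zeros(n)
    for cell in path:
        for _ in range(steps_per_cell):
            T[cell] += q                                        # torch heat input
            T[1:-1] += alpha * (T[2:] - 2 * T[1:-1] + T[:-2])   # explicit diffusion
    return T

left_to_right = deposit(range(50))
right_to_left = deposit(range(49, -1, -1))
print(np.abs(left_to_right - right_to_left).max())  # clearly nonzero: path matters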

Commentary by Dr. Valentin Fuster
2014;():V01AT02A051. doi:10.1115/DETC2014-35409.

Though the advanced manufacturing capabilities offered by additive manufacturing (AM) have been known for several decades, industry adoption of AM technologies has been relatively slow. Recent advances in modeling and simulation of AM processes and materials are providing new insights to help overcome some of the barriers that have hindered adoption. However, these models and simulations are often application specific, and few are developed in an easily reusable manner. Variations are compounded because many models are developed as independent or proprietary efforts, and input and output definitions have not been standardized. To further realize the potential benefits of modeling and simulation advancements, including predictive modeling and closed-loop control, more coordinated efforts must be undertaken. In this paper, we advocate a more harmonized approach to model development, through classification and metamodeling that will support model composability, reusability, and integration. We review several types of AM models and use direct metal powder bed fusion characteristics to provide illustrative examples of the proposed classification and metamodel approach. We describe how a coordinated approach can be used to extend modeling capabilities by promoting model composability. As part of future work, a framework is envisioned to realize a more coherent strategy for model development and deployment.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A052. doi:10.1115/DETC2014-35559.

In today’s Additive Manufacturing (AM), a part is typically manufactured by layer-by-layer addition of material from a Computer Aided Design (CAD) model. Traditionally, the CAD model is transferred to the RP system after conversion to the STereoLithography (STL) format, a triangulated tessellation of the CAD model, and is then sliced using different slicing algorithms and machine constraints. The inherent uncertainties in this process have led to the development of adaptive direct slicing techniques. Several adaptive slicing techniques exist, but only a few studies have computed an actual surface error factor or considered the cost aspect of the slicing algorithm. This paper proposes a new adaptive algorithm to compute a surface error factor and to find a cost-effective approach to slicing. The adaptive slicing algorithm dynamically calculates slice thickness based on an allowable threshold for surface integrity error, optimizing cost and time. The paper also provides a comparative study of adaptive models previously developed by the authors based on cusp height and surface integrity.
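
For context, here is a minimal sketch of the standard cusp-height rule that adaptive slicing schemes of this kind start from (the paper's surface-error factor and cost model are richer); all values are illustrative.

def layer_thickness(cusp_tol, nz, t_min=0.05, t_max=0.3):
    """Cusp-height rule: t = c / |n_z|, where nz is the z-component of the
    local surface normal; clamp to the machine's printable range (mm)."""
    if abs(nz) < 1e-6:                   # vertical wall: no stair-stepping
        return t_max
    return max(t_min, min(t_max, cusp_tol / abs(nz)))

# Flatter surfaces (|n_z| near 1) force thinner layers:
for nz in (1.0, 0.5, 0.1):
    print(nz, layer_thickness(cusp_tol=0.05, nz=nz))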

Commentary by Dr. Valentin Fuster

34th Computers and Information in Engineering Conference: Computer-Aided Product and Process Development (General)

2014;():V01AT02A053. doi:10.1115/DETC2014-34135.

To pursue high performance 5-axis CNC milling in industry, it is crucial to simulate each specific milling process in high fidelity beforehand, modeling the machined surfaces and predicting the cutting forces during process planning. However, the kernel technique, representation of the un-deformed chip geometry removed by the cutter in 5-axis milling, is far from mature. To solve this problem, this paper presents a generic approach to representing un-deformed chip geometry mathematically in 5-axis CNC milling. The unique features of this research are: (1) the machine tool kinematics chain is investigated and a 5-axis CNC interpolation algorithm is adopted to establish the tool kinematics model, and (2) the closed-form equation of the un-deformed chip geometry representation is derived based on the machined shape being the envelope of a group of ellipses. This approach can model a machined surface accurately and efficiently, and can be used to evaluate machined surface quality and machining parameters. It can greatly promote the technique of high performance 5-axis CNC milling.
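
For reference, the closed-form derivation presumably rests on standard envelope theory: for a one-parameter family of cutter cross-section ellipses $E(x, y; t) = 0$, the enveloping machined boundary is obtained by solving simultaneously

$$E(x, y; t) = 0, \qquad \frac{\partial E}{\partial t}(x, y; t) = 0,$$

and eliminating the family parameter $t$. The paper's specific contribution is the closed-form expression this yields for the 5-axis tool motion.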

Commentary by Dr. Valentin Fuster
2014;():V01AT02A054. doi:10.1115/DETC2014-34187.

In recent years, there has been a significant push towards “Design for X” (DFX) in modern engineering design practice. One such category that has received a large amount of attention is design for manufacturing. When conducting design for manufacturing, a common aid in the design process is a series of design for manufacturing guidelines. While the use of these guidelines, as well as other DFX guidelines, has been shown to be effective, little research has been done with the intent of standardizing the guidelines or making them more readily available. In this paper, the authors propose a Design for Manufacturing database tool to assist in the instruction of design for manufacturing guidelines. The development of the database model is discussed, as well as the interface used to interact with the database. The tool is then evaluated, and conclusions are drawn regarding the effectiveness of the database and future work to increase its functionality. One major addition discussed is the adaptation of the database for use in industry, and not just in education, to assist in the engineering design process.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A055. doi:10.1115/DETC2014-34329.

In manufacturing process planning, it is critical to ensure that the part generated from a process plan complies with the tolerances specified by designers to meet engineering constraints. Manufacturing errors are stochastic in nature and are introduced at almost every stage of executing a plan, for example due to tooling inaccuracy, location misalignment, or clamping distortion. Furthermore, these errors accumulate or ‘stack up’ as the manufacturing process progresses, inevitably producing a part that varies from the designed model. The resultant variation should be within the prescribed design tolerances. In this work, we present a novel approach for validating process plans using 3D tolerance stack-up analysis, representing variations of nominal features in terms of the extents of their degrees of freedom within design and manufacturing tolerance zones. We show how the manufacturing error stack-up can be effectively represented by composition and intersection of these transformations. We demonstrate several examples with different tolerance specifications to show the applicability of our approach to process planning.
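
An illustrative reduction of the stack-up idea to a single degree of freedom: each stage's variation is an interval extent, stages compose by summation, and the composed stack must lie inside the design tolerance zone. The stage names and values below are hypothetical, and the paper's full 3D treatment composes transformations rather than scalar intervals.

def compose(stages):
    """Sum per-stage (lo, hi) intervals: worst-case 1D stack-up."""
    return sum(s[0] for s in stages), sum(s[1] for s in stages)

def within(zone, stack):
    """Does the composed stack-up interval fit inside the design zone?"""
    return zone[0] <= stack[0] and stack[1] <= zone[1]

stages = [(-0.02, 0.02),    # fixturing (mm, hypothetical)
          (-0.01, 0.03),    # tool deflection
          (-0.015, 0.015)]  # locating datum shift
stack = compose(stages)
print(stack, "OK" if within((-0.05, 0.05), stack) else "violates tolerance")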

Commentary by Dr. Valentin Fuster
2014;():V01AT02A056. doi:10.1115/DETC2014-34446.

Composite parts manufacture is an increasingly important part of aerospace production. A wide range of techniques are in practice for fabricating composite structures. Structural fillers are commonly used, one of these being honeycomb core. In addition to providing structural strength and stiffness, they also reduce weight and can impart other desirable properties such as noise and vibration damping and the ability to absorb impacts from equipment failure. When the shape of the core needs to be accurately controlled, it is fixtured in its expanded form and machined using 5-axis routers with specialized cutters. For large parts such as wing edges or helicopter rotor blades, this requires machines with large tables to accommodate the size of the work piece. There are definite capital equipment savings, operation and maintenance savings, floor space reductions, and possibly machining time reductions to be achieved by machining core in its unexpanded form, when it occupies a fraction of the expanded length. This is difficult to do for all but the simplest surfaces because of the challenges of predicting and controlling excess material and gouging of the part. The paper motivates the development of CAD/CAM techniques for this domain by introducing approaches for modeling the collapsed core of a part from its expanded form. It further investigates the machining errors that can occur from tool paths generated by common CAM systems, using NC verification technology.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A057. doi:10.1115/DETC2014-34492.

The manufacturing industry is moving towards a truly global arena. Organizations are adopting the philosophy of “design anywhere, manufacture anywhere, and sell anywhere”. Global operations with local focus have become the core of an organization’s strategy. Organizations are trying to maintain a vast product portfolio with mass customization to meet customers’ increasing demand for personalized products. While expanding the product portfolio and bringing new products to market, the question of sustaining the product across its life cycle is often missed. With regulatory standards becoming more stringent, product maintenance and retirement are becoming challenging and costly. The concept of the “circular economy” extends the life of the product and individual parts beyond the traditional end of life through re-fabrication, reconditioning, and recycling of parts. Part-level detailing is therefore becoming very important at the design stage. This provides huge growth opportunities for organizations, but comes with challenges of increased complexity, variety, and cost.

One potential way to address the challenges listed above is the availability and maintenance of part-level information and dynamic traceability across the lifecycle, enriched with cross-functional inputs. This is important for business decision making during product portfolio planning and product design in both proactive and reactive scenarios. Based on the authors’ industry experience across multiple product development organizations, it is evident that there is limited awareness of the potential of classification and its impact beyond basic part search and reuse. In this paper, we discuss the need for an integrated, cross-functional model and a common database for part information management. We present an agent-based simulation to show the benefits of such an integrated modeling strategy. The approach also has the potential to preserve configurability of the product until the end of life, where configurability means making a product that performs to meet customer needs while delivering profit for the business and complying with various regulatory norms.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A058. doi:10.1115/DETC2014-34510.

Industries often employ heterogeneous computer aided tools (CAD/CAE/CAM) to carry out complex product designs and simulations, resulting in a need for data sharing, data exchange, and computational activities. Nowadays the concept of “Design for Sustainability” (DfS) heightens this challenge, as most DfS approaches, especially Life Cycle Assessment (LCA), involve large amounts of data collection, sharing, and computation throughout the product life cycle. ISO 10303, also known as STEP (STandard for the Exchange of Product model data), has evolved over several decades and provides a set of standards for industrial automation systems and integration. In this paper, we propose a STEP-based collaborative framework to integrate heterogeneous CAD tools, LCA tools, and other necessary computational tools to support cooperative product design for sustainability. The geometric information from CAD tools and the material/process information from material/process databases are formally represented in suitable STEP application protocols (APs). An agent is implemented to parse the geometric and non-geometric information encoded in the STEP data format and compose it into a complete product tree represented with a NIST CPM (Core Product Model) based information model. The information in the product tree is then evaluated by an LCA tool to obtain an environmental impact score. The feasibility and benefits of the proposed methodology are illustrated with a typical stapler product.
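
A toy sketch of the agent's final aggregation step: walking a product tree whose leaves carry material/process data and summing an impact score. The classes and impact factors below are purely illustrative, not the NIST CPM schema or a real LCA database.

from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    mass_kg: float = 0.0
    impact_per_kg: float = 0.0           # e.g., kg CO2-eq per kg of material
    children: list = field(default_factory=list)

    def impact(self):
        """Recursively sum impact over this part and its subassemblies."""
        return self.mass_kg * self.impact_per_kg + sum(c.impact() for c in self.children)

stapler = Part("stapler", children=[
    Part("body", mass_kg=0.08, impact_per_kg=2.5),    # steel (assumed factor)
    Part("handle", mass_kg=0.03, impact_per_kg=6.0),  # ABS (assumed factor)
])
print(round(stapler.impact(), 3), "kg CO2-eq")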

Commentary by Dr. Valentin Fuster
2014;():V01AT02A059. doi:10.1115/DETC2014-34563.

Most heterogeneous CAD representations in the literature represent materials using a volume fraction vector, which may not be physically realizable or meaningful. In contrast, the multi-scale, heterogeneous CAD representation presented here models materials using their microstructure. For the specific metal alloys of interest in this work, the material model is a probabilistic model of grain characteristics, represented as cumulative distribution functions. Several microstructure reconstruction algorithms are presented that enable different alloy grain structures to be reconstructed in a part model. Reconstructions can be performed at any desired size scale, illustrating the multi-scale capability of the representation. A part rendering algorithm is presented for displaying parts with their material microstructures. The multi-scale, heterogeneous CAD representation is demonstrated on two Inconel alloys and is shown to be capable of faithfully reconstructing part representations consistent with the probabilistic grain models.
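
A minimal sketch of one building block such reconstruction algorithms need: inverse-transform sampling of grain sizes from a cumulative distribution function. The CDF below is an assumed Weibull-like form for illustration, not data from the paper.

import numpy as np

def sample_from_cdf(sizes, cdf, n, rng=np.random.default_rng(0)):
    """sizes: grid of grain diameters; cdf: CDF values on that grid.
    Invert the CDF by interpolation to draw n grain sizes."""
    u = rng.uniform(cdf[0], cdf[-1], n)
    return np.interp(u, cdf, sizes)

sizes = np.linspace(5.0, 80.0, 200)                 # grain diameters (microns)
cdf = 1.0 - np.exp(-(sizes / 30.0) ** 2)            # assumed Weibull-like CDF
grains = sample_from_cdf(sizes, cdf / cdf[-1], 1000)
print(grains.mean(), grains.std())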

Commentary by Dr. Valentin Fuster
2014;():V01AT02A060. doi:10.1115/DETC2014-34587.

The Finite Element Method (FEM) is pervasively used in most 3D product design analysis, in which Computer Aided Design (CAD) models need to be converted into mesh models first and then enriched with material properties, boundary conditions, and other data. The interaction between CAD models and FEM models is intensive. The Boundary Element Method (BEM) has been expected to be advantageous in large-scale problems in recent years owing to its reduction of dimensionality and its reduced complexity in mesh generation. However, BEM applications have so far been limited to relatively small problems, because the memory and computational complexity of matrix buildup are O(N²). The fast multipole BEM (FMBEM), which combines the BEM with the fast multipole method (FMM), can overcome this defect of the traditional BEM and provides an effective method for solving large-scale problems. Combining GPU parallel computing with the FMBEM can further improve its efficiency significantly. For three-dimensional elasticity problems, the parallelism of the multipole expansion (ME), multipole-to-multipole (M2M) translation, multipole-to-local (M2L) translation, local-to-local (L2L) translation, and near-field direct calculation is analyzed according to the characteristics of the FMM, and parallel strategies under CUDA are presented in this paper. Three major parts are included herein: (1) FMBEM theory in 3D elastostatics, (2) a parallel FMBEM algorithm using CUDA, and (3) comparison of the GPU-parallel FMBEM with the BEM, FEM, and serial FMBEM using engineering examples. Numerical results show that the 3D elastostatics GPU FMBEM not only speeds up the boundary element calculation process, but also saves memory, making it effective for solving large-scale engineering problems.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A061. doi:10.1115/DETC2014-34685.

Workflows to produce dental products using CAD/CAM technology are very complex. Each patient needs an individual restoration, so the challenge is to provide patient-individual production at the price of mass production, even though every single job has to run through an individual development and manufacturing process. Typically, three stakeholders are involved in the workflow. The dentist performs the treatment and defines the requirements for the restoration. The dental laboratory plans the workflow and designs the restoration using a dental CAD system. Subsequently, a milling center produces the restoration. These highly heterogeneous workflows result in diverse data streams and incompatibilities. Often, improper partners and resources are involved in the workflow, which is a significant source of errors. An additional complication is that errors are often discovered in late phases of the workflow.

To avoid high costs and unacceptable delivery times, the aim is to develop a new concept for integrated workflow planning. The concept consists of three parts: federative dental data management (FDDM) as a basic approach, an anticipation logic, and structured activities. The federative data management provides loose coupling of heterogeneous systems across enterprise borders using web technology. The FDDM service is based on app technology; each participant uses a specialized app: FDDMz (dentist), FDDMd (dental laboratory), and FDDMf (milling center). FDDM services enable a continuously integrated workflow throughout the whole process of patient-individual production. Each participating enterprise can register its available processes and resources. Information about resources such as 3D dental scanners or milling machines can be added according to a global data model schema. This schema is based on an integrated information model with eight partial models: collaboration, resource, process, workflow, requirements, product, work preparation, and production. This integrated information model provides dental information as interlinked objects. Through a proper anticipation logic, conclusions about later phases can be drawn already at early phases. The last conceptual part is workflow management in the frame of structured activities. By combining the information network with the anticipation logic, filtering of appropriate partners, processes, resources, and sequences is supported. A prototypical implementation is then demonstrated. This concept delivers an important contribution to increasing process reliability and quality as well as reducing delivery times and costs for digital dental workflows.

Topics: Workflow
Commentary by Dr. Valentin Fuster
2014;():V01AT02A062. doi:10.1115/DETC2014-34717.

A framework of a CAD/CAE integration system and its implementation for dockside container cranes are proposed in this paper. First, the system framework, based on web technology, software design patterns, and service-oriented architecture (SOA), is introduced. Then, requirement input interfaces of the Customer-Designer-Interaction (CDI) module are built based on an ASP.NET multiple-layer Browser/Server (B/S) architecture, core design patterns, and .NET WCF services, through which customers can provide crane specifications to designers. Next, the CAD and CAE modules are built on the multiple-layer architecture, so that designers can parametrically create 3D models of the crane structures and conduct explicit dynamic Finite Element Analysis (FEA) on the designed structures. An SOA-based Design-Analysis-Integration (DAI) module is developed to maintain consistency between CAD and CAE models using .NET WCF services. Last, system management functions such as user interaction, user accounts, and file management are described. Since all operations are conducted in a Web and SOA context, customers and designers are able to participate in the design process from different geographical locations.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A063. doi:10.1115/DETC2014-34943.

As is well known, the number of possible disassembly sequences increases significantly with the number of parts in a product. For selective disassembly, for instance, it is important to eliminate the components unrelated to the target component prior to sequence generation. To address this, a method for disassembly sequence generation in the case of selective disassembly is presented in this paper. Based on the least levels of the disassembly product graph, it reduces the required computation resources. Instead of considering the geometric constraints for each pair of components, the proposed method considers the geometric contact and collision relationships among the components in order to generate the disassembly graph for sequence planning. The method is applied to automatically generating selective disassembly sequences, thereby reducing computation resources, and is illustrated through an example.
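
A toy sketch of the elimination idea, assuming the networkx library: components lying on some contact path between the target and the exterior are retained as candidates, and the rest are eliminated before sequence generation. The part names and the path rule are illustrative, not the paper's algorithm.

import networkx as nx

contacts = nx.Graph()
contacts.add_edges_from([("cover", "housing"), ("housing", "shaft"),
                         ("shaft", "gear"), ("housing", "base"),
                         ("base", "foot")])

target = "gear"
# Components on some contact path between the exterior part ("cover")
# and the target are candidates; the rest are unrelated to the target.
related = set()
for path in nx.all_simple_paths(contacts, "cover", target):
    related.update(path)
print("consider:", related)
print("eliminate:", set(contacts) - related)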

Commentary by Dr. Valentin Fuster
2014;():V01AT02A064. doi:10.1115/DETC2014-35037.

Surrogate models are useful in a wide variety of engineering applications. The employment of these computationally efficient surrogates for complex physical models offers a dramatic reduction in the computational effort required to conduct analyses for the purpose of engineering design. In order to realize this advantage, it is necessary to “fit” the surrogate model to the underlying physical model. This is a considerable challenge, as the physical model may consist of many design variables and performance indices, exhibit nonlinear and/or mixed-discrete behaviors, and be expensive to evaluate. As a result, adaptive sequential sampling techniques, where previous evaluations of the physical model dictate subsequent sample locations, are widely used. In this work, we develop and demonstrate a novel adaptive sequential sampling algorithm for fitting surrogate models of any type, with a focus on large data sets. By examining the monotonicity of an error function, the design space is repeatedly partitioned in order to compute a set of “key points.” The key points reduce the problem of fitting to one of precise interpolation, which can be accomplished using well-known methods. We demonstrate the use of this technique to fit several surrogate model types, including blended Hermitian polynomials and Non-Uniform Rational B-splines (NURBs), to nonlinear noisy data. We conclude with our observations as to the effectiveness of this fitting technique, its strengths and limitations, as well as a discussion of further work in this vein.
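
A hypothetical 1D illustration of the partitioning idea, and a guess at the flavor of the algorithm rather than the authors' procedure: split the domain wherever the sampled response is non-monotonic, keep the split points as key points, and interpolate only those.

import numpy as np
from scipy.interpolate import CubicSpline

def key_points(x, y, lo, hi, depth=8):
    """Bisect [lo, hi] until the contained samples are monotonic."""
    i = (x >= lo) & (x <= hi)
    d = np.diff(y[i])
    if depth == 0 or len(d) < 2 or (d >= 0).all() or (d <= 0).all():
        return [lo, hi]                   # monotonic: endpoints suffice
    mid = 0.5 * (lo + hi)
    return key_points(x, y, lo, mid, depth - 1)[:-1] + key_points(x, y, mid, hi, depth - 1)

x = np.linspace(0, 1, 400)
y = np.sin(6 * x) + 0.01 * np.random.default_rng(1).standard_normal(400)
kp = np.array(key_points(x, y, 0.0, 1.0))
surrogate = CubicSpline(kp, np.interp(kp, x, y))   # precise interpolation step
print(len(kp), "key points; max error:", np.abs(surrogate(x) - y).max())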

Topics: Errors, Fittings
Commentary by Dr. Valentin Fuster
2014;():V01AT02A065. doi:10.1115/DETC2014-35124.

Wind energy is one of the fastest growing sectors of renewable energy technologies. Micro-scale wind turbines are becoming increasingly popular as individuals seek more innovative and efficient ways of reducing their energy demand. However, even with more efficient wind energy harvesting devices, it is not uncommon for the anticipated energy harvesting potential of a wind turbine to be vastly different from the actual energy generation capability at a site. This paper introduces the problems associated with accurately predicting the energy generation potential of wind energy harvesting systems, considering several power loss factors. The focus of this paper is on the use of commonly available engineering tools and simulations to evaluate obstructions at the implementation site and energy harvesting technologies in order to maximize the energy generation potential.

The issues of energy generation prediction accuracy are addressed by incorporating high-resolution weather data and computer simulations into a predictive model. Additionally, several correction factors and design guidelines for the successful implementation of micro-scale wind turbines are introduced and discussed. A multi-step approach is introduced to collect weather data, generate revised energy generation potential models for various land regions, account for common site obstructions through the use of correction factors, and formulate a final set of recommendations for the wind turbine implementation site. A set of design guidelines is also developed to assist in turbine placement for specific site locations. The guidelines are based on the revised energy generation prediction model, wind speed correction factors for common site obstructions, orientation and placement of the wind turbine, and other factors. Verification of the guidelines has been performed through Computational Fluid Dynamics simulation. The design approach and guidelines are examined and results presented for one case study.
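
A small sketch of the kind of corrected power estimate the guidelines imply: the standard wind power equation derated by multiplicative correction factors. The factor values below are illustrative, not the paper's.

import math

def mean_power_W(v_mps, rotor_d_m, cp=0.35, rho=1.225, corrections=(0.9, 0.85)):
    """P = 0.5 * rho * A * v^3 * Cp, derated by correction factors
    (e.g., obstruction wake and availability losses; values assumed)."""
    area = math.pi * (rotor_d_m / 2.0) ** 2
    p = 0.5 * rho * area * v_mps ** 3 * cp
    for c in corrections:
        p *= c
    return p

# A 1.5 m micro turbine at a 5 m/s site, near a rooftop obstruction:
print(round(mean_power_W(5.0, 1.5)), "W")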

Commentary by Dr. Valentin Fuster
2014;():V01AT02A066. doi:10.1115/DETC2014-35135.

Computational tools for aiding design-by-analogy have so far focused on function- and keyword-based retrieval of analogues. Given the critical role of performance and benchmarking in design, there is a need for performance metrics-driven analogy retrieval that is currently unmet. Towards meeting this need, a study has been done to investigate and propose frameworks for organizing the myriad technical performance metrics in engineering design, such as measures of efficiency. Such organizational frameworks are needed for the implementation of a computational tool which can retrieve relevant analogies using performance metrics. The study, which takes a deductive approach, defines a hierarchical taxonomy of performance metrics akin to the functional basis vocabulary of function and flow terms. Its derivation follows from bond graphs, control theory, and Design for X guidelines.

Topics: Design
Commentary by Dr. Valentin Fuster
2014;():V01AT02A067. doi:10.1115/DETC2014-35295.

Tolerance allocation is an important aspect of design as well as manufacturing. Mating features in an assembly are important from the tolerance point of view and govern the tolerance schema. The presence of patterns within these features also plays an important role in the allocation of different tolerance classes. Identification of these assembly features and patterns was previously done manually; this research aims at automating these processes. The automation starts with the recognition of the assembly features in the assembly. The feature recognition algorithms are designed such that they can handle any user-defined assembly feature. The input for feature recognition is a STEP file containing information about the assembly, and the output file contains information about the recognized assembly features. Patterns are then identified from these assembly features. This paper discusses these two processes in detail. In addition, to help users define new assembly features, an alternate system called the assembly feature tutor has been developed, and its working is also explained.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A068. doi:10.1115/DETC2014-35384.

Photopolymerization-based processes are among the most popular additive manufacturing processes. Two primary configurations for these processes are laser-based vector-by-vector scanning (0D) and projection-based layer-by-layer exposure (2D). With a highly focused fine laser, the scanning-based process can achieve very high surface finish and precision; however, due to the serial nature of scanning, it suffers from slow speed. In contrast with laser scanning, the projection-based process can form a whole layer in one exposure, which leads to higher fabrication efficiency. However, due to the limited resolution of the projection device and various optical defects, surface quality deteriorates significantly for large-area fabrication. To solve this problem, a novel hybrid process integrating vector scanning and mask projection is presented. In this process, the laser is focused into a fine spot and used to scan the boundary of the layer, whereas the projector is focused onto a large platform surface and used to form the interior area of the layer. An efficient slicing method is proposed for extracting the contour for laser scanning. A slice-to-image conversion algorithm is also developed to convert the offset contour into a grayscale image for mask projection. Experimental results have verified that the proposed hybrid process can significantly improve fabrication speed without losing surface quality.
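
A minimal sketch of the slice-to-image step, assuming Pillow and SciPy: rasterize the layer polygon, then separate a boundary band (for the laser) from the interior (for the projector). The band width and image size are examples, and the paper's offsetting and grayscale conversion are more elaborate.

import numpy as np
from PIL import Image, ImageDraw
from scipy.ndimage import binary_erosion

contour = [(100, 100), (400, 120), (380, 400), (120, 380)]  # one layer polygon
img = Image.new("L", (512, 512), 0)
ImageDraw.Draw(img).polygon(contour, fill=255)
filled = np.array(img) > 0

interior = binary_erosion(filled, iterations=10)    # shrink by ~10 px
boundary_band = filled & ~interior                  # laser-scanned contour band
projector_mask = (interior * 255).astype(np.uint8)  # grayscale image for projection
print(boundary_band.sum(), "boundary pixels;", interior.sum(), "interior pixels")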

Commentary by Dr. Valentin Fuster
2014;():V01AT02A069. doi:10.1115/DETC2014-35558.

In the design of surface-micromachined microelectromechanical systems (MEMS), there is a lack of effective modeling methods for refining the geometry of a MEMS device. This paper presents a method of incremental geometric modeling and mask synthesis for surface-micromachined MEMS. In this method, propagation-mapping graphs are introduced to label all the entities affected by one variation operation, according to the characteristics of surface micromachining. Based on the propagation-mapping graphs constructed from a 3D geometric model and its process model, variation propagation and mapping are used to update these models. To maintain the manufacturability of these models, four manufacturability problems caused by variation propagation are analyzed and a manufacturability maintenance method is discussed. Variation propagation and mapping achieve incremental modeling for the geometric model and process model of a MEMS device. This method enables designers to modify a MEMS device in a quick and intuitive way. Finally, the method is implemented and some test results are given.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A070. doi:10.1115/DETC2014-35575.

Understanding the exact details of the deviation zone of a manufactured surface would require measurement of an infinite number of points. Coordinate metrology provides deviations of a limited number of discrete points on a measured surface, but typically cannot provide any information about the surface regions between these measured points. An approach to estimating the Distribution of Geometric Deviations (DGD) over the entire inspected surface is presented in this paper. The methodology is developed based on the mean value property of harmonic functions and the Laplace equation. The resulting DGD model can be employed to estimate the deviation value at any unmeasured point of the inspected surface when a detailed understanding of the surface geometric deviations is required. Implementation of the developed methodology is described, and case studies for typical industrial parts are presented. This methodology can be used to close the loop between inspection and manufacturing processes when a compensation scheme is available to compensate manufacturing errors based on the DGD model.
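
A minimal sketch of the mean-value idea, assuming a regular grid: Jacobi relaxation of Laplace's equation with measured deviations held fixed estimates the deviation field between measured points. Grid size and measured values are illustrative.

import numpy as np

dev = np.zeros((40, 40))
known = np.zeros_like(dev, dtype=bool)
for (i, j, d) in [(5, 5, 0.02), (5, 34, -0.01), (34, 5, 0.03), (34, 34, 0.0)]:
    dev[i, j], known[i, j] = d, True     # CMM-measured deviations (mm, assumed)

for _ in range(2000):
    avg = 0.25 * (np.roll(dev, 1, 0) + np.roll(dev, -1, 0) +
                  np.roll(dev, 1, 1) + np.roll(dev, -1, 1))
    dev = np.where(known, dev, avg)      # mean value property at free nodes

print(dev[20, 20])                       # estimated deviation between points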

Topics: Metrology
Commentary by Dr. Valentin Fuster
2014;():V01AT02A071. doi:10.1115/DETC2014-35652.

What is the fundamental similarity between investing in the stock of a company because you like its products, and selecting a design concept because you have been impressed by the esthetic quality of the presentation made by the team developing it?

Beyond the fact that both decisions are based on a surface analysis of the situation, they both reflect a fundamental feature of human cognition. The human brain constantly tries to minimize the effort required to solve a cognitive task, and uses, when possible, an automatic mode relying on recognition, memory, and causality. This mode is sometimes engaged without the engineer even being conscious of it. Such tendencies naturally push engineers to rush into known solutions, to avoid analyzing the context of a design problem, to avoid modelling design problems, and to make decisions based on isolated evidence. These behaviors are familiar to experienced teachers and engineers. The tendency is magnified by the time pressure imposed on the engineering design process: early phases in particular have to be kept short despite the large impact of decisions taken at this stage. Few support tools are capable of supporting a deep analysis of early design conditions and problems, given the fuzziness and complexity of the early stage. The present article hypothesizes that the natural ability of humans to deal with cause-effect relations pushes toward the massive usage of causal graph analysis during the design process, and specifically during the early phases. A global framework based on graphs is presented in this paper to efficiently support the early stages. The approach used to generate graphs, to analyze them, and to support creativity based on that analysis forms the central contribution of this paper.

Commentary by Dr. Valentin Fuster

34th Computers and Information in Engineering Conference: CAPPD: Digital Human Modeling for Engineering Application

2014;():V01AT02A072. doi:10.1115/DETC2014-34118.

In this paper, a general body motion editing system for 3D body motion is constructed. The system accepts as input any human-like structure conforming to a specified file designation. The body models are imported in STL format. The user interface is designed for intuitive 3D manipulation, much like operating a video recorder. The software system includes three main functions: motion editing, interpolation, and replication. Three teaching modes are available, in world coordinates, local coordinates, and joint coordinates. For editing or retouching existing motions, users can easily choose a key frame to teach and save the posture in a body motion staff (BMS). Several motion interpolation methods are provided to simulate different body motion characteristics, and the BMS frames between the key frames are generated. The BMS can be used to edit and analyze body motions just as a musical staff does for music.
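
A minimal sketch of one plausible interpolation mode, assuming SciPy: spherical linear interpolation (slerp) of a joint's key-frame orientations to generate in-between postures. The key frames below are hypothetical, and the paper's interpolation methods may differ.

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

key_times = [0.0, 1.0, 2.0]
key_rots = Rotation.from_euler("xyz", [[0, 0, 0], [0, 45, 0], [0, 90, 30]],
                               degrees=True)
slerp = Slerp(key_times, key_rots)

frames = slerp(np.linspace(0.0, 2.0, 9))          # 9 in-between postures
print(frames.as_euler("xyz", degrees=True).round(1))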

Commentary by Dr. Valentin Fuster
2014;():V01AT02A073. doi:10.1115/DETC2014-34362.

The accurate reconstruction of a human digital dental model represents a wide research area within the orthodontic field due to its importance for the customization of patient treatments. Usually, 3-D dental root geometries are obtained by segmenting tomographic data. However, concerns about radiation dose arise, since tomographic scans deliver a greater X-ray dose than conventional 2-D panoramic radiographs (PAN). The present work investigates the possibility of retrieving the 3-D shape of individual teeth while exposing the patient to the minimum radiation dose. The proposed methodology is based on adapting general CAD templates to the patient-specific dental anatomy, which is reconstructed by integrating the optical digitization of dental plaster models with a PAN image. The radiographic capturing process is simulated through the Discrete Radon Transform (DRT) and performed on the patient crown geometry obtained by segmenting the digital plaster model. A synthetic PAN image is then reconstructed and used to integrate the radiographic data with the digitized plaster model, thus allowing root information to be retrieved, which guides the adaptation of the CAD templates to the patient anatomy.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A074. doi:10.1115/DETC2014-34381.

This paper addresses the integration of simulation tools to assess the design of prosthetic devices. We address issues arising when the prosthesis needs to be virtually tested, i.e., when simulating the gait of the virtual patient wearing the prosthesis. To this end, we integrate two different simulation tools: the first studies the interaction between socket and residual limb during gait, and the second analyzes the patient’s gait deviations. Combining these numerical analyses, it is possible to investigate the causes of gait deviations and suggest remedies related both to the prosthesis setup and to the socket modeling. To prove the validity of the approach, we implemented a Finite Element Analysis model to analyze the stump-socket contact, and we assembled a low-cost motion capture system to acquire and process patient gait. Preliminary results and remarks conclude the paper.

Topics: Modeling, Prostheses
Commentary by Dr. Valentin Fuster
2014;():V01AT02A075. doi:10.1115/DETC2014-34401.

Human running is simulated in this work by using a skeletal digital human model with 55 degrees of freedom (DOFs). A predictive dynamics method is used to formulate the running problem, and normal running is formulated as a symmetric and cyclic motion. The dynamic effort and impulse are used as the performance measure, and the upper-body yawing moment is also included in the performance measure. The joint angle and joint torque profiles are calculated for the full-body human model, and the ground reaction force is determined. The effect of foot location on the running motion prediction is simulated and studied.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A076. doi:10.1115/DETC2014-34575.

Helping resident surgeons quickly and accurately develop expertise in clinical skills is crucial for improving patient safety and care. Because most surgical skills require visually aided device manipulations, developing effective eye-hand coordination is a crucial component of most surgical training. While eye-hand coordination has typically been evaluated on the basis of time to complete a task and number of errors, growing evidence suggests that task performance can be distinguished by detecting eye gaze patterns and movement planning. However, few studies have explored methods for collecting and evaluating gaze patterns without significantly impeding the user (e.g., goggle eye trackers), reducing the utility of this approach. Therefore, the current study was developed to propose and test a framework for evaluating the quality of eye-hand coordination using a novel motion analysis technique. To validate the framework, three expert and three novice resident surgeons were videotaped during ultrasound-guided central venous catheter insertion procedures and compared. Our method was able to show that experts demonstrate distinguished patterns in adjusted accuracy, movement trajectories, and time allocation. The results also showed that expert performance in eye-hand coordination appears to be characterized by goal-oriented adjustment. This research framework can be used to characterize individual differences and improve surgical residency training, and can also be applied in other domains where eye-hand coordination needs to be studied without impeding user performance.

Topics: Ultrasound, Catheters
Commentary by Dr. Valentin Fuster
2014;():V01AT02A077. doi:10.1115/DETC2014-34660.

Carbon dioxide (CO2) and humidity are two factors that affect respirator comfort. Whenever one uses a respirator, CO2 is re-inhaled from the previous exhalation and the humidity inside the respirator cavity increases. CO2 re-inhalation causes respirator discomfort with symptoms such as headache and dizziness, while the increased humidity causes thermal discomfort. Experimental research has focused on measuring CO2 and humidity values in the respirator cavity over long periods of time (over 1 hour); however, these experiments ignored the variation of CO2 and humidity within the respirator cavity during a single breathing cycle. The objective of this study was to use the computational fluid dynamics (CFD) method to calculate the CO2 and humidity values inside the respirator cavity during four breathing cycles (19.2 s). In our previous work, the contact between a headform and a filtering facepiece respirator (FFR) was simulated using finite element modeling. In this work, a meshed domain was generated including the FFR cavity, the FFR, and the region outside the FFR. A breathing cycle, having both exhalation and inhalation, was then defined as a time-dependent flow rate through a breathing opening (nasal breathing, mouth breathing, or nasal-mouth breathing). Using the CFD method, the breathing air flow and the species transport of CO2 and water vapor (H2O) in the domain were simulated for four breathing cycles. In total, five tests with different breathing openings and breathing flow rates were conducted: nasal breathing with the base, 2×, and 3× flow rates, mouth breathing with the base flow rate, and nasal-mouth breathing with the base flow rate. The simulation results showed large CO2 and H2O variations in the FFR cavity during a breathing cycle (CO2 mass fraction from 0 to 0.074 and H2O mass fraction from 0.0077 to 0.0151). The inhaled CO2 mole fraction decreased with increasing breathing flow rate. With the base flow rate, during inhalation the middle point between the nostrils and mouth had higher relative humidity than the other probing positions.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A078. doi:10.1115/DETC2014-34671.

In this paper, we present a framework to build hybrid cells that support safe and efficient human-robot collaboration during assembly operations. Our approach considers a representative one-robot one-human model in which a human and a robot asynchronously work toward assembling a product. Whereas the human retrieves parts from a bin and brings them into the robot workspace, the robot picks up parts from its workspace and assembles them into the product. Using this collaboration model, we explicate the design details of the overall framework comprising three modules — plan generation, system state monitoring, and contingency handling. We provide details of the virtual cell and the physical cell used to implement our framework. Finally, we report results from human-robot collaboration experiments to illustrate our approach.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A079. doi:10.1115/DETC2014-34770.

Determining participant engagement is an important issue across a large number of fields, ranging from entertainment to education. Traditionally, feedback from participants is collected after the activity has been completed; alternatively, continuous observation by trained humans is needed. Thus, there is a need for an automated real-time solution. In this paper, the authors propose a data-mining-driven approach that models a participant’s engagement based on body language data acquired in real time using non-invasive sensors. Skeletal position data, which approximates human body motions, is acquired from participants using off-the-shelf, non-invasive sensors. Thereafter, machine learning techniques are employed to detect body language patterns representing emotions such as delight, interest, boredom, frustration, and confusion. The methodology proposed in this paper enables researchers to predict participants’ engagement levels in real time with accuracy above 98%. A case study involving human participants enacting eight body language poses is used to illustrate the effectiveness of the methodology. Finally, this methodology highlights the potential of real-time, automated engagement detection using non-invasive sensors, which can ultimately be applied in a large variety of areas such as lectures, gaming, and classroom learning.
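
A small sketch of the pattern-recognition stage, assuming scikit-learn with synthetic data standing in for real skeletal captures: classify poses from flattened joint coordinates. The feature layout and the separability boost are fabricated for the demo.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_joints = 800, 20
X = rng.normal(size=(n, n_joints * 3))            # x, y, z per joint (assumed layout)
y = rng.integers(0, 8, size=n)                    # eight body-language poses
X[np.arange(n), y] += 3.0                         # make poses separable (toy data)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))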

Topics: Sensors
Commentary by Dr. Valentin Fuster
2014;():V01AT02A080. doi:10.1115/DETC2014-34789.

Patient-specific computational study of aortic disease provides a powerful means for diagnosis and pre-operative planning. However, creating patient-specific computational models can be time-consuming because anatomical geometries extracted from clinical imaging data are often incomplete and noisy. This paper presents an approach for constructing statistical shape models (SSMs) for aortic surfaces, with the eventual goal of mapping the mean aorta geometry to the raw surface data obtained from the clinical images of each new patient, so that patient-specific models can be constructed automatically.

The input aortic models in this study come in the form of triangle meshes generated from CT scans of six patients. Statistical models with modes that characterize the variation pattern are found after optimizing the group-wise correspondence across the aorta training set. We use the direct reparametrization approach to efficiently manipulate shape correspondence. We use a B-spline-based differentiable shape representation for the training set, and use the adjoint method to derive analytical gradients in a gradient-based approach that manipulates the shape correspondence to minimize the description length of the resulting SSM. Our numerical results show that the evaluation measures of the optimized statistical model are significantly enhanced.
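
A minimal sketch of the final SSM construction once correspondence is fixed: stack corresponded shape vectors and extract variation modes with PCA. Random data stands in for the six patients; the paper's contribution, optimizing the correspondence itself, is the hard part and is not shown here.

import numpy as np

n_patients, n_points = 6, 500
shapes = np.random.default_rng(0).normal(size=(n_patients, n_points * 3))

mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
modes, variances = Vt, (s ** 2) / (n_patients - 1)

# Synthesize a new aorta-like shape from the first two variation modes:
new_shape = mean_shape + 1.5 * np.sqrt(variances[0]) * modes[0] \
                       - 0.5 * np.sqrt(variances[1]) * modes[1]
print(new_shape.shape)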

Topics: Modeling, Diseases, Shapes
Commentary by Dr. Valentin Fuster
2014;():V01AT02A081. doi:10.1115/DETC2014-34912.

In recent decades, research in the orthodontic field has focused on the development of more comfortable and aesthetic appliances such as thermoformed aligners. Aligners have been used in orthodontics since the mid-20th century; nonetheless, there is still not enough knowledge about how they interact with teeth. This paper focuses on the development of a Finite Element Method (FEM) model to be used in optimizing the geometrical attributes of removable aligners. The presented method integrates Cone Beam Computed Tomography (CBCT) data and optical data to obtain a customized model of the dental structures, including both crown and root shapes. The digital simulation focuses on the behavior of three upper frontal teeth. The analyses were carried out using different aligner thicknesses, with the support of composite structures polymerized on the teeth surfaces, while simulating a 2-degree rotation of an upper central incisor.

Commentary by Dr. Valentin Fuster

34th Computers and Information in Engineering Conference: CAPPD: Emotional Engineering

2014;():V01AT02A082. doi:10.1115/DETC2014-34195.

For products that can improve the appearance of the user, such as facial accessories, both the characteristics of the product user and design features must be considered in design evaluation. This paper proposes an experimental evaluation scheme that investigates the interactions between the design features of 3D eyeglasses frames and user facial characteristics. Face models of users containing both geometric and image data were constructed using 3D scanning. A face deformation method was developed to manipulate individual facial features without changing the other features on the face models. In the evaluation scheme, participants judged synthetic faces, which had varied eye distances and orientations and were wearing factorized eyeglasses frames, according to three affective measures related to the personality attributes of confidence, friendliness, and attractiveness. The experimental results show that changing certain design features influences the impressions of the face models with varied facial characteristics. The proposed scheme facilitates designing products that strengthen the impression of specific personality traits by accommodating individual differences in facial features.

Topics: Product design
Commentary by Dr. Valentin Fuster
2014;():V01AT02A083. doi:10.1115/DETC2014-34335.

The power spectrum of human heart rate measured over 24 h exhibits “power-law” 1/f^α-type spectral behavior with α approximately 1. This may be one of the reasons why 1/f noise helps people relax or feel comfortable: people feel relaxed when looking at a candle light, listening to the sound of ocean waves, or feeling a breezing wind, because all of these natural phenomena exhibit 1/f noise fluctuation. Considering this feature, 1/f noise fluctuation has been applied to various industrial products, ranging from lighting to fan control and temperature control, to implement cozy products. One typical example is a lighting product mimicking a candle, which flickers just like a real candle. A candle light provides a cozy atmosphere in which to relax and set aside pressing business issues, again because of the nature of 1/f noise fluctuation mentioned above. However, candlelight is not suitable for concentrated, important work because of its changing brightness and blinking nature. This study designs and develops a 1/f-noise-fluctuated cozy lighting system with stable brightness and chromaticity and without blinking, so that people unconsciously feel relaxed under the lighting without noticing the 1/f fluctuation and can concentrate on their work. To implement this lighting system, a combination of two types of white LED light was used. White LED light is produced by combining different colors with different spectra; for example, blue with a YAG fluorophore, blue with an RG fluorophore, or UV with an RGB fluorophore all yield white LED light. This means it is possible to make two types of LED light that have the same white color but different spectral compositions. If the two white colors are identical, nobody can notice when the two LEDs are switched over, whether periodically or randomly; people simply perceive a constant white light. However, if the switching follows a 1/f noise fluctuation, positive effects can be expected under this lighting system. This paper presents the overview of the 1/f-noise-fluctuated cozy lighting system and then presents the two basic challenges of the idea towards concentration improvement: the combination of two types of white LED, and the 1/f-noise-fluctuated switching system. These two challenges are presented using prototype lighting systems developed in this study.
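
A minimal sketch of a driving signal for such a system: shape white noise to a 1/f spectrum in the frequency domain, then cross-fade two equal-chromaticity LED channels so the total output stays constant while the spectral mix fluctuates. Sample length and modulation depth are illustrative, not the prototype's parameters.

import numpy as np

rng = np.random.default_rng(0)
n = 4096
spectrum = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, d=1.0)
f[0] = f[1]                                # avoid division by zero at DC
pink = np.fft.irfft(spectrum / np.sqrt(f), n)        # 1/f power spectrum
pink = (pink - pink.min()) / (pink.max() - pink.min())  # map to [0, 1]

led_a = 0.5 + 0.2 * (pink - 0.5)           # channel A duty cycle
led_b = 1.0 - led_a                        # channel B complements it
print(np.allclose(led_a + led_b, 1.0))     # constant total luminance

The complementary drive is the design point: brightness and chromaticity stay fixed while the underlying spectral composition carries the 1/f fluctuation.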

Commentary by Dr. Valentin Fuster
2014;():V01AT02A084. doi:10.1115/DETC2014-34347.

The objective of this paper is to present a case study exploiting interactive Virtual Prototypes (iVPs) to investigate the way humans experience products. This method can be used for “prototyping” new product experiences, for monitoring users’ emotional reactions during the interaction, and finally for practically redesigning these experiences on the basis of user feedback. The products considered here are domestic appliances, where the experience consists of the interaction with their physical interfaces.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A085. doi:10.1115/DETC2014-34364.

Although break-in is important in products, only the aspect of run-in is usually considered in mechanical products. Yet we feel happy when our shoes come to fit us. We feel happy, too, if our products come to fit us in our operating conditions, work as well as we expect, and meet not only our needs but also our preferences.

Further, if our products come to fit us very well, we feel attached to them and will use them longer. Such emotional break-in has seldom been considered in mechanical design.

If a product or a machine degrades and does not satisfy its design requirements, it will be restored to its original design level. This is considered to be the most important task of maintenance.

But break-in is associated with the phenomenon of degradation. If we can manage our degradation more intelligently, we would feel happier because we feel the product or the machine is breaking in to our needs and to our preferences.

This is a position paper to discuss this issue.

Commentary by Dr. Valentin Fuster
2014;():V01AT02A086. doi:10.1115/DETC2014-34458.

In a user’s perception of a product’s qualities, the engaged sensory modality may shift from one state to another; for example, users see and then touch a product to perceive its texture. Across such state transitions, users form expectations about subsequent states based on their experience of the current state. The expectation effect is a psychological effect in which prior expectation changes the posterior perception itself. The effect is a key factor both in designing user emotions induced by expectation disconfirmation and in designing perceived quality based on prior expectations. Although experimental findings on the expectation effect exist across a variety of research disciplines, general, theoretical models of the effect have been largely neglected. The present authors previously identified a visual expectation effect on the tactile perception of surface texture; the causes of the effect, however, remain largely unexplored. To design the expectation effect intentionally, a general, theoretical model that estimates the conditions of the effect is needed. In this paper, we propose a theoretical model of the expectation effect using information theory and an affective expectation model (AEM). We hypothesize that the Shannon entropy of the prior subjective probability distribution over posterior experiences determines the occurrence of the expectation effect, and that the amount of information gained after experiencing a posterior event is positively correlated with the intensity of the effect. We further hypothesize that the conscious level of expectation discrepancy distinguishes between the two types of expectation effect, namely assimilation and contrast. To verify these hypotheses, we conducted an experiment in which participants responded to the tactile qualities of surface textures. In the experiment, we extracted the visual expectation effect on tactile roughness during a sensory modality transition from vision to touch and analyzed the causes of the effect based on our hypotheses. The experimental results indicated the appropriateness of the proposed model of the expectation effect.
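
The information-theoretic quantities hypothesized above can be illustrated with a minimal sketch; the prior distribution below is an invented example, not the paper's experimental data:

```python
# Sketch: Shannon entropy of a prior over posterior tactile experiences,
# and the information gained (surprisal) when one outcome is observed.
import numpy as np

def shannon_entropy(p: np.ndarray) -> float:
    """H(p) = -sum p_i log2 p_i, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def surprisal(p_outcome: float) -> float:
    """Information gained on observing an outcome of prior probability p."""
    return float(-np.log2(p_outcome))

# Hypothetical prior beliefs about roughness ("smooth", "medium", "rough")
# formed by vision, before touching the surface:
prior = np.array([0.7, 0.2, 0.1])
print(shannon_entropy(prior))   # low entropy -> strong expectation
print(surprisal(prior[2]))      # touching a "rough" surface: large surprise
```

A low-entropy prior encodes a strong expectation, and a low-probability outcome yields a large information gain, the quantity hypothesized above to scale the intensity of the effect.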

Commentary by Dr. Valentin Fuster
2014;():V01AT02A087. doi:10.1115/DETC2014-34735.

A consumer’s emotional response to a product is influenced by cognitive processes, such as memories associated with use of the product and expectations of its performance. Here, we propose a cognitive neural model of Expectology, called PEAM (Prediction - Experience - Appraisal - Memory), as a novel tool that considers consumers’ emotional responses in order to aid product design. The PEAM model divides the cognitive processes associated with product use into four phases: prediction, experience, appraisal, and memory. We examined the spatiotemporal changes in brain activity associated with product evaluation and memory during the prediction phase by obtaining electroencephalograms (EEGs). EEGs of 10 healthy participants with normal or corrected-to-normal vision were recorded while they viewed images of products and while they provided a preference rating for each product. Our results revealed significantly increased neural activity in the gamma frequency band in the temporal areas, the brain regions where declarative memory is stored, and in the prefrontal area for products that were rated as preferable. Our data suggest that memory is used for product evaluation in the prediction phase. These findings also suggest that activity in these specific brain areas is a reliable predictor of product evaluation.
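
For illustration only, a gamma-band power feature of the kind the abstract relates to preference ratings might be computed as follows; the sampling rate, band limits, and signal are assumptions, not the authors' pipeline:

```python
# Sketch: gamma-band (taken here as 30-80 Hz) power from one EEG channel
# using Welch's power spectral density estimate.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def gamma_band_power(eeg: np.ndarray, fs: float,
                     band=(30.0, 80.0)) -> float:
    """Integrate the Welch PSD over the gamma band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(trapezoid(psd[mask], freqs[mask]))

fs = 500.0                                                    # assumed rate, Hz
eeg = np.random.default_rng(0).standard_normal(int(10 * fs))  # placeholder data
print(gamma_band_power(eeg, fs))
```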

Commentary by Dr. Valentin Fuster
2014;():V01AT02A088. doi:10.1115/DETC2014-34813.

It has become difficult to differentiate products on the basis of function and quality because manufacturers have reached the same level of manufacturing technology. To provide attractive products for customers, it is essential to produce emotional products that satisfy a specified target level of KANSEI and the preferences of a restricted group of customers. The Taguchi method is effective for robust design of product functions, but its suitability for the design of product forms is uncertain. In this paper, the Taguchi method is applied to robust form design satisfying a specified KANSEI and the preferences of a restricted group of customers by proposing and introducing an original S/N ratio. The effectiveness of the proposed form design method is verified by questionnaire-based experiments.
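
For background, the sketch below shows two textbook Taguchi S/N ratios; the paper's original S/N ratio for KANSEI responses is not reproduced here, and the scores are hypothetical:

```python
# Sketch: standard Taguchi signal-to-noise ratios (in dB).
import numpy as np

def sn_larger_the_better(y: np.ndarray) -> float:
    """SN = -10 log10( (1/n) * sum(1 / y_i^2) )."""
    return float(-10.0 * np.log10(np.mean(1.0 / y**2)))

def sn_nominal_the_best(y: np.ndarray) -> float:
    """SN = 10 log10( mean^2 / variance )."""
    return float(10.0 * np.log10(y.mean()**2 / y.var(ddof=1)))

# Hypothetical questionnaire scores for one candidate form under noise:
scores = np.array([6.5, 7.0, 6.8, 7.2, 6.9])
print(sn_nominal_the_best(scores))   # higher SN -> more robust response
```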

Commentary by Dr. Valentin Fuster
2014;():V01AT02A089. doi:10.1115/DETC2014-35430.

Air-conditioning equipment is used in houses, office buildings, public facilities, and many other places and is indispensable in modern life. Its energy consumption therefore accounts for a large share of total energy use: 26% of annual energy consumption in ordinary homes and 27% in industry, according to the 2010 Annual Energy Report for Japan presented by the Ministry of Economy, Trade, and Industry and the Agency for Natural Resources and Energy. It is thus desirable to reduce energy consumption by reducing the air-conditioning load. The Ministry of the Environment recommends a constant preset temperature of 28°C in summer to decrease energy consumption, but many people feel uncomfortable in such a thermal environment. An air-conditioning control that simultaneously suppresses energy consumption and maintains human thermal comfort is therefore desired, and developing it requires an index that accurately evaluates human thermal comfort. When a person feels comfortable or uncomfortable, the prefrontal area, which is involved in thinking and in feeling emotions, is activated. Measuring a person's brain activation reaction should therefore reveal whether the person feels comfortable or uncomfortable in a thermal environment, and evaluating thermal comfort by means of brain activation reactions would allow the development of an optimal air-conditioning control that maintains human thermal comfort. This paper proposes a method to evaluate thermal comfort via brain signals and ultimately aims to develop an air-conditioning control system utilizing this evaluation method. It describes the procedure for measuring brain activation reactions to indoor-temperature change using near-infrared spectroscopy (NIRS) and the relationship between thermal comfort and brain activation. We measured the changes in oxyHb levels in the prefrontal area, together with the indoor-temperature changes, as the temperature increased and decreased. The oxyHb level in the prefrontal area correlated with the indoor-temperature change, the PMV, and the subjects' declared thermal sensation. Conversely, the change in the oxyHb level with wind at a constant indoor temperature differed significantly from that with a varying indoor temperature, while still correlating with the PMV and the subjects' declared thermal sensation. The measured oxyHb change may therefore represent a person's thermal comfort.
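
As a minimal illustration of the reported correlation analysis, the sketch below computes a Pearson correlation between a placeholder oxyHb series and a ramped indoor temperature; the data are synthetic, not the study's NIRS measurements:

```python
# Sketch: Pearson correlation between prefrontal oxyHb and indoor temperature.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
t = np.linspace(0, 30, 300)                         # minutes
temperature = 24 + 4 * np.sin(2 * np.pi * t / 30)   # ramped indoor temp, degC
oxy_hb = 0.05 * (temperature - 24) + 0.01 * rng.standard_normal(t.size)

r, p_value = pearsonr(oxy_hb, temperature)
print(f"r = {r:.2f}, p = {p_value:.1e}")
```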

Commentary by Dr. Valentin Fuster
2014;():V01AT02A090. doi:10.1115/DETC2014-35516.

Teledermatology is an application of telemedicine, i.e., clinical healthcare delivered using information technologies, to skin disorders. This innovative system is considered a potential answer to the decreasing number of physicians and their unequal geographic density, and a way to shorten lead times and the time needed to schedule a visit. Despite its indisputable medical interest and a mature technology, teledermatology struggles to demonstrate the benefit of large-scale deployment to decision makers. A key issue may be to consider teledermatology as a technological or organizational innovation that improves the conventional care delivery process, rather than as an independent mode of healthcare delivery. Using conventional care activity modeling, this paper takes the example of skin cancer management to generate the possible observed scenarios and their likelihood of occurrence in the >65-year-old population. Then, using time and cost as process indicators, we measure and evaluate the impact of a teledermatology process on the conventional care process. Numerical data are gathered from two experimental studies.
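
A minimal sketch of the scenario-weighted evaluation described above follows; the scenarios, probabilities, times, and costs are all hypothetical placeholders:

```python
# Sketch: expected time/cost indicators over probability-weighted care scenarios.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    probability: float      # occurrence in the target population
    lead_time_days: float   # time-to-care indicator
    cost: float             # cost indicator (arbitrary currency units)

def expected(scenarios, attr):
    """Probability-weighted mean of one process indicator."""
    return sum(s.probability * getattr(s, attr) for s in scenarios)

conventional = [
    Scenario("GP referral then dermatologist visit", 0.6, 45.0, 120.0),
    Scenario("direct dermatologist visit",           0.4, 60.0,  90.0),
]
teledermatology = [
    Scenario("GP photo + remote advice",             0.7,  7.0,  70.0),
    Scenario("remote triage then in-person visit",   0.3, 30.0, 140.0),
]

for attr in ("lead_time_days", "cost"):
    print(attr, expected(conventional, attr), expected(teledermatology, attr))
```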

Commentary by Dr. Valentin Fuster

34th Computers and Information in Engineering Conference: CAPPD: Modeling Tools and Metrics for Sustainable Manufacturing

2014;():V01AT02A091. doi:10.1115/DETC2014-34280.

A sustainable solution should holistically optimize all objectives related to the environment and a product’s cost and performance. As such, it should explicitly address material selection, which significantly affects the environmental impacts and other objectives of a product design. While Life Cycle Assessment (LCA) provides credible methods to account for environmental impacts, current methods are not efficient enough for the early design stages, where the entire design space must be pruned without executing a costly LCA for each design scenario. Surrogate modeling approaches can instead facilitate efficient concept selection during the early design stages. However, material properties consist of discrete data sets, which poses a significant challenge to the construction of surrogate models for numerical optimization.

In this work, we address the unique challenges of material selection in sustainable product design in several important ways. Salient features of the robust surrogate modeling approach include achieving manageable LCA dimensionality, with minimal loss of important information, by consolidating significant factors into categorized groups, followed by an efficiency gain from a streamlined process that avoids constructing the full LCA. This novel approach combines efficiency of use with a mathematically rigorous representation of the pertinent objectives across the entire design space. To this end, we introduce an adapted two-stage sampling approach for surrogate model construction, based at the first stage on a feasible approximation of a Latin Hypercube design. The development and implementation of the method are illustrated with an automotive disc brake design, and the results are discussed in the context of robust optimal material selection in early sustainable product design.
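
For illustration, the sketch below pairs a Latin Hypercube design with a Gaussian-process surrogate, in the spirit of the first-stage sampling described above; the response function stands in for a costly LCA run and is not the paper's model:

```python
# Sketch: Latin Hypercube sampling + Gaussian-process surrogate of an LCA score.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

d = 3                                              # e.g., three grouped factors
X = qmc.LatinHypercube(d=d, seed=0).random(n=20)   # space-filling design, [0,1]^d

def lca_score(x):                  # placeholder for an expensive full LCA run
    return x[:, 0]**2 + 0.5 * x[:, 1] - 0.2 * x[:, 2]

surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, lca_score(X))
estimate, sigma = surrogate.predict(np.array([[0.3, 0.8, 0.5]]), return_std=True)
print(estimate[0], sigma[0])       # cheap estimate + uncertainty, no LCA run
```

Once fitted, the surrogate can score any candidate design cheaply, so the design space can be pruned before committing to full LCA analyses.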

Commentary by Dr. Valentin Fuster
2014;():V01AT02A092. doi:10.1115/DETC2014-34793.

Disassembly, as one of the core steps in End of Life (EOL) activities, has been a popular topic of research in both industry and academia. It not only reduces product lifecycle cost, but also substantially influences environmental impact. Although different methods have been proposed for tackling different aspects of disassembly planning problems, certain gaps remain. For example, in disassembly sequencing, traditional methods focus mainly on geometry and topology constraints but omit important technical constraints such as force (gravity) and connector type, which makes these methods less efficient and less realistic. Also, the determination of an optimal disassembly sequence requires an extensive exchange and sharing of disassembly-related knowledge among different stakeholders such as manufacturers, product designers, maintenance staff, and materials engineers. A mechanism to support such information interoperability is important in the disassembly process. To address these research issues, this paper proposes a Semantic Web based Disassembly Planning Framework. In the framework, the proposed “Disassembly Core Ontology” (DCO) serves as a formal, explicit information core for different users such as product designers and disassemblers. By exploiting the rich semantic knowledge (such as gravity and connector type) explicitly embedded in the proposed DCO, it is demonstrated that the semantic web approach has the potential to address both the efficiency- and the interoperability-related issues in disassembly planning problems.
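
A speculative sketch of how DCO-style knowledge (part, connector type, a gravity-related support relation) could be encoded as RDF triples follows; the namespace and terms are invented for illustration and are not the paper's actual ontology:

```python
# Sketch: encoding disassembly knowledge as RDF triples with rdflib.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

DCO = Namespace("http://example.org/dco#")   # hypothetical namespace
g = Graph()
g.bind("dco", DCO)

# Class and property declarations (illustrative subset).
g.add((DCO.Part, RDF.type, RDFS.Class))
g.add((DCO.Connector, RDF.type, RDFS.Class))
g.add((DCO.hasConnector, RDF.type, RDF.Property))

# Instance data: a housing held by a threaded screw, resting on a base plate.
g.add((DCO.Housing, RDF.type, DCO.Part))
g.add((DCO.Screw_1, RDF.type, DCO.Connector))
g.add((DCO.Housing, DCO.hasConnector, DCO.Screw_1))
g.add((DCO.Screw_1, DCO.connectorType, Literal("threaded")))
g.add((DCO.Housing, DCO.supportedBy, DCO.BasePlate))  # gravity constraint

print(g.serialize(format="turtle"))
```

Such triples can then be queried (e.g., with SPARQL) by designers and disassemblers alike, which is one way the interoperability claim above could be realized.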

Commentary by Dr. Valentin Fuster

34th Computers and Information in Engineering Conference: CAPPD: Multimodal INTerfaces for Engineering Design

2014;():V01AT02A093. doi:10.1115/DETC2014-34068.

Simple line drawings and 2D sketches are commonly used by humans to convey ideas about a particular shape or shapes in an image. These approximations of shapes are effective means of visual communication and artistic practice. The idea of shape abstraction derives from such approximations, which retain a shape's most important and salient features: the key idea behind shape abstraction is to extract a simplified version of a shape that preserves the salient characteristics of the input shape. In this paper, we introduce and analyze a slightly different and novel facet of abstraction, which we call “partial to full shape recognition” of two-dimensional shapes (line drawings and sketches). The key idea is that recognizing partial 2D shapes leads to recognition of the full shape, utilizing the theory of recognition-by-components (RBC) and geons (human shape perception). We segment the 2D shapes according to the non-accidental relations provided by RBC and analyze the electroencephalogram (EEG) brain activity of subjects using a brain-computer interface (BCI) to gain knowledge of human understanding of such relations pertaining to specific partial-to-full shape correspondences.

Topics: Shapes
Commentary by Dr. Valentin Fuster
2014;():V01AT02A094. doi:10.1115/DETC2014-34070.

A key objective of a gesture-based computer-aided design (CAD) interface is to enable humans to manipulate 3D models in virtual environments in a manner similar to how such objects are manipulated in real life. In this paper, we outline the development of a novel real-time gesture-based conceptual CAD tool that enables intuitive hand-gesture interaction with a given design interface. Recognized hand gestures, along with hand position information, are converted into commands for rotating, scaling, and translating 3D models. In the presented system, gestures are identified based solely on the depth information obtained via inexpensive depth-sensing cameras (SoftKinetics DepthSense 311). Since the gesture recognition system is based entirely on depth images, the developed system is robust and insensitive to variations in lighting conditions, hand color, and background noise. The distance between the input hand shape and its nearest neighbor in the database is employed as the criterion for recognizing different gestures. Extensive experiments with a design interface are also presented to demonstrate the accuracy, robustness, and effectiveness of the presented system.
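
The nearest-neighbor matching rule described above can be sketched as follows; segmentation and normalization of the depth image are omitted, and the template arrays are placeholders:

```python
# Sketch: nearest-neighbor gesture recognition over depth-image templates.
import numpy as np

def recognize(query: np.ndarray, templates: np.ndarray,
              labels: list[str]) -> str:
    """templates: (n, h, w) stack of depth images; query: (h, w)."""
    dists = np.linalg.norm(
        templates.reshape(len(templates), -1) - query.ravel(), axis=1)
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(0)
db = rng.random((3, 64, 64))               # stand-in gesture templates
noisy_query = db[1] + 0.01 * rng.random((64, 64))
print(recognize(noisy_query, db, ["rotate", "scale", "translate"]))
```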

Commentary by Dr. Valentin Fuster
2014;():V01AT02A095. doi:10.1115/DETC2014-34449.

In this paper, 3D face recognition under isometric deformation (induced by facial expressions) is considered. The main objective is to employ shape descriptors that are invariant to (isometric) deformations to provide an efficient face recognition algorithm. Two correspondence methods are utilized for automatic landmark assignment to the query face: one based on the conventional iterative closest point (ICP) method, and the other based on geometrical/topological features of the human face. The shape descriptor is chosen to be the well-known geodesic distance (GD) measure. The recognition task is performed on the SHREC08 database for both correspondence methods, and the effects of the feature (GD) vector size and of the landmark positions on recognition accuracy are discussed.
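
One common way to obtain such a GD descriptor is sketched below, under the assumption that geodesics are approximated by shortest edge paths on the triangulated mesh (the paper's exact computation may differ):

```python
# Sketch: approximate geodesic distances on a mesh via Dijkstra over edges.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def edge_graph(vertices: np.ndarray, faces: np.ndarray) -> csr_matrix:
    """Sparse graph whose weights are Euclidean edge lengths."""
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    w = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    n = len(vertices)
    return csr_matrix((w, (edges[:, 0], edges[:, 1])), shape=(n, n))

# Tiny placeholder mesh: two triangles sharing an edge.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
F = np.array([[0, 1, 2], [1, 3, 2]])
D = dijkstra(edge_graph(V, F), directed=False, indices=[0])
print(D[0])   # edge-path geodesic distances from landmark 0 to all vertices
```

Because geodesic distances are (approximately) preserved under isometric deformation, a vector of such distances between landmarks changes little with facial expression.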

Commentary by Dr. Valentin Fuster
2014;():V01AT02A096. doi:10.1115/DETC2014-34816.

This paper investigates the proper synchronization of sketch data and cognitive states in a multi-modal CAD interface. In a series of experiments, 5 subjects were instructed to watch and then explain 6 mechanical mechanisms by sketching them on a touch-based screen. Simultaneously, the subjects' brain waves were recorded as electroencephalogram (EEG) signals from 9 locations on the scalp. The EEG signals were analyzed and translated into mental workload and cognitive state. A dynamic time window was then constructed to align these features with sketch features such that the combination of the two modalities maximizes the classification of gesture versus non-gesture strokes. Quadratic Discriminant Analysis (QDA) was used as the classification method. Our experimental results show that the best temporal alignment for workload and sketch analysis starts at a 30% time lag after the previous stroke and ends at a 30% time lag before the next stroke.
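
As an illustration of the classification step, the sketch below trains scikit-learn's QDA on a synthetic combined feature vector; the features and labels are invented, and the 30% lag windowing is reduced to a single scalar feature:

```python
# Sketch: QDA over combined EEG-workload + sketch features (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.random(n),   # mean EEG workload within the aligned time window
    rng.random(n),   # a stroke-level sketch feature (e.g., curvature)
    rng.random(n),   # lag fraction relative to neighboring strokes
])
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # toy gesture/non-gesture labels

qda = QuadraticDiscriminantAnalysis().fit(X[:150], y[:150])
print("held-out accuracy:", qda.score(X[150:], y[150:]))
```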

Commentary by Dr. Valentin Fuster
2014;():V01AT02A097. doi:10.1115/DETC2014-35442.

In the real world, we use our innate manual dexterity to create and manipulate 3D objects. Conventional virtual design tools largely neglect this skill by imposing non-intuitive 2D control mechanisms for interacting with 3D design models; their usage is thus cumbersome, time consuming, and requires training. We propose a novel design paradigm that combines users' manual dexterity with the physical affordances of ordinary, non-instrumented objects to support virtual 3D design construction. We demonstrate this paradigm through Proto-TAI, a quick-prototyping application in which 2D shapes are assembled into 3D representations of ideated design concepts. Users create 2D shapes in a pen-based sketch medium and use expressive handheld movements of a planar proxy to configure the shapes in 3D space; the proxy provides a metaphorical means of possessing and controlling the shapes, and a depth sensor with computer vision algorithms tracks the proxy's spatial movement. The 3D design prototype constructed in our system can be fabricated using a laser cutter and physically assembled on the fly. Our system has broad implications for many design and assembly contexts, and we demonstrate its usability and efficacy through user studies and evaluations.

Topics: Design
Commentary by Dr. Valentin Fuster
