
ASME Conference Presenter Attendance Policy and Archival Proceedings

2012;():i. doi:10.1115/DETC2012-NS2.

This online compilation of papers from the ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2012) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.


32nd Computers and Information in Engineering Conference: 3D Interaction Techniques

2012;():3-12. doi:10.1115/DETC2012-70822.

Since 2010, when the Microsoft Kinect with its integrated depth-sensing camera appeared on the market, completely new kinds of interaction techniques have been integrated into console games. They require no instrumentation, complicated calibration, or time-consuming setup. Despite these benefits, some drawbacks remain. Most games only let the user perform very simple gestures such as waving, jumping, or stooping, which does not reflect the user's natural behavior. In addition, depth-sensing technology lacks haptic feedback. While we cannot solve the lack of haptic feedback, we want to improve whole-body interaction. Our goal is to develop 3D interaction techniques that give the user maximum freedom and enable precise and immersive interactions.

This work focuses on whole-body interaction in immersive virtual environments. We present 3D interaction techniques that provide the user with maximum freedom and enable precise, immersive operation in virtual environments. Furthermore, we present a user study in which we analyzed how navigation and manipulation techniques can be performed through users' body interaction using a depth-sensing camera and a large projection screen. Three alternative approaches were developed and tested: classical gamepad interaction, an indirect pointer-based interaction, and a more direct whole-body interaction technique. We compared their effectiveness and precision. It turned out that users act faster when using the gamepad, but generate significantly more errors at the same time. With the depth-sensing-based whole-body interaction techniques, the interaction proved to be much more immersive, natural, and intuitive, even if slower. We show the advantages of our approach and how it can be used in various domains, more effectively and efficiently for their users.

Topics: Navigation
2012;():13-17. doi:10.1115/DETC2012-70891.

This paper presents a haptic device with 3D computer graphics as part of a high-fidelity medical epidural simulator development program. The haptic device is used as an input to move the needle in 3D and also to generate force feedback to the user during insertion. A needle insertion trial was conducted on a porcine cadaver to obtain force data. The data generated from this trial were used to recreate the feel of epidural insertion in the simulator. The interaction forces are approximated by the resultant force obtained during the trial, which is the force reproduced by the haptic device. The haptic device is interfaced with the 3D graphics for visualization. As the haptic stylus is moved, the needle moves on the screen, and the depth of the needle tip indicates which tissue layer is being penetrated. Different forces are generated by the haptic device for each tissue layer as the epidural needle is inserted. As the needle enters the epidural space, the force drops to indicate loss of resistance.
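
As a rough illustration of the layer-dependent force rendering described above, the sketch below returns a resistance force from a depth lookup table; the layer names, depth boundaries, and force magnitudes are hypothetical placeholders, not the porcine trial data.

```python
# Illustrative sketch: piecewise force feedback vs. needle depth (all values hypothetical).
TISSUE_LAYERS = [
    # (layer name, depth at which the layer ends in mm, resistance force in N)
    ("skin",            4.0, 6.0),
    ("fat",            12.0, 3.5),
    ("ligament",       35.0, 9.0),
    ("epidural space", 40.0, 0.5),  # force drops here: loss of resistance
]

def feedback_force(depth_mm: float) -> float:
    """Return the resistive force to render on the haptic stylus at a given depth."""
    for _, end_depth, force in TISSUE_LAYERS:
        if depth_mm <= end_depth:
            return force
    return 0.0  # needle is past all modelled layers

# A haptic servo loop would call feedback_force(current_depth) each tick and
# command the device to resist insertion with that magnitude.
```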

Topics: Simulation, Haptics
2012;():19-28. doi:10.1115/DETC2012-71427.

We present the paradigm of natural and exploratory shape modeling by introducing novel 3D interactions for creating, modifying and manipulating 3D shapes using arms and hands. Though current design tools provide complex modeling functionalities, they remain non-intuitive and require significant training, since they segregate 3D shapes into hierarchical 2D inputs, thus binding the user to stringent procedural steps and making modifications cumbersome. In addition, designers typically already know what to design before they turn to CAD systems, so the creative exploration in design is lost. We present a shape creation paradigm as an exploration of creative imagination and externalization of shapes, particularly in the early phases of design. We integrate the capability of humans to express 3D shapes via hand-arm motions with a traditional sweep surface representation to demonstrate rapid exploration of a rich variety of fairly complex 3D shapes. We track the skeleton of users using the depth data provided by a low-cost depth-sensing camera (Kinect™). Our modeling tool is configurable to provide a variety of implicit constraints for shape symmetry and resolution based on the position, orientation and speed of the arms. Intuitive strategies for coarse and fine shape modifications are also proposed. We conclusively demonstrate the creation of a wide variety of product concepts and show an average modeling time of only a few seconds while retaining the intuitiveness of communicating the design intent.
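
As a rough sketch of how tracked hand positions could drive a sweep-surface representation, the code below lofts circular cross-sections along a sampled path; the path data, the radii, and the absence of a real Kinect interface are all simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def sweep_surface(path_points, radii, segments=16):
    """Build a crude tube-like sweep surface: one ring of vertices per tracked hand sample.
    path_points: (N, 3) hand positions; radii: per-sample cross-section radius."""
    rings = []
    for p, r in zip(np.asarray(path_points, float), radii):
        angles = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
        # For simplicity each section lies in the XY plane; a real sweep would
        # orient each ring with the local tangent frame of the path.
        ring = np.column_stack((p[0] + r * np.cos(angles),
                                p[1] + r * np.sin(angles),
                                np.full(segments, p[2])))
        rings.append(ring)
    return np.stack(rings)  # (N, segments, 3) grid of surface vertices

# Example: a straight vertical path with a varying radius (stand-in for Kinect samples).
path = [(0.0, 0.0, z) for z in np.linspace(0.0, 1.0, 20)]
radii = 0.2 + 0.05 * np.sin(np.linspace(0.0, np.pi, 20))
verts = sweep_surface(path, radii)
```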

Topics: Shapes
2012;():29-37. doi:10.1115/DETC2012-71543.

This paper discusses the development of a new bimanual interface configuration for virtual assembly, consisting of a haptic device at one hand and a 6DOF tracking device at the other. The two devices form a multimodal interaction configuration facilitating unique interactions for virtual assembly. Tasks for virtual assembly can consist of both "one hand one object" and "bimanual single object" interactions. For one-hand-one-object interactions, this device configuration offers advantages in terms of increased manipulation workspace and provides a tradeoff between cost effectiveness and mode of feedback. For bimanual single-object manipulation, an interaction method developed using this device configuration improves realism and facilitates variation in the precision of bimanual single-object orientation tasks. Furthermore, another interaction method that expands the haptic device workspace using this configuration is introduced. The applicability of both methods to the field of virtual assembly is discussed.


32nd Computers and Information in Engineering Conference: Advanced Modeling and Simulation, General

2012;():39-46. doi:10.1115/DETC2012-70009.

In this study, we focus on the implementation of numerical methods for a two-fluid system that includes the surface tension effect in the momentum equations. The model consists of a complete set of eight equations, comprising two mass, four momentum, and two internal-energy conservation equations, and has all real eigenvalues. Based on this equation system and an upwind numerical method, we first develop a pilot two-dimensional computer code and then solve several benchmark problems in order to check whether this model and numerical method can properly analyze some fundamental two-phase flow systems.
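
For orientation only, one phasic pair of balance laws of the kind referred to above can be written in the generic form

\[
\frac{\partial (\alpha_k \rho_k)}{\partial t} + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k) = 0,
\qquad
\frac{\partial (\alpha_k \rho_k \mathbf{u}_k)}{\partial t} + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k \otimes \mathbf{u}_k) + \alpha_k \nabla p_k = \mathbf{M}_k,
\]

where \(\alpha_k\), \(\rho_k\), \(\mathbf{u}_k\), and \(p_k\) are the volume fraction, density, velocity, and pressure of phase \(k\), and \(\mathbf{M}_k\) collects interfacial momentum exchange and surface tension contributions; the phasic internal energy equations complete the eight-equation set. The exact form and source terms of the authors' system are not reproduced here.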

2012;():47-52. doi:10.1115/DETC2012-70042.

In this study, a Sugeno-type ANFI model, which describes the relationship between the biosurfactant concentration as the model output and the critical medium components as its inputs, has been constructed. The critical medium components are glucose, urea, SrCl2 and MgSO4. The experimental data for training and testing the model were obtained from a statistical experimental design reported in the literature. Six generalized bell-shaped membership functions were selected for each input variable, and the model was trained on the training data using the hybrid learning algorithm. The biosurfactant concentration values predicted by the model show close agreement with the experimental data.
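
The generalized bell-shaped membership function mentioned above has a standard closed form; the sketch below evaluates it for a hypothetical input value, with the shape parameters a, b, c chosen arbitrarily for illustration.

```python
def generalized_bell(x, a, b, c):
    """Generalized bell-shaped membership function: 1 / (1 + |(x - c) / a|**(2b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

# Hypothetical example: membership of a glucose concentration (g/L) in a "medium" fuzzy set.
mu = generalized_bell(x=22.0, a=5.0, b=2.0, c=20.0)
print(f"membership degree = {mu:.3f}")
```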

2012;():53-65. doi:10.1115/DETC2012-70224.

The purpose of this investigation is to demonstrate the use of the finite element (FE) absolute nodal coordinate formulation (ANCF) and multibody system (MBS) algorithms in modeling both the contact geometry and ligament deformations in biomechanics applications. Two ANCF approaches can be used to model the rigid contact surface geometry. In the first approach, fully parameterized ANCF volume elements are converted to surface geometry using a parametric relationship that reduces the number of independent coordinate lines. This parametric relationship can be defined analytically or using a spline function representation. In the second approach, an ANCF surface that defines a gradient deficient thin plate element is used. This second approach does not require the use of parametric relations or spline function representations. These two geometric approaches shed light on the generality of and the flexibility offered by the ANCF geometry as compared to computational geometry (CG) methods such as B-splines and NURBS (Non-Uniform Rational B-Splines). Furthermore, because B-spline and NURBS representations employ a rigid recurrence structure, they are not suited as general analysis tools that capture different types of joint discontinuities. ANCF finite elements, on the other hand, lend themselves easily to geometric description and can additionally be used effectively in the analysis of ligaments, muscles, and soft tissues (LMST), as demonstrated in this paper using the knee joint as an example. In this study, ANCF finite elements are used to define the femur/tibia rigid body contact surface geometry. The same ANCF finite elements are also used to model the MCL and LCL ligament deformations. Two different contact formulations are used in this investigation to predict the femur/tibia contact forces: the elastic contact formulation, where penetrations and separations at the contact points are allowed, and the constraint contact formulation, where the non-conformal contact conditions are imposed as constraint equations and, as a consequence, no separations or penetrations at the contact points are allowed. For both formulations, the contact surfaces are described in a parametric form using surface parameters that enter into the ANCF finite element geometric description. A set of nonlinear algebraic equations that depend on the surface parameters is developed and used to determine the location of the contact points. These two contact formulations are implemented in a general MBS algorithm that allows for modeling rigid and flexible body dynamics.

2012;():67-73. doi:10.1115/DETC2012-70312.

New fabrication methods for topologically complex monolithic ceramic components with accurate dimensions are being investigated. A common problem in the fabrication of precision ceramic components is controlling the forming process to attain uniform density in the green body; otherwise, the tolerances achieved with green ceramics do not carry over to acceptable tolerances on the finished ceramic, due to distortion and warping that occur during sintering. One of the fabrication methods under study is the fugitive phase approach, in which a sacrificial material is used to form the desired channels and cavities. This paper is a continuation of previously presented work and examines the lamination step of the fugitive phase approach. In the lamination step, the green (pre-sintered) ceramic parts are layered with the sacrificial material parts and pressed together to remove air voids. During pressing, uneven pressure distributions can be created in the green ceramic, and the fugitive phase parts can be slightly displaced or rotated. A computational model of the lamination process is used to examine how the material plasticity of the green ceramic, the computational boundary conditions, and the pressing duration affect the geometry produced at the end of the lamination step prior to sintering. The resulting stress, plastic strain, and deformed shapes are examined and compared. This information is used to complement experimental investigations of the fugitive phase approach.

Topics: Ceramics, Modeling
2012;():75-80. doi:10.1115/DETC2012-70337.

Pyramidal three-roll bending is widely used in manufacturing due to its simple configuration and its advantages for thick plate roll bending. However, two planar zones remain near the front and rear ends of the bent shape. A mecano-welding process that provides improved circularity of the bent shape is proposed in this paper. The process includes three sub-processes: the first is the roll bending of a plate with cylindrical rolls; the second is a gas metal arc welding process used to join the gap of the bent tubular section; the third is a rerun of roll bending on the welded shape. Results of the numerical simulation of the first two sub-processes in the well-known ANSYS and ANSYS/LS-DYNA environments are reported. The bent shape after the first roll bending and the temperature and residual stress distributions after welding are illustrated and discussed.

2012;():81-87. doi:10.1115/DETC2012-70341.

Simple data fitting schemes involve a tradeoff between the number of terms in the representation and the accuracy of the fit, due to the finite number of digits used to represent real numbers. This is also true for functional representation of data, where a known family of functions must be chosen to represent the data. Often these functions must also establish higher-order continuity in the data. In practice, polynomial representation of data is very popular, either directly or indirectly through parameterized descriptions such as NURBS. In this case the function approximation requires determining the constants that multiply the polynomial terms. Bezier functions, a special B-spline, can represent the data and its derivatives with fidelity. These functions are established by minimizing the squared error between the data and the representation. The Bezier representation can be determined numerically through simple matrix operations. The accuracy increases with the order of the polynomials until round-off error becomes a factor. A completely numerical approach computes faster but limits the order of the polynomial due to round-off error. Symbolic processing, on the other hand, extends precision with more terms in the representation, but is difficult to determine explicitly. Current symbolic software systems allow translation of the symbolic representation into an efficient numerical representation for the application of optimization. This combination improves the accuracy of the representation by postponing round-off errors. This is demonstrated by application to two examples, one two-dimensional and the second three-dimensional.
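
A minimal, purely numerical version of the least-squares Bezier (Bernstein-basis) fit described above might look like the following; the sample data and polynomial order are arbitrary, and the symbolic-precision stage discussed in the paper is omitted.

```python
import numpy as np
from math import comb

def bernstein_matrix(t, n):
    """Rows: parameter values t in [0, 1]; columns: Bernstein basis B_{i,n}(t)."""
    t = np.asarray(t, float)
    return np.column_stack([comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)])

def fit_bezier(t, y, degree):
    """Least-squares Bezier coefficients c minimizing ||B c - y||^2."""
    B = bernstein_matrix(t, degree)
    coeffs, *_ = np.linalg.lstsq(B, np.asarray(y, float), rcond=None)
    return coeffs

# Arbitrary sample data: a noisy sine evaluated on a normalized parameter.
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * t) + 0.01 * np.random.randn(t.size)
c = fit_bezier(t, y, degree=8)
y_fit = bernstein_matrix(t, 8) @ c   # fitted values at the sample parameters
```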

2012;():89-95. doi:10.1115/DETC2012-70466.

In finite element analysis (FEA), tasks such as mesh optimization and mesh morphing can lead to overlapping elements, i.e., to a tangled mesh. Such meshes are considered ‘unacceptable’ today, and are therefore untangled using specialized procedures.

Here it is shown that FEA can be easily extended to handle tangled meshes. Specifically, by defining the nodal functional space as an oriented linear combination of the element shape functions, it is shown that the classic Galerkin formulation leads to a valid finite element formulation over such meshes.

Patch tests and numerical examples illustrate the correctness of the proposed methodology.

2012;():97-109. doi:10.1115/DETC2012-70652.

Being able to quickly model and simulate very early design solutions is an important practical issue for engineering designers. Early design is characterized by the small amount of quantitative data available at the beginning of the development process. The task becomes even more cumbersome when engineers do not possess extensive knowledge of the domain of interest. In this context, traditional modeling and simulation methods are disqualified from supporting early engineering design choices because they require too much detail and too much precise quantitative information. The approach considered in this article to remedy the deficiency of traditional modeling methods combines three domains of physics and mathematics: qualitative physics, dimensional analysis, and graph-based representation. The present article develops the general framework that emerges from this combination. The authors apply the framework to the fast modeling and simulation of an air bearing. The structure of the article is as follows: first, the basis of the modeling and simulation method is briefly presented. In a second step, the entire approach is developed on the case of an air bearing concept, with the goal of making the presentation as pedagogical as possible. A causal ordering heuristic is used and combined with the topology of the concept to test. This gradually leads to a causal graph, which is transformed into a flow graph that can be simulated in system dynamics simulation tools. A new and easy approach to discovering the laws governing the system dynamics model is also explained. Finally, the model is simulated and analyzed.

As a result, the method presented in this article offers several advantages: (1) it can be supported by a dedicated computer-aided approach, and (2) it brings simulation capabilities to the very early design stages, where they are seldom present.

Topics: Simulation, Design, Modeling
2012;():111-123. doi:10.1115/DETC2012-70748.

Bonded multi-material assemblies arise frequently in design, manufacturing, architecture, and materials design. It is common wisdom that finite element analysis of such assemblies usually requires all components to be represented by compatible finite element meshes; the application of meshfree methods in such situations is often considered problematic due to the need to impose additional interface conditions. Neither approach scales to the realistically complex models arising in many applications.

We propose a simple extension of meshfree analysis on a non-conforming mesh for linear structural analysis of such multi-material assemblies. The method is simple, can be implemented within most FEA packages, and requires neither compatible meshing nor complex interface boundary conditions. Our numerical experiments demonstrate that the computed results are in good agreement with known analytical and computational results for well-studied multi-material bonded assemblies (lap and butt joints). We also demonstrate application of the proposed method to a realistically complex assembly of a mounted sculpture that cannot be easily analysed by other methods.

2012;():125-130. doi:10.1115/DETC2012-70781.

Flexural behavior of printed circuit boards (PCBs) is well known as the major failure mechanism in board-level and product-level mobile phone drop tests. This behavior induces high peeling stress between the PCB and the IC package. This stress causes failures, including both solder joint cracking and pad cratering, which lead to malfunctions such as phone death or power off. Therefore, for a more reliable mobile phone design, it is important to accurately predict the behavior of the PCB. In the past, isotropic or orthotropic linear elastic models have been used for simulating PCBs in finite element analysis. Also, since a PCB consists of multiple layers of woven glass fiber epoxy resin composite (FR-4) and copper foils, a multilayered PCB model was developed in order to account for material properties that change across the different plies. In this paper, an isotropic elastoplastic model was employed in order to efficiently predict the behavior of the PCB. Tensile and flexural tests of the PCB were conducted initially to evaluate its mechanical characteristics and obtain representative material properties. Then, simulation of the flexural test was performed to develop the finite element model. Finally, a drop test of a mobile phone fitted with a bare PCB, which did not include IC packages, was examined, and a strain gage was used to measure the strain of the PCB. The results were compared with drop simulation results for the mobile phone using the suggested finite element model. In conclusion, from an industry standpoint, finite element modeling of the PCB using an isotropic elastoplastic model proved useful and efficient.

2012;():131-136. doi:10.1115/DETC2012-70787.

Recently, mobile phone manufacturers have had to meet high demands that their products endure the variety of harsh environments to which they are exposed throughout customer use. In particular, mobile phones need to resist the high humidity and general water exposure that are prevalent in everyday usage. However, most previous studies related to seal materials or sealing assembly methods have focused on heavy industry or the automotive industry. In this paper, the aim is to predict the waterproofing capability of a mobile phone using finite element analysis at the design stage. The waterproofing criterion for mobile phones was based on the IEC 60529 IPX7 level [1]. This paper studies the behavior characteristics and properties of the specialized rubber material used for sealing the mobile phone housing. The constitutive behavior of the specialized seal rubber material was modeled with a third-order Ogden function. Then, the correlation between tests and the finite element model was studied. Using the correlated finite element model for the specialized seal rubber, the behavior characteristics of the 2D seal rubber cross-section were evaluated, and the waterproofing capability of a 3D model of the mobile phone was analyzed.

The proposed approach is expected to predict the waterproofing capability of a mobile phone efficiently. The 2D finite element evaluation method will be useful for deciding the design specification of the seal rubber shape at the preliminary design step, and the 3D finite element evaluation method will predict waterproofing capability before tooling the mold, saving cost at the development step in industry.
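
For reference, a third-order Ogden strain energy function of the kind mentioned above is commonly written (in one common convention) as

\[
W(\lambda_1,\lambda_2,\lambda_3) = \sum_{i=1}^{3} \frac{\mu_i}{\alpha_i}\left(\lambda_1^{\alpha_i} + \lambda_2^{\alpha_i} + \lambda_3^{\alpha_i} - 3\right),
\]

where \(\lambda_1,\lambda_2,\lambda_3\) are the principal stretches and the material constants \(\mu_i,\alpha_i\) are fitted to the seal rubber test data; the specific constants used in the paper are not reproduced here.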

2012;():137-146. doi:10.1115/DETC2012-70804.

Complex 3D shapes in engineering design, such as stairs and compound holes, often contain various geometric face types, adjacent geometric constraints and typical design semantics. These 3D shapes can be defined as template design features because they are frequently reused in synchronous shape modeling to greatly accelerate the design and analysis process. To recognize complex template design features, we present a new multi-level attributed graph based shape matching approach. The core idea is the Multi-level Attributed Graph (MLAG), which describes not only the geometric face types, such as planes, and the geometric relationships between adjacent faces, such as tangency, which are usually used for 3D shape retrieval and manufacturing feature recognition, but also the design semantic features spanning multiple adjacent or even non-adjacent faces, such as a boss. Such design semantic features greatly benefit the MLAG matching process, so that a complex template design feature can be efficiently recognized for design modification. Finally, two experiments covering several kinds of template feature recognition are presented to demonstrate the effectiveness of the proposed method.
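
A minimal data structure for an attributed face-adjacency graph of the kind described above is sketched below; the attribute vocabulary (face types, adjacency relations, semantic tags) is illustrative and does not reproduce the paper's exact multi-level definition.

```python
from dataclasses import dataclass, field

@dataclass
class FaceNode:
    face_id: int
    face_type: str                                     # e.g. "plane", "cylinder", "cone"
    semantic_tags: set = field(default_factory=set)    # e.g. {"boss"}, spanning several faces

@dataclass
class AdjacencyEdge:
    face_a: int
    face_b: int
    relation: str                                      # e.g. "convex", "concave", "tangent"

@dataclass
class AttributedGraph:
    nodes: dict = field(default_factory=dict)          # face_id -> FaceNode
    edges: list = field(default_factory=list)          # AdjacencyEdge records

    def add_face(self, node: FaceNode):
        self.nodes[node.face_id] = node

    def connect(self, a: int, b: int, relation: str):
        self.edges.append(AdjacencyEdge(a, b, relation))

# Template matching would then search for subgraphs of the part graph whose node
# and edge attributes are consistent with those of a stored template feature.
```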

Topics: Design, Shapes
2012;():147-156. doi:10.1115/DETC2012-70856.

Hexahedral mesh generation is difficult and time-consuming. To avoid complicated hexahedral mesh regeneration after each variational design change, hexahedral mesh editing can be used. In this paper, an accurate hexahedral mesh cutting approach using a geometric model is proposed, in which the part of the geometric model inside the mesh model can be complex and arbitrary. In the approach, all the newly added geometric entities resulting from mesh cutting are first generated by performing a subtraction operation between the mesh model and the geometric model. Then, for each newly added geometric element, including points, edges and faces, the mesh nodes that need to be moved onto it are determined and repositioned with mesh quality taken into account. To ensure sensible node assignment, for each newly added edge the mesh nodes are identified using a shortest-path algorithm. Finally, the mesh elements that should not be in the resultant mesh model are deleted, and Pillowing and Smoothing operations are further conducted to improve mesh quality. Some preliminary results are given to show the feasibility of the approach.

Topics: Cutting
2012;():157-163. doi:10.1115/DETC2012-70893.

The extended finite element method (XFEM) locally enriches the finite element solution using an a priori known analytical solution. XFEM has been used extensively in fracture mechanics to compute stress concentration at crack tips. It is a mesh-independent method that allows the crack to be represented as an equation instead of being approximated by the mesh. When this approach is used along with the Implicit Boundary Finite Element Method (IBFEM) to apply boundary conditions, a fully mesh-independent approach for studying crack tip stresses can be implemented. An efficient scheme for blending the enriched solution structure with the underlying finite element solution is presented. A ramped step function is introduced for modeling a discontinuity or crack within an element. The exact analytical solution is used as enrichment at the crack tip element to obtain the stress intensity factor (SIF) directly, without any post-processing or contour integral computation. Several examples are used to study the convergence and accuracy of the solution.
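
For context, the enriched displacement approximation commonly used in XFEM takes the form

\[
\mathbf{u}^h(\mathbf{x}) = \sum_{i \in I} N_i(\mathbf{x})\,\mathbf{u}_i
 + \sum_{j \in J} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j
 + \sum_{k \in K} N_k(\mathbf{x}) \sum_{l=1}^{4} F_l(r,\theta)\,\mathbf{b}_k^l,
\]

where \(H\) is a step function across the crack faces (replaced in this paper by a ramped step function) and \(F_l\) are the near-tip branch functions, e.g. \(\sqrt{r}\sin\frac{\theta}{2}\), \(\sqrt{r}\cos\frac{\theta}{2}\), \(\sqrt{r}\sin\frac{\theta}{2}\sin\theta\), \(\sqrt{r}\cos\frac{\theta}{2}\sin\theta\). The blending scheme and IBFEM coupling are specific to the paper and are not reproduced here.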

2012;():165-176. doi:10.1115/DETC2012-71055.

This paper describes the current status of ongoing work on developing a comprehensive modeling and simulation infrastructure capable of addressing the multiphysics behavior of rough surfaces in contact. The electrical and thermal response of bodies in contact under the influence of mechanical loads, electric currents and thermal fluxes is a topic of interest for many application areas. We present a multiscale theory leading to derivations of expressions for the electric and thermal conductivities in the case of static contact. The associated model contains both an asperity-based comprehensive model and its continuum-level coupling. The mechanical pressure, the repulsion effect of electric current through the micro-contacts, and the temperature and strain-rate dependence of the plastic behavior of the asperities are accounted for as well. This formalism enables the derivation of physical properties of the interface between two rough surfaces in contact from surface topography and bulk material properties. Numerical analysis results show the dependence of the derived properties on the surface characteristics, the applied external load and the electric current.

2012;():177-186. doi:10.1115/DETC2012-71193.

Double skin facades (DSFs) provide a means of enhancing the energy-saving capabilities of buildings. By responding dynamically to changing ambient conditions using natural ventilation, shading devices, and/or thermal insulation devices or strategies, DSFs are being incorporated into modern architecture and even retrofitted into some older structures to reduce the energy required to balance the load input into the building. Using a general building model and weather conditions and integrating various DSF designs, a comparative study can be made to support or oppose the different design changes being considered. The analysis of the set-up is performed with Fluent, a computational fluid dynamics (CFD) software package, which solves the Navier-Stokes equations and the turbulent flow using the finite volume method. The results show that the energy necessary to power the HVAC system decreases with certain configurations.

2012;():187-196. doi:10.1115/DETC2012-71308.

Although a large number of crash tests have been performed between passenger cars and rigid fixed traffic signs, the number of real tests focusing on the crashworthiness of portable roll-up signs is still limited. Because a standard portable roll-up sign contains at least three kinds of dissimilar materials, such as steel for the base, fiberglass for the batten and vinyl for the sign, and because its configuration is more complicated than that of a rigid fixed sign, it is important to simulate the behavior of portable roll-up signs in collisions. In this paper, a fine-mesh finite element model precisely representing the portable roll-up sign was created and used together with a car model to simulate the impact process at 0 and 90 degree orientations. The simulation was performed using LS-DYNA software. Techniques for creating the finite element model are discussed. Once validated and verified through real tests, this finite element model can be used for parametric and/or robust design.

2012;():197-206. doi:10.1115/DETC2012-71508.

Analyzing complex 3D assembly models using finite element analysis software requires suppressing parts that are unlikely to influence the analysis results; doing so can significantly improve computational performance during the analysis. The part suppression step often depends on many factors within the context and application of the model. Currently, most analysts perform this step manually, which can take a long time on a complex model and can be tedious in nature. In this paper, we present an approach to multi-part suppression based on specified criteria. We have developed utilities in the Pro/Engineer CAD system to identify parts that meet the specified criteria and suppress them. We present several examples to illustrate the value of the proposed approach.

2012;():207-215. doi:10.1115/DETC2012-71518.

High strength steel is widely used in the manufacture of parts subjected to heavy cyclic loads and corrosive environments. However, processing this type of steel is not easy, and it becomes a hard-to-solve problem when the part to produce is large, thick and quasi-unique. One example of a thick high strength steel axisymmetric part is the conical crown of a Francis turbine runner. Some Francis turbine runners installed in the dam basement of a hydraulic power plant are 10 meters in diameter and more than 5 meters in height, while plate thickness can exceed 100 millimeters. Several processes can be envisaged for the manufacturing of such large parts (welding or casting…), but few can deliver one within a reasonable time and at competitive cost. Among them, the roll bending process, which causes plastic deformation of a plate around a linear axis with little or no change in plate thickness, is considered an interesting alternative.

The main objective of this research is to assess 3D dynamic finite element and analytical models for the computation of the bending forces during the manufacturing of hollow conical parts made from thick, high strength steel plate. Numerous parameters such as thickness, curvature, part size, material properties and friction directly influence the reaction forces on the rolls. The results of this research therefore provide a better understanding of the phenomena taking place in the process, and an opportunity to establish relationships between the bending forces and the parameters of the final conical part.

2012;():217-222. doi:10.1115/DETC2012-71549.

The absence of reliable analytical tools to model the thermoforming of automobile headliners leads to a lack of understanding of the critical relationships between design, material and process, resulting in large proportions of scrap and, in some cases, over-engineering. Thus, a finite element analysis (FEA) of the thermoforming of a composite sheet material has been formulated and implemented. A homogenization approach is suggested, in which the layered composite is treated as a single material. By mapping key process variables into the analysis, a simulation has been successfully carried out on a headliner prototype development tool. Results obtained with this development tool show good correlation with key headliner defects such as wrinkles and failures.


32nd Computers and Information in Engineering Conference: Computer-Aided Product and Process Development, General

2012;():223-230. doi:10.1115/DETC2012-70109.

Assembly time estimation is an important aspect of mechanical design and matters to many users throughout the life-cycle of a product. Many current assembly time estimation tools require information which is not available until the product is in the production phase. Furthermore, these tools often require subjective inputs, which limits the degree of automation provided by the method. The assembly of a vehicle depends on information about the product and information describing the process. The research presented in this paper explains the development and testing of an assembly time estimation method that uses process language as the input for the analysis.

Topics: Manufacturing
2012;():231-236. doi:10.1115/DETC2012-70245.

Computer Aided Process Planning (CAPP) links the design and manufacture of a machined product, defining how the product itself will be manufactured. Decisions made during this phase can have a significant impact on product cost, quality and build time; therefore, it is important that process planners have intuitive tools to aid them in effectively creating process plans. However, despite being a strong research area, the actual application of CAPP systems in industry is limited, and modern 3D digital tools in this area have not been researched to any real degree.

Traditional process planning is carried out either manually or via a CAPP interface and, from this activity, a set of instructions is generated for the shop floor. However, these CAPP processes can be time-consuming and subject to inconsistencies. Current research seeks to automate the generation of work instructions by using previous designs and/or artificial intelligence. However, due to the complexity of manufacturing a wide range of products, the limited range of tools available and the differing skills of the workforce, it is difficult to reach a generic solution for practical application.

The novel pilot study given in this paper presents one of the first pieces of research comparing and contrasting a traditional manual approach to machined part process planning with an alternative haptic virtual environment. Within this, an operator can simulate the machining of a simple part using a virtual drilling and milling process via a haptic routing interface. All of the operator input is logged in the background with the system automatically generating shop floor instructions from this log file.

Findings show that users found the virtual system more intuitive and that it required less mental workload than the traditional manual methods. Their perception was also that, going forward, they would need less support for learning and would progress to final planning solutions more quickly.

2012;():237-246. doi:10.1115/DETC2012-70373.

This paper deals with the development of a computational framework that aims to facilitate innovative endeavours during product development. A problem analysis revealed that both stakeholder collaboration and artefact simulation are considered important for supporting innovation, the former through the availability of more ideas and the latter for analysing the feasibility of these ideas prior to their implementation in practice. However, it was observed that the use of collaboration tools and simulation tools is still lacking in practice, mainly due to a lack of awareness or a perceived lack of necessity. This research thus aims to develop a framework that helps support innovation in product development through the use of these tools. From the preliminary evaluation of the solution developed, it was found that the portal and its underlying framework are indeed useful for supporting innovation.

2012;():247-255. doi:10.1115/DETC2012-70408.

Effective and efficient processes characterize successful companies in mechanical engineering and related industrial sectors. Methods and tools of Virtual Prototyping and Simulation (VPS) are becoming more and more accepted in these processes. The integration of these methods and tools into the processes and the PDM/PLM infrastructure of the company is a success factor in the development of complex technical systems. Small and medium-sized enterprises (SMEs) in particular often run isolated applications that are insufficiently integrated into the development process and the PDM/PLM infrastructure.

This paper introduces the VPS-Benchmark, an adaptable maturity model for performance evaluation and improvement in small and medium-sized enterprises (SMEs) with a focus on VPS. By giving an overview of existing approaches for performance evaluation and improvement, we point out the demand for a new maturity approach meeting the requirements of SMEs. The new maturity model is then introduced. After explaining the basic concept of the model, we describe its development and application. The model offers systematic performance improvement by providing concrete measures. These measures are derived from the comparison of current performance with a company-specific target state. This individual target state is determined by the company class. The whole approach is based on a software-supported questionnaire that allows for self-assessment in SMEs.

2012;():257-265. doi:10.1115/DETC2012-70573.

The optimization of mixed-integer problems is a classic problem with many industrial and design applications. A number of algorithms exist for the numerical optimization of these problems, but the robust optimization of mixed-integer problems has been explored to a far lesser extent. We present here a general methodology for the robust optimization of mixed-integer problems using Non-Uniform Rational B-spline (NURBS)-based metamodels and graph theory concepts. The use of these techniques allows for a new and powerful definition of robustness along integer variables. In this work, we define robustness as an invariance in problem structure, as opposed to insensitivity in the dependent variables. The application of this approach is demonstrated on two test problems. We conclude with a performance analysis of our new approach, comparisons to existing approaches, and our views on the future development of this technique.

Topics: Optimization
2012;():267-275. doi:10.1115/DETC2012-70576.

In order to conduct engineering analysis efficiently, a complex CAD model is generally idealized by dimension reduction of its local thin regions into mid-surfaces, which results in a mixed-dimensional model. However, such dimension reduction inevitably induces analysis errors when plate or shell theory is applied to the mixed-dimensional model.

In this paper, an evaluation indicator is proposed for estimating the analysis error induced by dimension reduction of an original model into a mixed-dimensional model, and it is used to control the analysis results of the mixed-dimensional model to a given accuracy. The evaluation indicator is defined as the stress difference on the coupling interface between the mixed-dimensional model and the original model. When the mixed-dimensional model is analyzed, p-version solid elements are generated by offsetting the shell nodes in the thickness direction. Moreover, the element stiffness matrices, boundary conditions and material properties can be extracted from the analysis results and reused for the indicator computation. Displacements of the mixed-dimensional model are input as initial values to the iterative solver to accelerate the computation. When the indicator is below the required accuracy, the final analysis can proceed with p-adaptivity in the thin regions. The hierarchical shape functions of the p-version solid elements ensure the efficiency of the error estimation and the reliability of the final analysis. The robustness of the evaluation indicator and the computational efficiency of the final analysis are illustrated by experiments on engineering models.

Topics: Dimensions, Errors
2012;():277-286. doi:10.1115/DETC2012-70604.

In complex product development, coordination is the act of managing dependencies between artifacts. Socio-technical coordination is the achievement of coordination through the alignment of organizational structures and product structures. Socio-technical coordination is achieved in hierarchical product development organizations by aligning the organizational structure with the system architecture. However, within virtual community-based product development such as open source development, the organizational structure is not designed by a central authority. In contrast, the community evolves as a result of participation of individuals and their communication with other individuals working on the project. Hence, understanding and quantifying socio-technical coordination is particularly important in open-source communities.

Existing approaches to measuring socio-technical coordination are based on the congruence between ideal communication and the actual communication structures within communities. The primary limitation of existing approaches is that they only account for explicit communication between individuals. Existing measures do not account for the indirect communication between individuals and the shared knowledge that individuals working on a joint project possess. Due to these limitations, the socio-technical coordination values have been observed to be very low in the existing literature. We propose two alternate approaches to measuring socio-technical coordination based on clustering techniques. We illustrate the approaches using a case study from an open source software development community. The proposed approaches present a broader and more encompassing view of coordination within open source communities.

2012;():287-296. doi:10.1115/DETC2012-70712.

In our earlier work, we proposed a collaboration system for modular product design. One of the main components of the system is a design repository to which suppliers can upload their component descriptions using a machine-readable, interface-based component description language, so that manufacturers can refer to the descriptions during the product design phases. A mathematical formulation for modular product design has been proposed based on the Artificial Intelligence Planning framework. The proposed Binary Integer Programming formulation generates the optimal design of a product. The optimal design consists of multiple components that are compatible with each other in terms of input and output interfaces. However, the mathematical approach faces a scalability issue. The development of a heuristic algorithm that generates a high-quality solution within a reasonable amount of time is the final goal of this research. In this paper, we propose an algorithmic approach based on the branch-and-bound method as an intermediate step toward that goal. This paper describes the details of the proposed branch-and-bound algorithm using a case study, and experimental results are discussed.
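
A generic depth-first branch-and-bound skeleton for interface-compatible component selection is sketched below as orientation; the toy catalogue, cost objective, and compatibility check are hypothetical placeholders rather than the paper's Binary Integer Programming formulation or its algorithm.

```python
import math

# Hypothetical catalogue: for each module slot, candidates as (name, cost, in_iface, out_iface).
CATALOGUE = [
    [("A1", 3, "x", "p"), ("A2", 5, "x", "q")],
    [("B1", 4, "p", "r"), ("B2", 2, "q", "r")],
    [("C1", 6, "r", "y"), ("C2", 7, "r", "y")],
]

def lower_bound(slot):
    """Optimistic completion cost: cheapest candidate of every remaining slot."""
    return sum(min(c[1] for c in CATALOGUE[s]) for s in range(slot, len(CATALOGUE)))

def solve(slot=0, prev_out=None, cost=0, chosen=(), best=(math.inf, ())):
    if slot == len(CATALOGUE):
        return (cost, chosen) if cost < best[0] else best
    if cost + lower_bound(slot) >= best[0]:
        return best                       # prune: this branch cannot beat the incumbent
    for name, c, in_if, out_if in CATALOGUE[slot]:
        if prev_out is not None and in_if != prev_out:
            continue                      # interface mismatch: infeasible branch
        best = solve(slot + 1, out_if, cost + c, chosen + (name,), best)
    return best

best_cost, best_design = solve()
print(best_cost, best_design)             # minimum-cost compatible selection for the toy data
```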

2012;():297-306. doi:10.1115/DETC2012-70760.

Current design feature recognition mainly depends on the connective attributes of edges or faces in CAD models, such as convexity, concavity, and tangency. However, it is difficult to uniquely define the mixed connective attributes of generic features in some cases. A novel generic design feature recognition approach based on detecting hints of topology variation is presented in this study. The core ideas are: 1) the resulting CAD model of a complex part is regarded as being formed from an initial basic shape, such as a roughcast, that has been operated on by introducing generic design features, which subsequently may cause topology variations; 2) such topology variations, e.g. vertex elimination, edge partition and face alteration, are utilized to obtain generalized properties of the generic design features, dispensing with the connective attributes.

Finally, 1) we demonstrate in the experiments that the approach successfully recognizes the main types of generic design features, both isolated and hybrid features, and 2) we exhibit the application of the approach in some engineering examples.

Topics: Design, Topology
2012;():307-313. doi:10.1115/DETC2012-70778.

Automated packing algorithms for luggage compartments in automobiles are of great interest. The difficulty of automatically computing the volume of a mesh representation of a boot according to the ISO 3832 standard restricts the design of vehicles required to meet minimal trunk volume specifications and also increases the cost of physical and virtual verification of the original design specifications. In this paper we present a new heuristic combinatorial packing algorithm for the ISO luggage packing standard. The algorithm has numerous advantages over previous algorithms in terms of its simplicity and speed, as well as producing a high density of packed objects. The algorithm also removes the need for a fixed grid structure to position discrete objects in the boot and can be used as an additional optimization on top of existing algorithms. In addition, we provide the first comparison of state-of-the-art packing algorithms on a simplified trunk geometry and propose a standard trunk geometry to enable future researchers to compare their results with other packing algorithms.

2012;():315-328. doi:10.1115/DETC2012-70780.

The rise of cloud computing is radically changing the way enterprises manage their information technology (IT) assets. Considering the benefits of cloud computing to the information technology sector, we present a review of current research initiatives and applications of the cloud computing paradigm related to product design and manufacturing. In particular, we focus on exploring the potential of utilizing cloud computing for selected aspects of collaborative design, distributed manufacturing, collective innovation, data mining, semantic web technology, and virtualization. In addition, we propose to expand the paradigm of cloud computing to the field of computer-aided design and manufacturing and propose a new concept of cloud-based design and manufacturing (CBDM). Specifically, we (1) propose a comprehensive definition of CBDM; (2) discuss its key characteristics; (3) relate current research in design and manufacture to CBDM; and (4) identify key research issues and future trends.

Topics: Manufacturing, Design
2012;():329-338. doi:10.1115/DETC2012-70814.

The present work sets the basis for the development of a systematic eco-sustainable computer-aided design procedure. Over the years, modules for the specific design of all phases of a product life-cycle have been developed: manufacturing, assembly, reliability, maintainability, resilience, disassembly, end-of-use, etc.

This work instead places itself in the broader context of "Design for Ecology". It seeks to overcome the limits of a classic design process such as "Design for X" through an approach that makes the designer aware of the consequences that each design modification (to the geometry, the material or the manufacturing process) has on the environmental impact of the product's entire life-cycle.

This work proposes a method of integrated management of (1) virtual prototyping software (such as structural optimization, FEM and CAD), (2) function modelling methodology and (3) LCA tools. It is mainly based on the configuration of structural optimization strategies especially conceived to obtain lighter and more compact, and therefore more eco-sustainable, products. The method, due to the nature of the instruments it employs, can be applied only to products that can be modelled in a CAD environment. The article, in particular, shows how to articulate the workflow between virtual prototyping and LCA tools.

A case study of a moped rim is used to explain the procedure, while the software implementation is still underway.

2012;():339-347. doi:10.1115/DETC2012-70821.

Advancements in the simulation of electrostatic spray painting make it possible to evaluate the quality and efficiency of programs for industrial paint robots during Off-Line Programming (OLP). Simulation of the spray paint deposition process is very complex and requires physical simulation of the airflow, electric fields, breakup of paint into droplets, and tracking of these droplets until they evaporate or impact a surface. The information from the simulated droplet impacts is then used to estimate the paint film thickness. The currently common way of measuring paint thickness on complex geometrical shapes is to use histogram-based methods. These methods are easy to implement but depend on good-quality meshes. In this paper, we show that using kernel density estimation not only gives better estimates but is also not dependent on mesh quality. We also extend the method using a multivariate bandwidth adapted to estimated gradients of the thickness. To show the advantages of the proposed method, all three methods are compared on a test case and against real thickness measurements from an industrial case study using a complex automotive part.
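
The kernel density idea can be sketched as follows: each simulated droplet impact spreads its deposited volume over nearby surface points through a smooth kernel. The sketch uses a fixed isotropic Gaussian kernel and made-up sample data, not the gradient-adapted multivariate bandwidth proposed in the paper.

```python
import numpy as np

def thickness_kde(eval_points, impact_points, droplet_volumes, bandwidth):
    """Estimate film thickness at eval_points (M, 3) from droplet impacts (N, 3).
    Each impact deposits its volume, spread by an isotropic Gaussian surface kernel."""
    eval_points = np.asarray(eval_points, float)
    impact_points = np.asarray(impact_points, float)
    norm = 2.0 * np.pi * bandwidth**2                       # 2D Gaussian normalisation
    d2 = ((eval_points[:, None, :] - impact_points[None, :, :]) ** 2).sum(axis=2)
    weights = np.exp(-0.5 * d2 / bandwidth**2) / norm
    return weights @ np.asarray(droplet_volumes, float)     # thickness ~ volume per unit area

# Toy data: 1000 impacts scattered on a flat patch, evaluated at a row of mesh vertices.
rng = np.random.default_rng(0)
impacts = np.column_stack((rng.normal(0, 0.05, 1000), rng.normal(0, 0.05, 1000), np.zeros(1000)))
vertices = np.column_stack((np.linspace(-0.1, 0.1, 50), np.zeros(50), np.zeros(50)))
film = thickness_kde(vertices, impacts, np.full(1000, 1e-12), bandwidth=0.01)
```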

2012;():349-356. doi:10.1115/DETC2012-70872.

This paper analyzes the impact of 3D model annotations on CAD user productivity in the context of the new product development process. These annotations can provide valuable information to support improved design intent communication. Comparably, they can play the same role that source code comments play in supporting code maintainability in software engineering. A 3D CAD model is a geometry representation that also stores the modeling strategy used to build it. Altering a complex CAD model is usually a time-consuming task due to the lack of an explicit explanation of the design rationale followed to build the 3D model. An experimental study conducted with Spanish and Mexican CAD students indicates that using annotations can reduce the time needed to perform engineering changes in existing models by 13–26%. Some factors that affect the impact of annotations on the engineering change process, such as part and alteration complexity, were also identified.

2012;():357-366. doi:10.1115/DETC2012-70890.

A novel 3D dental identification framework is presented. The objective is to develop a methodology that enables computer-automated matching of complex dental surfaces with possibly missing regions for human identification. Thus far, there is no reported attempt at 3D dental identification given partially available dental casts or impressions. This approach overcomes a number of key hurdles in traditional 2D methods. Given the 3D digital form of a dental cast surface, the developed method facilitates the search for the closest match in a database of digitized dental casts. A salient curvature matching (SCM) algorithm is proposed for pose estimation, which includes algorithms for feature extraction, feature description and correspondence. The feature point extraction algorithm extracts more salient features, and the correspondence algorithm is more robust for pose estimation, compared to known works. Experimental results show an 85.7% hit rate at rank-1 accuracy based on matching 7 partial sets to a database of 100 full sets in significantly reduced retrieval time. The hit rate increases to 100% with parameter adjustment. This work aims to enable computer-aided 3D dental identification, and the proposed method could be used adjunctively with the traditional 2D dental identification method, as the available dental sources for identification are still primarily 2D radiographs. Limitations of the methodology and future directions in matching highly fragmented and partial dental surfaces are discussed.

2012;():367-374. doi:10.1115/DETC2012-70923.

A new springback compensation system based on springback mechanics is developed. The cause of springback is the bending moment present before springback. To compensate the tool shape, it is necessary to satisfy the springback mechanical equation, which is expressed in terms of curvature and bending moment. In this system, the bending moment before springback is designed with the springback mechanical equation taken into account. The optimum bending moment distribution is obtained by modifying design parameters such as curvature and punch load so that the metal sheet is deformed into the desired shape. In this modification process, the sequential response surface method is used to optimize the design parameters. When the proposed method was applied to V-bending, it was found to be effective: the convergence rate is high, and not only the curvature but also the punch load can be obtained.

Topics: Design, Optimization
2012;():375-382. doi:10.1115/DETC2012-70940.

Due to the variety of available Additive Manufacturing (AM) technologies, it is challenging to select appropriate processes to cost-effectively build a part, especially when a user does not have in-depth knowledge of AM. In this paper, we introduce approximate models of build time and cost for AM processes that can be used for early-stage process selection. A user can thus identify and compare candidate manufacturing processes based on build time and cost estimates that are computed from approximate geometric information about the part, specifically the part's bounding box and its estimated volume. The build time model is based on a generalized parameterization of AM processes that applies to laser-based scanning (Stereolithography, powder bed fusion), filament extrusion (fused deposition modeling), ink-jet printing, and mask-projection processes. Build time estimates were tested by comparing them to the measured build times of parts fabricated using Stereolithography, ink-jet printing, and fused deposition modeling processes.
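
A rough, hypothetical instance of such an approximate build-time model is shown below; the process parameters (layer thickness, deposition rate, per-layer overhead) are placeholder values, not the calibrated parameterization reported in the paper.

```python
def estimate_build_time(bbox_height_mm, part_volume_mm3,
                        layer_thickness_mm=0.1,
                        deposition_rate_mm3_per_s=15.0,
                        per_layer_overhead_s=8.0):
    """Approximate build time from bounding-box height and estimated part volume:
    time = per-layer overhead * number of layers + material-processing time."""
    n_layers = max(1, round(bbox_height_mm / layer_thickness_mm))
    processing_time_s = part_volume_mm3 / deposition_rate_mm3_per_s
    return n_layers * per_layer_overhead_s + processing_time_s

# Example: a part 60 mm tall with an estimated volume of 40,000 mm^3.
hours = estimate_build_time(60.0, 40_000.0) / 3600.0
print(f"estimated build time: {hours:.2f} h")
```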

2012;():383-392. doi:10.1115/DETC2012-70995.

The work in this paper uses neural networks to develop a relationship model between assembly times and complexity metrics applied to defined mate connections within SolidWorks assembly models. This model is then used to develop a Design for Assembly (DFA) automation tool that can predict a product's assembly time from the defined mate connections in its SolidWorks assembly model. The development of this new method consists of: creating a SolidWorks (SW) add-in to automatically extract the mate connections from SW assembly models, parsing the mate connections into graphs, implementing a new complexity training algorithm to predict assembly times based on the mate graphs, and evaluating the effectiveness of the new method. The motivation, development, and evaluation of the new automated DFA method are presented in this paper. Ultimately, the method trained on both fully defined and partially defined assembly models is shown to provide assembly time predictions that are typically within 25% of the target time, but with one outlier at 95% error, suggesting that a more robust training set is needed.

Topics: Manufacturing
2012;():393-402. doi:10.1115/DETC2012-71118.

This paper describes the design, implementation, and usage of a tablet-based application as a mechanism for concept exploration and function realization in early-stage design. Specifically, this work reports on the transformation of a methodology known as Form Follows Form into an interactive, multi-touch iPad application that can be used to explore concept alternatives and as a tool to increase student awareness and recognition of functionality. Users are able to specify an initial concept for an engineering problem by dragging, dropping, and connecting basic components that they envision as a potential solution. The application then leverages a database of reverse-engineered products and algorithms to abstract the underlying functionality of the user-specified solution. Students can use the abstracted functionality as a baseline for generating concept alternatives on their own and can also explore concept and component alternatives suggested by the application. The goal of the application is to reduce design fixation and to provide multiple approaches for concept exploration activities.

Topics: Design, Teaching
Commentary by Dr. Valentin Fuster
2012;():403-409. doi:10.1115/DETC2012-71133.

Quality is a key element of success for any manufacturer, and the fundamental prerequisite for quality is measurement. In the discrete parts industry, quality is attained through inspection of parts, but typically there is a long latency between machining, quality measurement and part/process assessment. Since manufacturing systems are by their nature imperfect, it is imperative to identify and rectify out-of-tolerance processes as soon as possible. Rapid quality feedback into the factory operation is not a complex concept; however, collecting and disseminating the necessary measurement data in a timely and tightly integrated manner is challenging. This paper discusses Web-enabled, real-time quality data based on the integration of MTConnect and quality measurement reporting data. MTConnect is an open factory communication standard that leverages the Internet and uses XML for data representation. The quality data is encoded in MTConnect XML to capture Geometric Dimensioning and Tolerancing (GD&T) output results. A pilot implementation producing Web-enabled, real-time quality results in a standard MTConnect XML representation from Coordinate Measuring Machine (CMM) inspections is discussed.

Topics: Feedback
Commentary by Dr. Valentin Fuster
2012;():411-424. doi:10.1115/DETC2012-71483.

This paper presents a new approach for tele-fabrication where a physical object is scanned in one location and fabricated in another location. This approach integrates three-dimensional (3D) scanning, geometric processing of scanned data, and additive manufacturing technologies. In this paper, we focus on a set of direct geometric processing techniques that enable the tele-fabrication. In this approach, 3D scan data is directly sliced into layer-wise contours. Sacrificial supports are generated directly from the contours, and digital mask images of the objects and the supports for Stereolithography Apparatus (SLA) processes are then automatically generated. The salient feature of this approach is that it does not involve any intermediate geometric models such as STL, polygons or non-uniform rational B-splines that are otherwise commonly used in prevalent approaches. The experimental results on a set of objects fabricated on several SLA machines confirm the effectiveness of the approach in faithfully tele-fabricating physical objects.
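
A highly simplified illustration of the first step, binning scan points into layers by height and ordering each layer's points into a crude closed contour, is sketched below; the paper's direct slicing and support generation are far more sophisticated, and the synthetic cylinder scan used here is an assumption for demonstration only.

    import numpy as np

    # Simplified illustration of direct layer-wise slicing of scan data:
    # bin points by height, then order each layer's points by polar angle about
    # the layer centroid to form a crude closed contour.
    def slice_point_cloud(points, layer_thickness):
        z = points[:, 2]
        layer_ids = np.floor((z - z.min()) / layer_thickness).astype(int)
        contours = {}
        for k in np.unique(layer_ids):
            layer_pts = points[layer_ids == k, :2]
            if len(layer_pts) < 3:
                continue
            centroid = layer_pts.mean(axis=0)
            angles = np.arctan2(layer_pts[:, 1] - centroid[1],
                                layer_pts[:, 0] - centroid[0])
            contours[k] = layer_pts[np.argsort(angles)]   # points ordered around the centroid
        return contours

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # synthetic scan of a cylinder of radius 10 mm and height 5 mm
        theta = rng.uniform(0, 2 * np.pi, 5000)
        pts = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta),
                               rng.uniform(0, 5, 5000)])
        layers = slice_point_cloud(pts, layer_thickness=0.5)
        print(f"{len(layers)} layers, first contour has {len(layers[0])} points")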

Commentary by Dr. Valentin Fuster
2012;():425-437. doi:10.1115/DETC2012-71523.

In this research, we investigate shrinkage-related deformation control for a mask-image-projection-based Stereolithography (MIP-SL) process. Based on a Digital Micromirror Device (DMD), MIP-SL uses an area-processing approach that dynamically projects mask images onto a resin surface to selectively cure liquid resin into layers of an object. Consequently, the related additive manufacturing process can be much faster and lower cost than the laser-based Stereolithography Apparatus (SLA) process. However, current commercially available MIP-SL systems are based on acrylate resins, which exhibit greater shrinkage than the epoxy resins widely used in the SLA process. Controlling dimensional accuracy and shape deformation in the MIP-SL process is therefore challenging. To address the problem, we evaluate different exposure strategies for projection mask images. A mask image planning method and related algorithms have been developed for the MIP-SL process. The planned mask images have been tested on a commercial MIP-SL machine. The experimental results illustrate that our method can effectively reduce the deformation by as much as 32%. A discussion of the test results and future research directions is also presented.

Commentary by Dr. Valentin Fuster
2012;():439-447. doi:10.1115/DETC2012-71548.

We present an approach for producing complex nanoscale patterns by integrating computer-aided design (CAD) geometry processing with an atomic force microscope (AFM) based nanoindentation process. Surface modification is achieved by successive nanoindentation using a vibrating tip. By incorporating CAD geometry, this approach provides enhanced design and patterning capability for producing geometric features of both straight lines and freeform B-splines. The method automatically converts a pattern created in CAD software into a lithography plan for successive nanoindentation. To ensure reliable lithography, key machining parameters, including the interval of nanoindentation and the depth of the nanogrooves, have been investigated, and a procedure for determining the parameters is provided. Finally, the automated nanolithography has been demonstrated on poly(methyl methacrylate) samples. The results show the robustness of the CAD-integrated, AFM-based nanoindentation approach in fabricating complex patterns.

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Computer-Aided Tolerance Analysis

2012;():449-459. doi:10.1115/DETC2012-70398.

We present elegant algorithms for fitting a plane, two parallel planes (corresponding to a slot or a slab) or many parallel planes in a total (orthogonal) least-squares sense to coordinate data that is weighted. Each of these problems is reduced to a simple 3×3 matrix eigenvalue/eigenvector problem or an equivalent singular value decomposition problem, which can be solved using reliable and readily available commercial software. These methods were numerically verified by comparing them with brute-force minimization searches. We demonstrate the need for such weighted total least-squares fitting in coordinate metrology to support new and emerging tolerancing standards, for instance, ISO 14405-1:2010. The widespread practice of unweighted fitting works well enough when point sampling is controlled and can be made uniform (e.g., using a discrete point contact Coordinate Measuring Machine). However, we demonstrate that nonuniformly sampled points (arising from many new measurement technologies) coupled with unweighted least-squares fitting can lead to erroneous results. When needed, the algorithms presented also solve the unweighted cases simply by assigning the value one to each weight. We additionally prove convergence from the discrete to continuous cases of least-squares fitting as the point sampling becomes dense.
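
The reduction to a small symmetric eigenproblem can be sketched compactly. The minimal numpy illustration below (not the authors' verified implementation) fits a single weighted orthogonal plane by placing it through the weighted centroid and taking the eigenvector of the 3x3 weighted scatter matrix with the smallest eigenvalue; setting every weight to one recovers the unweighted fit, and the test plane and weights are synthetic.

    import numpy as np

    # Weighted total (orthogonal) least-squares plane fit: the plane passes through
    # the weighted centroid; its normal is the smallest-eigenvalue eigenvector of
    # the 3x3 weighted scatter matrix.
    def fit_plane_weighted_tls(points, weights):
        points = np.asarray(points, float)            # (N, 3) coordinate data
        w = np.asarray(weights, float)                # (N,) positive weights
        centroid = (w[:, None] * points).sum(0) / w.sum()
        d = points - centroid
        M = d.T @ (w[:, None] * d)                    # 3x3 weighted scatter matrix
        eigvals, eigvecs = np.linalg.eigh(M)          # symmetric eigen-decomposition
        normal = eigvecs[:, 0]                        # eigenvector for the smallest eigenvalue
        return centroid, normal

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # noisy, nonuniformly weighted samples of the plane z = 0.1 x - 0.2 y + 3
        xy = rng.uniform(-10, 10, (500, 2))
        z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.05, 500)
        pts = np.column_stack([xy, z])
        c, n = fit_plane_weighted_tls(pts, weights=rng.uniform(0.5, 2.0, 500))
        print("centroid:", c.round(3), "normal:", (n / np.linalg.norm(n)).round(3))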

Topics: Fittings
Commentary by Dr. Valentin Fuster
2012;():461-468. doi:10.1115/DETC2012-71369.

Although allocation of design tolerances for parts and components is typically based on the prediction of geometric and dimensional deviations resulting from the inherent errors of production, this process cannot be conducted unconstrained. Concurrently with studying the manufacturing and assembly uncertainties in tolerance allocation, it is highly important to evaluate the total combination of the allocated tolerances and the deformations due to various loads on the final product. This ensures that parts and components in their working condition meet their essential requirements for functionality, form and fit. This process is optimized only if the minimum geometric zone that covers the evaluated deformations is studied properly. In addition, the minimum deformation zones for various types of loading in an assembly of parts and components need to be studied, and the tolerances should be selected after considering the requirements for all possible events. Using this concept, a unified methodology is developed to find the optimum tolerances for the geometric parameters of mechanical structures under various loading conditions. The validity of the developed procedure is studied through case studies and a variety of experiments. The developed methodology can be employed efficiently during the detailed design of mechanical parts and assemblies.

Commentary by Dr. Valentin Fuster
2012;():469-474. doi:10.1115/DETC2012-71418.

Layer-based manufactured parts and surfaces are inherently subject to the staircase effect, which can be quantified by cusp height. The cusp height of a layer is the maximum distance, measured along a surface normal, between the ideal surface and the produced layer. Although calculating the local cusp height is a simple task, estimating the overall deviation zone of the produced surface is a highly nonlinear and complicated problem. This paper presents a practical approach to predict the actual profile tolerances of the produced surfaces. This prediction is used to allocate profile tolerances for the rapid prototyping process. The methodology can also be used to select the optimum uniform layer thickness that compromises between the number of layers and the desired accuracy of the final surfaces. The developed methodologies are capable of analyzing complex surfaces and geometries. A variety of experiments is carried out to study the effectiveness and practicality of the presented methodology, which can be employed efficiently during the design of rapid prototyped parts.
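
For a single planar facet the local cusp height follows from simple geometry, roughly the layer thickness scaled by the cosine of the angle between the facet normal and the build direction. The small sketch below computes only this local value; the paper's contribution is the much harder overall deviation-zone prediction, which this does not attempt.

    import numpy as np

    # Local staircase (cusp height) estimate for a planar facet:
    # cusp ~= layer_thickness * |cos(theta)|, where theta is the angle between
    # the facet normal and the build (z) direction.
    def local_cusp_height(layer_thickness, facet_normal):
        n = np.asarray(facet_normal, float)
        n = n / np.linalg.norm(n)
        return layer_thickness * abs(n[2])      # |n . z_hat|

    if __name__ == "__main__":
        t = 0.2  # layer thickness in mm
        for normal in [(0, 0, 1), (0, 1, 1), (1, 0, 0)]:   # flat, 45-degree, vertical facets
            print(normal, "->", round(local_cusp_height(t, normal), 4), "mm")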

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Design Informatics

2012;():475-486. doi:10.1115/DETC2012-70050.

Various manifestations of products, prototypes and tools are commonly used in design research to discover and describe novel phenomena, to test specific research theories, or to explore intrinsic data that cannot be accessed and validated otherwise. However, as research means, these physical artifacts are over-detailed and inflexible, in particular when phenomena associated with design creativity and product ideation are investigated. To support design-inclusive research in the context of conceptualization and early testing of complex, knowledge-intensive software tools, the authors propose modular abstract prototyping. The original goal of abstract prototyping was to demonstrate the real-life processes established by new artifact-service combinations, as well as the interactions of humans with them in various application scenarios. A modular abstract prototype relies on a comprehensive information structure. The demonstration contents of the modules are defined by a stakeholder- and purpose-oriented logical dissection of this information structure, and implemented as digitally recorded, multimedia-enabled narrations and enactments. This paper discusses the technical aspects of developing modular abstract prototypes and their use as flexible and evolving research means. A complex application example is presented in which modular abstract prototyping was used in focus group sessions to assess the conceptualization of a trade-off forecasting software tool by various stakeholders. This tool is being developed for forecasting the energy savings and financial benefits that can be achieved by ubiquitous augmentation. The stakeholders formulated a positive opinion about the level of immersion and the articulation of informing that can be achieved by using modular abstract prototypes. Future research focuses on the development of a web-hosted engine for real-time interactive abstract prototyping in participatory research sessions.

Commentary by Dr. Valentin Fuster
2012;():487-496. doi:10.1115/DETC2012-70440.

As the rate of change in both business models and business complexity increases, enterprise architecture can be positioned to supply decision support for executives. The authors propose a dynamic enterprise architecture framework that supports business executive needs for rapid response and contextualized numerical decision support. The classic approaches to business decision making are both oversimplified and insufficient to account for the dynamic complexities of reality. Recent failures of historically sound businesses demonstrate that a more robust mathematical approach is required to establish and maintain the alignment between operational decisions and enterprise objectives. We begin with an enterprise architecture (EA) framework that is robust enough to capture the elements of the business within the structure of a meta-model that describes how the elements will be stored and tested for completeness and coherence. We add to that the analytical tools needed to innovate and improve the business. Finally, dynamic causal and agent layers are added to account for the qualitative and evolutionary elements that are normally missing or oversimplified in most decision systems. This results in a dynamic model of an enterprise that can be simulated and analyzed to answer key business questions and provide decision support. We present a case study and demonstrate how the models are used within the decision framework to support executive decision makers.

Commentary by Dr. Valentin Fuster
2012;():497-507. doi:10.1115/DETC2012-70444.

This paper presents a taxonomy for project-level risk-mitigating actions developed from a large design organization’s risk database. The taxonomy classifies actions according to their purpose and how they are embodied. The taxonomy along with the results of actions recorded in the database can be used to evaluate the effectiveness of different types of risk-mitigating actions. A methodology for refining the taxonomy based on analyzing mismatches between different coders using the taxonomy is also given. Because the taxonomy is based on an existing legacy database, this paper discusses related issues such as missing contextual information.

Developing this taxonomy will lead to further advances in empirically evaluating the usefulness of different risk-mitigating actions. This will allow for better understanding and improved prediction of how different types of risk-mitigating actions affect a project’s eventual outcomes such as cost and schedule, leading to future advances in decision-making approaches of risk-mitigating actions in complex environments.

Topics: Design, Databases, Risk
Commentary by Dr. Valentin Fuster
2012;():509-518. doi:10.1115/DETC2012-70653.

We present a framework for modelling and analysis of real-world business workflows. We present a formalised core subset of the Business Process Modelling and Notation (BPMN) and then proceed to extend this language with probabilistic non-deterministic branching and general-purpose reward annotations. We present an algorithm for the translation of such models into Markov decision processes expressed in the syntax of the PRISM model checker. This enables analysis of business processes for the following properties: transient and steady-state probabilities; the timing, occurrence and ordering of events; reward-based properties; and best- and worst-case scenarios. We develop a simple example of a medical workflow and demonstrate the utility of this analysis in accurate provisioning of drug stocks. Finally, we suggest a path to building upon these techniques to cover the entire BPMN language, allow for more complex annotations and ultimately to automatically synthesise workflows by composing predefined sub-processes, in order to achieve a configuration that is optimal for parameters of interest.

Topics: Workflow
Commentary by Dr. Valentin Fuster
2012;():519-528. doi:10.1115/DETC2012-70756.

Analysis of user preference is among the crucial tasks at the early stages of new product development (NPD). In order to satisfy diversified user preferences in the market, product companies have struggled to design a variety of products to address different customer voices. In this context, product family design (PFD) is a widely adopted strategy to deal with such product realization needs. Besides preference diversity, uncertainty of user preference is another important aspect that can greatly affect product design and offerings, especially when customer preferences are not clear, not fully identified, or have drifted over time. Previously, we studied an ontology-based information representation for PFD, which offers a modeling scheme to assist multi-faceted product variant derivation. In this paper, we explore how the ontology can be further extended to handle user preference uncertainty by using a Bayesian network representation. Customer preference uncertainty is expressed as a probability of preference towards certain product attributes. An approach to construct a Bayesian network that harnesses the existing knowledge modeling from the product family ontology is proposed. Based on such a network representation and preference modeling, we derive several probabilistic measures to assess the propagation and impact of user preference uncertainty on platform preference. A case study of platform analysis using four laptop computer families is reported to illustrate how preference uncertainty can affect the suitability and selection of existing product platforms.
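
The propagation of preference uncertainty can be illustrated with a toy two-node network in which a platform-suitability probability is obtained by marginalizing over the uncertain attribute preference; the attribute values and probabilities below are invented for illustration and are not taken from the paper's ontology-derived network or case study.

    # Toy illustration of propagating user-preference uncertainty through a
    # two-node Bayesian network: P(suitable) = sum over attributes of
    # P(suitable | attribute) * P(attribute). All numbers are placeholders.

    # uncertain preference over a screen-size attribute of a laptop family
    p_attribute = {"13_inch": 0.5, "15_inch": 0.3, "17_inch": 0.2}

    # conditional probability that a shared platform suits each preferred attribute value
    p_suitable_given_attribute = {"13_inch": 0.9, "15_inch": 0.7, "17_inch": 0.2}

    p_suitable = sum(p_attribute[a] * p_suitable_given_attribute[a] for a in p_attribute)
    print(f"P(platform suitable under preference uncertainty) = {p_suitable:.2f}")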

Commentary by Dr. Valentin Fuster
2012;():529-539. doi:10.1115/DETC2012-70833.

Patent literature contains over 70 million patent documents, so the amount of information available to companies, and the opportunity to derive business value and market new products from this collection, is huge. However, effective information extraction is currently a difficult task because patentees typically write using their own lexicon, style and strategy in describing their inventions.

This paper presents a discussion of open problems and a way to overcome them through a new functional search based on a Function-Behaviour-Physical effect-Structure ontology. This ontology is used for technology transfer through patents, with the aim of making users aware of how technologies not yet exploited in their own field have already been patented in other domains to achieve the same desired goal. To reach this objective, a multidisciplinary approach is proposed that combines design ontologies with information retrieval tools. A case study is presented to demonstrate how the conceived framework supports searching for patents and automatically classifying them according to the proposed ontology.

Topics: Patents
Commentary by Dr. Valentin Fuster
2012;():541-549. doi:10.1115/DETC2012-71084.

In the design process, the requirements serve as the benchmark for the entire product. Therefore, the quality of requirement statements is essential to the success of a design. Because of their ergonomic nature, most requirements are written in natural language (NL). However, writing requirements in natural language presents many issues such as ambiguity, specification issues, and incompleteness. Therefore, identifying issues in requirements involves analyzing these NL statements. This paper presents a linguistic approach to requirement analysis, which utilizes grammatical elements of requirement statements to identify requirement statement issues. These issues are organized by the entity they affect: word, sentence, or document. The field of natural language processing (NLP) provides a core set of tools that can aid this linguistic analysis and provide a method to create a requirement analysis support tool. NLP addresses requirements at several processing levels: lexical, syntactic, semantic, and pragmatic. While processing at the lexical and syntactic levels is well defined, mining semantic and pragmatic data is performed using a number of different methods. This paper provides an overview of these current requirement analysis methods in light of the presented linguistic approach. This overview will be used to identify areas for further research and development. Finally, a prototype requirement analysis support tool will be presented. This tool seeks to demonstrate how the semantic processing level can begin to be addressed in requirement analysis. The tool will analyze a sample set of requirements from a family of military tactical vehicles (FMTV) requirements document. It implements NLP tools to semantically compare requirement statements based upon their grammatical subject.
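
As a purely illustrative example of the kind of subject-based grouping described for such a tool, the naive heuristic below takes the tokens before a modal verb as the grammatical subject; real requirement-analysis pipelines use proper syntactic parsing, and the modal list and sample requirements here are assumptions, not the FMTV document or the authors' prototype.

    from collections import defaultdict

    # Naive heuristic: treat the words before "shall"/"must"/"will"/"should" as the
    # grammatical subject and group requirements that share a subject.
    MODALS = {"shall", "must", "will", "should"}

    def subject_of(requirement):
        tokens = requirement.lower().rstrip(".").split()
        for i, tok in enumerate(tokens):
            if tok in MODALS:
                return " ".join(tokens[:i]) or None
        return None

    def group_by_subject(requirements):
        groups = defaultdict(list)
        for r in requirements:
            groups[subject_of(r)].append(r)
        return dict(groups)

    if __name__ == "__main__":
        reqs = [
            "The vehicle shall accommodate a payload of 5000 kg.",
            "The vehicle shall operate at temperatures down to -40 C.",
            "The braking system must stop the vehicle within 40 m from 60 km/h.",
        ]
        for subject, items in group_by_subject(reqs).items():
            print(subject, "->", len(items), "requirement(s)")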

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Emotional Engineering

2012;():551-558. doi:10.1115/DETC2012-70059.

The purpose of this research is to develop a system for evaluating product shape from the viewpoints of aesthetics and packaging. In previous work, we developed a robust design method to generate a product image based on customers' feelings, and we quantitatively evaluated the shape based on the principles of the Taguchi method. The underlying idea is that an attractive product with an aesthetically pleasing shape should also enable efficient packaging. In this paper, we analyze the packaging efficiency of three different cross-section shapes (circle, square and rectangle) of an aesthetic PET bottle. The packaging efficiency is evaluated by comparing the space required to contain the bottles with the space available in the packaging box. The paper uses the assumption that the space required to contain the bottles depends on the space required to contain one bottle, which in turn depends on the largest cross-section of the bottle body. The system can give designers suggestions about the packaging efficiency of the designed model.

Topics: Shapes, Packaging
Commentary by Dr. Valentin Fuster
2012;():559-566. doi:10.1115/DETC2012-70186.

A surface texture is a common design factor that affects a customer's sensory perception of product quality. Customers perceive surface quality using multiple sensory modalities, for example vision and touch, and switch between them through interaction with a product, for example in a transition from vision to touch. Between such sensory modality transitions, human beings often predict subsequent modal perceptions using a prior modality, for example predicting the tactile quality of a product from its appearance before actually touching it. We believe that a disconfirmation between a prediction made using one modality and an experience using another modality affects the perceived quality. In this paper, we propose a method to evaluate the quality of a surface texture with attention to the effects of a disconfirmation between a prior visual prediction and a posterior tactual experience. To identify the textural factors contributing to such an effect, we conducted a sensory evaluation experiment with combinations of visual and tactile texture samples that were synthesized using a half-mirror. We demonstrate the appropriateness of the method with an analysis of the results of an experiment using fourteen plastic samples with different textures that are commonly used in product design.

Commentary by Dr. Valentin Fuster
2012;():567-572. doi:10.1115/DETC2012-70263.

In this study, we focus on the facial expression of a deformed avatar and develop an average facial color image avatar system based on dynamic facial color and expression to facilitate the analysis by synthesis of an affect display. This is achieved by controlling the parameters of a proposed dynamic facial color model and analyzing the extent to which dynamic facial color with emphasized color tones is suitable for the representation of a laugh. The effectiveness of the system is demonstrated by sensory evaluation. The application of this system in emotional interaction could support psychological experiments related to perception and recognition, based on dynamic facial color and expression.

Commentary by Dr. Valentin Fuster
2012;():573-580. doi:10.1115/DETC2012-70296.

Expectation management in product engineering design aims at setting achievable goals for both customers and designers, while leaving room for creativity and passion. This is especially challenging in the global workplace. Using an example of a design project, the Dental Headrest project (DHR), this paper reviews how expectations were managed in a successful, collaborative project between the University of Tokushima (UT) and Massachusetts Institute of Technology (MIT).

The goal of the project was to design an innovative mechanism for positioning a dental chair headrest so as to satisfy both a patient's need for comfort and a clinician's need for flexibility and access. The design team was formed from six students in MIT MechE's Precision Machine Design class, while the challenge was proposed by a UT team of dentists and design engineers.

The team followed a deterministic design procedure including understanding the challenge and reviewing prior art, strategy and concept generation, detailed module design, and fabrication and testing, culminating in presentation and documentation. Throughout, the process was coordinated by online communication and collaborative working spaces, which ensured real-time information transfer between the continents. The project concluded with a face-to-face meeting between the two institutions.

The DHR project resulted in an innovative headrest-adjusting mechanism design that was implemented in a prototype. Moreover, the students, faculty and clinicians benefitted from the experience of innovative design collaboration in a multidisciplinary, global team.

Topics: Design, Collaboration
Commentary by Dr. Valentin Fuster
2012;():581-589. doi:10.1115/DETC2012-70374.

In face-to-face communication, touch can establish intimacy, and therefore the presence of tactile stimulation can enhance interpersonal relationships. While human-human interaction has been shifting from face-to-face physical conversations to electronically mediated forms of communication, current technologies are not able to provide a multimodal sensory experience that supports haptic interaction in addition to visual and auditory channels. Within haptics research, affective haptics explores emotional interaction and perception mediated via touch that is simulated by technology. In addition, wearable technology and tangible interfaces can be employed to bridge the gap between the digital and physical worlds by making the body fully engaged with the interaction. This paper presents findings of a design practice that explores the avenues of affective tactile interaction through wearable technology, which can artificially produce tactile stimulation as a medium for instant communication between two people. The findings are presented in light of the theoretical background, observations and analysis of the design practice.

Topics: Design, Haptics
Commentary by Dr. Valentin Fuster
2012;():591-601. doi:10.1115/DETC2012-70543.

Capturing users' needs is critical in web site design. However, much attention has been paid to enhancing functionality and usability, whereas much less consideration has been given to satisfying the emotional needs of users, which is also important to a successful design. This paper explores a methodology based on Kansei Engineering, which has been used extensively in product and industrial design but has rarely been adopted in the IT field, in order to discover implicit emotional needs of users toward web sites and transform them into design details. Surveys, interviews and statistical methods were employed in this work. A prototype web site was developed based on the Kansei study results integrated with technical expertise and practical considerations. The results showed that the Kansei Engineering methodology played a significant role in web site design in terms of satisfying the emotional needs of users.

Topics: Design
Commentary by Dr. Valentin Fuster
2012;():603-610. doi:10.1115/DETC2012-70595.

According to the AAA Foundation for Traffic Safety, driver inattention is a major contributor to highway accidents. Driver distraction is one form of inattention and a leading factor in most vehicle crashes and near-crashes. Distraction occurs when a driver is delayed in the recognition of information needed to safely accomplish the driving task because some event, activity, object, or person within or outside the vehicle draws the driver's attention away from the driving task.

Although some indexes of driving performance have been used to measure distraction, they capture the results of distraction rather than the distraction itself. We directly and quantitatively measure distraction using biological signals, by identifying useful indexes among various candidate signals. Our experimental results using a driving simulator showed useful indexes derived from EEG and ECG.

Topics: Signals
Commentary by Dr. Valentin Fuster
2012;():611-619. doi:10.1115/DETC2012-70628.

Although workforce productivity is widely used today, production is quickly moving toward product and process development with customers. Creative customers would like to become more and more involved in product development and, furthermore, they would like to derive satisfaction not only from the final product but from the processes as well. So we have to introduce a new measure of productivity, one which focuses more on how much satisfaction a customer obtains from production. The new definition of productivity in this sense will be:

Customer productivity = Amount of satisfaction / Customer’s psychological time and money (physical and virtual involvement in production)

This is different from the current notion of customer satisfaction, which focuses on how much satisfaction a customer will have with the final product; that is a definition from the standpoint of the producer. The new definition is from the standpoint of the customer.

This paper points out that, by introducing the Mahalanobis Taguchi System, such a measure can be established, providing a new metric for measuring customer satisfaction for the new type of prosumer system or co-production.

Commentary by Dr. Valentin Fuster
2012;():621-626. doi:10.1115/DETC2012-70830.

Although there are many approaches to decision making, most of them are tools for a Closed World, in which decisions can be made before sailing out. But the world we are living in now is an Open World, where there is no chart, and we have to make decisions while we are navigating. Economists such as Simon and Keynes say that in a Closed World economic agents make decisions rationally, but in an Open World they rely on emotion.

This paper describes how, by introducing directed graphs and declarative programming techniques, we can simulate emotion-driven decision making. This approach has several advantages. One important advantage is that we can introduce emotion into quality function deployment: quality is not just functional requirements; quality is nothing other than customers' expectations, and emotions are closely related to their expectations and satisfaction. Another advantage is that, as engineering needs more and more diverse pieces of knowledge, it is becoming increasingly difficult to see the whole picture; this approach permits decision making in your own way while still meeting customers' expectations adequately.

Commentary by Dr. Valentin Fuster
2012;():627-635. doi:10.1115/DETC2012-70859.

In this paper, a style design system in which the conditions for Class A Bézier curves are applied is presented to embody the designer's intention with aesthetically high-quality shapes. Here, the term “Class A” means a high-quality shape that has monotone curvature and torsion, and recent industrial design requires not only an aesthetically pleasing appearance but also such high-quality shapes. Conventional design tools such as ordinary Bézier curves can represent any shape in a modeling system; however, such a system only provides a modeling framework and does not necessarily guarantee high-quality shapes. In practice, designers must carry out cumbersome manipulation of many control points during the styling process to represent outline curves and feature curves; this hardship prevents designers from performing efficient and creative styling activities. Therefore, we developed a style design system that supports the designer's task by utilizing the Class A conditions of Bézier curves with monotone curvature and torsion.

Topics: Design, Shapes
Commentary by Dr. Valentin Fuster
2012;():637-644. doi:10.1115/DETC2012-71001.

In recent years, design has become increasingly important due to advances in product function, which make it increasingly difficult to differentiate products on quality. Coupled with growing demands for shorter product development lead times, systems which can create design plans effectively at the initial stage of design are indispensable. The objective of this study is to develop a system which can create designs with a natural impression by quantifying natural phenomena, mainly pattern designs such as polka dots and leaf motifs. The authors constructed the system using a neural network simulating the structure of the human brain and genetic algorithms simulating the heredity of living things. The effectiveness of the system was validated by using it to create design examples.

Topics: Design
Commentary by Dr. Valentin Fuster
2012;():645-652. doi:10.1115/DETC2012-71110.

A human activity interacting with products or with services involves the steps of perception, judgment and action. Affordances provided by products or services induce human activities. A model of interaction with products and services will be presented with examples of affordances and affordance features for products and services.

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Engineering Applications of Brain Science and Human Models

2012;():653-657. doi:10.1115/DETC2012-70171.

A general optimization formulation for walk-to-run transition prediction using a 3D skeletal model is presented. The walk-to-run transition is used to connect fast walking to slow running by using a step-to-step transition formulation. The walk-to-run transition includes four phases: a double support walking phase, a single support swinging phase, a running phase, and finally a single support running phase. The transition task is formulated as an optimization problem in which the dynamic effort is minimized subject to basic physical constraints. The joint torques and ground reaction forces (GRF) are recovered and analyzed from the simulation. The optimal solution of the transition simulation is obtained in a few minutes by using the predictive dynamics method.
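
The general structure, minimizing a dynamic-effort objective subject to task and actuator constraints, can be illustrated with a toy problem; the design variables, the single constraint and the bounds below are placeholders and do not represent the paper's 3D skeletal model or its predictive dynamics implementation.

    import numpy as np
    from scipy.optimize import minimize

    # Toy "minimize dynamic effort subject to physical constraints" formulation.
    # The variables stand in for discretized joint torques over the transition;
    # the equality constraint stands in for task/dynamics constraints.
    n = 20                                   # discretized torque samples
    required_impulse = 50.0                  # hypothetical task requirement

    effort = lambda tau: np.sum(tau**2)      # dynamic effort (objective)
    task_constraint = {"type": "eq", "fun": lambda tau: np.sum(tau) - required_impulse}
    torque_bounds = [(-10.0, 10.0)] * n      # actuator limits

    result = minimize(effort, x0=np.zeros(n), bounds=torque_bounds,
                      constraints=[task_constraint])
    print("optimal torques (equal by symmetry):", result.x.round(3)[:5], "...")
    print("minimum effort:", round(float(result.fun), 3))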

Topics: Simulation
Commentary by Dr. Valentin Fuster
2012;():659-662. doi:10.1115/DETC2012-70172.

Human carrying is simulated in this work by using a skeletal digital human model with 55 degrees of freedom (DOFs). A predictive dynamics approach is used to predict the carrying motion with symmetric and asymmetric loads. In this process, the model predicts joint dynamics using optimization schemes and task-based physical constraints. The results indicate that the model can realistically match human motion and ground reaction force data during symmetric and asymmetric load-carrying tasks. With such predictive capability, the model could be used for biomedical and ergonomic studies.

Topics: Simulation, Stress
Commentary by Dr. Valentin Fuster
2012;():663-671. doi:10.1115/DETC2012-70365.

This paper presents a computer-aided environment to analyze postures and movements in order to ergonomically validate the design of virtually any device a man or woman may have to deal with. The proposed environment integrates virtual prototyping techniques with Digital Human Modeling and Motion Capture techniques to determine fatigue, stress and risk to workers' health. We considered a vertical refrigerated display unit as a case study to analyze the interaction of supermarket staff filling the shelves with goods, with the main goal of determining the suitability of the operators' working conditions and, eventually, providing feedback to the design step.

After a brief description of the state of the art in Motion Capture systems and Digital Human Modeling, the paper presents the architecture of the integrated environment that was developed and its working paradigm. Finally, preliminary results of the experimentation, as well as the benefits and limits of the outcomes achieved so far in automating ergonomics in machine design, are presented.

Topics: Design
Commentary by Dr. Valentin Fuster
2012;():673-680. doi:10.1115/DETC2012-70667.

The stochastic Navier-Stokes equation solves the mysteries underlying the macroscopic morphogenetic processes of human beings, including the brain, legs, arms, and internal organs (Naitoh, 2001, 2008, 2010, 2011). This is possible because the main part of living beings is filled with water flow. The theoretical studies (Naitoh, 2008, 2010, 2011) also explain why inner organs such as the heart and liver are left-right asymmetric at the later stages of the developmental process. Our computational results (Naitoh and Kawanobe, 2011) also reveal the morphogenetic process of the main blood vessels. Here, we first examine the morphogenetic processes of bones and the main nerve systems. Next, further thought experiments based on statistical fluid mechanics (Naitoh, 2008, 2011) and biological data reveal the standard network pattern common to neural networks inside the brain and various bio-molecular networks in the morphogenetic process of the human system. It is stressed that the standard network is basically constructed with only six groups of molecules and neurons. The physical equation describing the dynamics of the standard pattern shows the temporal oscillations appearing in various phenomena such as differentiation and proliferation in cell divisions and memory systems. Third, statistical fluid dynamics based on the quasi-stability principle and the indeterminacy principle (Naitoh, 2001, 2008, 2010, 2011) reveals the inevitability of the supermagic numbers, including the symmetric ratio of 1:1 and asymmetric ratios such as about 1.27:1, 1.35:1, 1.41:1, 1.44:1, 1.78:1, 2.1:1, 2.5:1, 3.6:1, and 4.54:1, in various pairs of flexible particles in natural and living systems. The present report reveals the role of these magic numbers inside the human brain system, which are related to music, fine arts, poetry, and language.

Thus, the fluid dynamics proposed in our previous reports will bring new insight into the spatiotemporal structure of ontogeny and also a new technology of cyberneology, which will result in artificial brain systems living for over a millennium in supercomputers.

Commentary by Dr. Valentin Fuster
2012;():681-687. doi:10.1115/DETC2012-70816.

With the increasing information-intensiveness of products, users are challenged with expanding options and possible ways to interact. Rapidly escalating numbers of possible user-operation sequences hinder designers in anticipating all possible (unacceptable) outcomes. Interactively simulating product models with human subjects to explore all options is not practicable. Virtual simulation with computer models of users can open the way towards faster-than-real-time performance and investigation of massive numbers of interaction sequences. This paper reports on opportunities to improve the realism of virtual-use simulations by incorporating knowledge about the workings of the human brain. We elaborate how, in particular, cognitive-architecture simulations developed by cognitive scientists and error phenotypes identified in human reliability analysis (HRA) can extend a virtual-use simulation approach that we have proposed in foregoing work, by offering the prospect of generating interaction sequences with erroneous user actions unforeseen by the designer. We outline how such an integrated system can be implemented and also discuss validation issues.

Commentary by Dr. Valentin Fuster
2012;():689-694. doi:10.1115/DETC2012-70868.

Recent brain studies have revealed that brain and body cannot be separated, and further that blood and muscles play an important role in our information processing. Bike riding is known as a typical example of tacit knowledge. Although there have been efforts to convert such tacit or somatic/embodied knowledge into explicit knowledge, we have not been very successful. From our past two series of experiments, on detecting emotion from faces and on calligraphy, we learned that acceleration plays a crucial role. This paper attempts to represent somatic/embodied knowledge as patterns of position and acceleration. This is still a preliminary study, but it may lead to another way of representing tacit knowledge, and thus to another way of transferring tacit knowledge such as skills (bike riding, etc.) in the form of patterns of position and acceleration. Mechanical engineering is a tangible engineering discipline; therefore, the author would like to emphasize the importance of exploring how we can represent our somatic/embodied knowledge. This is a very preliminary step toward that goal.

Commentary by Dr. Valentin Fuster
2012;():695-701. doi:10.1115/DETC2012-71068.

Research in brain-computer interfaces has focused primarily on motor imagery tasks such as those involving movement of a cursor or other objects on a computer screen. In such applications, it is important to detect when the user is interested in moving an object and when the user is not active in this task. This paper evaluates the steady state visual evoked potential (SSVEP) as a feedback mechanism to confirm the mental state of the user during motor imagery. These potentials are evoked when a subject looks at a flashing object of interest. Four different experiments are conducted in this paper. Subjects are asked to imagine the movement of a flashing object in a given direction. If the subject is involved in this task, the SSVEP signal will be detectable in the visual cortex and therefore the motor imagery task is confirmed. During the experiment, the EEG signal is recorded at 4 locations near the visual cortex. Using a weighting scheme, the best combination of the recorded signals is selected to evaluate the presence of the flashing frequency. The experimental results show that the SSVEP can be detected even in complex motor imagery of flickering objects. A detection rate of 85% is achieved when the refresh time for SSVEP feedback is set to 0.5 seconds.
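
A compact sketch of the general detection idea, estimating spectral power at the flicker frequency on each of four channels, combining the channels with a weighting scheme, and thresholding against an off-frequency baseline, is given below; the synthetic signal, the SNR-proxy weights and the threshold are illustrative assumptions, not the authors' scheme or data.

    import numpy as np

    # Illustrative SSVEP detection: FFT power at the flicker frequency on 4 channels,
    # weighted combination, comparison against an off-frequency baseline.
    fs = 256.0                 # sampling rate (Hz)
    window_s = 0.5             # refresh time of the SSVEP feedback
    flicker_hz = 12.0
    t = np.arange(0, window_s, 1 / fs)

    rng = np.random.default_rng(0)
    amps = np.array([1.0, 0.8, 0.3, 0.1])          # per-channel SSVEP amplitude (synthetic)
    eeg = amps[:, None] * np.sin(2 * np.pi * flicker_hz * t) + rng.normal(0, 1.0, (4, t.size))

    def band_power(x, freq, fs):
        spectrum = np.abs(np.fft.rfft(x))**2
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        return spectrum[np.argmin(np.abs(freqs - freq))]

    powers = np.array([band_power(ch, flicker_hz, fs) for ch in eeg])
    weights = powers / powers.sum()                 # weight channels by their own SNR proxy
    score = weights @ powers
    baseline = np.mean([band_power(ch, flicker_hz + 3.0, fs) for ch in eeg])  # off-frequency reference

    print("weighted SSVEP score:", round(float(score), 1), "baseline:", round(float(baseline), 1))
    print("SSVEP detected" if score > 3 * baseline else "no SSVEP")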

Topics: Machinery, Brain, Feedback
Commentary by Dr. Valentin Fuster
2012;():703-708. doi:10.1115/DETC2012-71273.

During neural activity in the brain, humans transmit and process information and decide upon actions or responses. When neural activity occurs, blood flow and blood quantity increase in the tissue near the active neurons, and the ratio of oxygenated to deoxygenated hemoglobin in the blood changes. In this paper, we used near-infrared spectroscopy (NIRS) to determine the state of hemoglobin oxygenation at the cerebral surface and, on that basis, performed real-time color mapping of brain activity (the brain activation response) in the target regions. We describe measurements of brain activation using NIRS so as to clarify any differences between conscious and unconscious movement. Bio-locomotion is divided into voluntary movements, which are made voluntarily and consciously, and passive movements, which are made passively and unconsciously. Accordingly, we investigate the brain activation associated with these two types of movements. The subject successively moves his/her lower legs through knee bends, and we measure brain activity while the subject, who is sitting on a chair, moves back and forth. In addition, we carry out an experiment on the effect of the presence or absence of vibration-induced movement on brain activity.

Commentary by Dr. Valentin Fuster
2012;():709-714. doi:10.1115/DETC2012-71291.

The goal of this paper is to reconstruct three primitive shapes (rectangular cube, cone and cylinder) by analyzing electrical signals emitted by the brain. Three participants are asked to visualize these shapes. During visualization, a 14-channel neuroheadset is used to record electroencephalogram (EEG) signals along the scalp. The EEG recordings are then averaged to increase the signal-to-noise ratio, yielding what is referred to as an event-related potential (ERP). Every possible subsequence of each ERP signal is analyzed in an attempt to determine a time series which is maximally representative of a particular class. These time series are referred to as shapelets and form the basis of our classification scheme. After implementing a voting technique for classification, an average classification accuracy of 60% is achieved. Compared to the naive classification rate of 33%, this indicates that the shapelets are in fact capturing features that are unique to the ERP representation of each class.
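
A minimal sketch of shapelet-style classification, in which a shapelet's distance to a signal is the minimum Euclidean distance over all equal-length subsequences and each matched shapelet votes for its class, is given below; the motifs, threshold and labels are synthetic placeholders rather than shapelets learned from ERP data.

    import numpy as np

    # Shapelet distance: minimum Euclidean distance between the shapelet and any
    # equal-length subsequence of the signal. Each matched shapelet votes for its class.
    def shapelet_distance(signal, shapelet):
        m = len(shapelet)
        return min(np.linalg.norm(signal[i:i + m] - shapelet)
                   for i in range(len(signal) - m + 1))

    def classify(signal, shapelets_by_class, threshold=1.0):   # hypothetical match threshold
        votes = {label: sum(1 for s in shapelets
                            if shapelet_distance(signal, s) < threshold)
                 for label, shapelets in shapelets_by_class.items()}
        return max(votes, key=votes.get)

    if __name__ == "__main__":
        t = np.linspace(0, 1, 64)
        shapelets = {"cube": [np.sign(np.sin(8 * np.pi * t[:16]))],   # blocky motif (synthetic)
                     "cone": [np.sin(8 * np.pi * t[:16])]}            # smooth motif (synthetic)
        test = np.sin(8 * np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=64)
        print("predicted class:", classify(test, shapelets))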

Commentary by Dr. Valentin Fuster
2012;():715-725. doi:10.1115/DETC2012-71481.

Requirements are an essential element to engineering design as they are used to focus idea generation during conceptual design, provide criteria for decision making during concept selection, and verify the chosen concept fulfills product needs. Because they are essential to the entire design process, emphasis must be placed on ensuring that they are correct. This research focuses on a value-based methodology useful for challenging and validating established requirements. A case study was conducted on an industry-sponsored project to use this value-based process on the requirements that constrain the design of an automotive seat. A human anthropomorphic model, comfort value model, occupant safety model, and a model of an automotive seat are integrated to establish an H-point travel window to maximize the safety and comfort of an automotive seating structure. This case study shows that this approach provides evidence to establish requirements based on value to the human rather than legacy seating requirements.

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Geometric Techniques in Modeling and Simulating Machining Processes

2012;():727-735. doi:10.1115/DETC2012-70278.

A new method of decomposing 3D solid models for use in automated manufacturing process planning applications is presented in this paper. The resulting algorithm, which is implemented and interacts with a larger suite of planning tools, is based on dividing complex volumes into convex sub-volumes. In order to ensure that the resulting convex sub-volumes are compact and feasible for machining operations, a set of heuristics is developed to categorize, order, and determine splitting directions of the concave edges. Each of the resulting convex sub-volumes represents material that is to be removed from the bounding box to create the intended work-piece. The set of sub-volumes represents a compound solid (i.e. a decomposed solid) that is converted into a graph of boundary representation (B-rep) primitives and is reasoned about in a subsequent search process. The algorithm has been implemented and tested on a variety of solid models resembling real automotive parts, and the results show the effectiveness and efficiency of the algorithm.

Commentary by Dr. Valentin Fuster
2012;():737-750. doi:10.1115/DETC2012-70549.

Advances in computer technology have made possible the integration of complex geometric and process modeling capabilities for use in engineering design and process planning. This is evident in the area of machining where it is now possible to integrate the physics of the machining process with changes that are taking place to the geometry of a work piece during the execution of complex operations. This capability is referred to as Virtual Machining (VM). Geometric modeling capabilities include the ability to generate complex swept volumes created during execution of tool path moves, to subtract these from a dynamically changing in-process work piece model, and to extract the instantaneous cutter/workpiece engagement as the tool moves in the feed direction. Process modeling includes the use of this engagement geometry to calculate cutting forces, deflection of structures, vibrations and to use these in process optimization.

This paper reviews advances in the first part of this tandem, the geometric modeling methods and techniques that make Virtual Machining possible. It further highlights important directions that can be taken to further advances in this field.

Commentary by Dr. Valentin Fuster
2012;():751-760. doi:10.1115/DETC2012-70550.

Capturing the in-process workpiece geometry generated during machining is an important part of tool path verification and, increasingly, of the physics-based simulation of cutting forces used in Virtual Machining. Swept volume generation is a key supporting methodology that is necessary for generating these in-process states. Hole milling is representative of one class of milling operation where the swept volume is continuously self-intersecting. Because of this, it is impossible to decompose the tool path into non-intersecting regions, which is the approach typically used in solid model based swept volume generation. In this paper an approach to generating NURBS-based solid models for self-intersecting swept volumes generated during hole milling is presented. NURBS surfaces are generated that compactly represent the surfaces of the swept volume. This utilizes the geometry of the helical curve as opposed to the linearly interpolated tool path used in more generic approaches to generating swept volumes. Examples applying the approach to various types of cutter geometries used in milling are presented.

Commentary by Dr. Valentin Fuster
2012;():761-770. doi:10.1115/DETC2012-70603.

A compact representation for the quantitative description of foot shape is important not only for foot measurement and anthropometry, but also for the ergonomic design of footwear. Based on scanned foot data, a novel point-structured geometric modeling approach for reducing the 3D point cloud while preserving shape information is proposed. The semantic descriptions of foot features are interpreted into logical definitions, and a total of fifteen feature points are thus defined. In the end, only 2,093 data points are needed in such a point-structured representation. From this representation it is easy to obtain not only 1D and 2D measurements, but also the 3D feature curves of the foot shape. It provides a compact 3D geometric model that can serve as a significant database for individuals and thereby becomes a useful tool in investigating the foot and manufacturing foot-related apparel and devices.

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: High Performance Computing

2012;():771-778. doi:10.1115/DETC2012-70083.

The Air Force Research Laboratory Information Directorate Advanced Computing Division (AFRL/RIT) High Performance Computing Affiliated Resource Center (HPC-ARC) is the host to a very large scale interactive computing cluster consisting of about 1800 nodes. Condor, the largest interactive Cell cluster in the world, consists of integrated heterogeneous processors of IBM Cell Broadband Engine (Cell BE) multicore CPUs, NVIDIA General Purpose Graphics Processing Units (GPGPUs) and Intel x86 server nodes in a 10Gb Ethernet star hub network and 20Gb/s InfiniBand mesh, with a combined capability of 500 trillion floating-point operations per second (500 TFLOPS). Applications developed and running on Condor include large-scale computational intelligence models, video synthetic aperture radar (SAR) back-projection, Space Situational Awareness (SSA), video target tracking, linear algebra and others. This presentation will discuss the design and integration of the system. It will also show progress on performance optimization efforts and lessons learned on algorithm scalability on a heterogeneous architecture.

Commentary by Dr. Valentin Fuster
2012;():779-784. doi:10.1115/DETC2012-70256.

The use of unsecure foundries has provided, and continues to provide, a pathway for counterfeit microelectronics into U.S. defense systems. As a result, the Warfighter has been put at risk and a solution is needed. To counter this dilemma, this study looks into the feasibility of creating a Department of Defense (DoD)-wide design cloud that would provide circuit designers with a more secure and economical way of designing and fabricating circuits. The design cloud would include secure communication to trusted foundries along with the needed circuit design software. Factors such as security, costs, benefits, and issues are taken into consideration in determining whether the use of the cloud would actually aid the integrated circuit design process.

Commentary by Dr. Valentin Fuster
2012;():785-791. doi:10.1115/DETC2012-70281.

We propose here a subspace augmented Rayleigh-Ritz conjugate gradient method (SaRCG) for solving large-scale eigen-value problems. The method is highly scalable and well suited for multi-core architectures since it only requires sparse matrix-vector multiplications (SpMV).

As a specific application, we consider the modal analysis of geometrically complex structures that are discretized via non-conforming voxels. The voxelization process is robust and relatively insensitive to geometric complexity, but it leads to large eigen-value problems that are difficult to solve via standard eigen-solvers such as block-Lanczos.

Such problems are easily solved via the proposed SaRCG, where one can, in addition, exploit the voxelization structure to render the SpMV assembly-free. As the numerical experiments indicate, the resulting implementation on multi-core CPUs and graphics processing units is a practical solution to automated eigen-value estimation during early stages of design.
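
To make the assembly-free idea concrete in a simplified setting, the sketch below applies a regular-grid stencil operator matrix-free and estimates its largest eigenvalue purely from operator applications; the 2D five-point Laplacian and the plain power iteration are stand-ins chosen for brevity, not the SaRCG algorithm or the voxelized stiffness and mass operators used in the paper.

    import numpy as np

    # Matrix-free (assembly-free) operator application on a regular grid: the
    # matrix-vector product is computed from the stencil, so no sparse matrix is stored.
    def apply_laplacian(u):
        """5-point Laplacian on a 2D field with zero (Dirichlet) boundaries."""
        v = 4.0 * u
        v[1:, :]  -= u[:-1, :]
        v[:-1, :] -= u[1:, :]
        v[:, 1:]  -= u[:, :-1]
        v[:, :-1] -= u[:, 1:]
        return v

    def power_iteration(apply_op, shape, iters=200, seed=0):
        """Estimate the largest eigenvalue using only operator applications."""
        x = np.random.default_rng(seed).normal(size=shape)
        for _ in range(iters):
            y = apply_op(x)
            x = y / np.linalg.norm(y)
        return float(np.vdot(x, apply_op(x)))   # Rayleigh quotient of the unit vector x

    if __name__ == "__main__":
        n = 64
        lam_max = power_iteration(apply_laplacian, (n, n))
        # for this operator the largest eigenvalue approaches 8 as the grid is refined
        print(f"matrix-free largest-eigenvalue estimate: {lam_max:.3f}")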

Topics: Architecture
Commentary by Dr. Valentin Fuster
2012;():793-799. doi:10.1115/DETC2012-70818.

We have developed a prototype web-based CAE system using HTML5 and WebGL technology together with the ADVENTURE system. The main system, including the solver, runs on a remote server in the manner of cloud computing, and the user interface client runs in a web browser on Internet devices (including PCs, smartphones, and tablet devices) without any additional software or plug-ins.

In this system, we use the ADVENTURE system as the server-side CAE system. The ADVENTURE system is a general-purpose computational analysis system that we have been developing. The aim of the system is to enable analysis of three-dimensional finite element models of arbitrary shape with meshes of over 100 million Degrees Of Freedom (DOF) on massively parallel computers such as a large-scale PC cluster or a supercomputer. To solve large-scale problems, domain-decomposition-based parallel algorithms are implemented in the pre-process (domain decomposition), the main process (system matrix assembly and solution) and the post-process (visualization) of the ADVENTURE system. The hierarchical domain decomposition method (HDDM) with balancing domain decomposition (BDD) as a preconditioned iterative solver is adopted in the main process. A module-based architecture with standardized I/O formats and libraries is also employed to attain flexibility, portability, extensibility and maintainability of the whole system.

Since the software has become a quite large system, it is not easy for users to install or operate it on parallel machines. To address this issue, the authors previously developed a network-based CAE system for large-scale problems, in which the main system, including the solver, runs on a remote server in the manner of cloud computing and is operated from a client PC through the network. Users need not touch the server system, and easy operation on the client side was provided. However, that system requires users to install client software, which may become a barrier to starting to use the system.

Commentary by Dr. Valentin Fuster
2012;():801-805. doi:10.1115/DETC2012-71121.

This paper describes the software infrastructure needed to enable massive multi-body simulation using multiple GPUs. Utilizing a domain decomposition approach, a large system made up of billions of bodies can be split into self-contained subdomains which are then transferred to different GPUs and solved in parallel. Parallelism is enabled on multiple levels: first on the CPU through OpenMP and second on the GPU through NVIDIA CUDA (Compute Unified Device Architecture). This heterogeneous software infrastructure can be extended to networks of computers using MPI (Message Passing Interface), as each subdomain is self-contained. This paper will discuss the implementation of the spatial subdivision algorithm used for subdomain creation along with the algorithms used for collision detection and constraint solution.

Commentary by Dr. Valentin Fuster
2012;():807-817. doi:10.1115/DETC2012-71236.

Laser beams can be used to create optical traps that can hold and transport small particles. Optical trapping has been used in a number of applications ranging from prototyping at the microscale to biological cell manipulation. Successfully using optical tweezers requires predicting the optical forces on the particle being trapped and transported. Reasonably accurate theory and computational models exist for predicting optical forces on a single particle in the close vicinity of a Gaussian laser beam. However, in practice the workspace includes multiple particles that are manipulated using individual optical traps. It has been experimentally shown that the presence of a particle can cast a shadow on a nearby particle and hence affect the optical forces acting on it. Computing optical forces in the presence of shadows in real time is not feasible on CPUs. In this paper, we introduce a ray-tracing-based application optimized for GPUs to calculate the forces exerted by the laser beams on microparticle ensembles in an optical tweezers system. When evaluating the force exerted by a laser beam on 32 interacting particles, our GPU-based application achieves a 66-fold speedup compared to a single-core CPU implementation of the traditional Ashkin approach and a 10-fold speedup over its single-core CPU-based counterpart.

Commentary by Dr. Valentin Fuster
2012;():819-829. doi:10.1115/DETC2012-71267.

Silicon anisotropic etching simulation, based on a geometric model or a cellular automata (CA) model, is highly time-consuming. In this paper, we propose two parallelization methods for the simulation of the silicon anisotropic etching process with CA models on graphics processing units (GPUs). One is the direct parallelization of the serial CA algorithm, and the other uses a spatial parallelization strategy in which each crystal unit cell is allocated to a thread on the GPU. The proposed simulation methods are implemented with the Compute Unified Device Architecture (CUDA) application programming interface. Several computational experiments are conducted to analyze the efficiency of the methods.
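
The per-cell update that makes this problem map well to one GPU thread per cell can be sketched as follows. The 2D grid, the uniform removal probability and the periodic boundaries are simplifying assumptions for illustration; a real etch simulator uses crystal-plane-dependent rates in 3D.

```python
# Minimal sketch of a cellular-automaton etch step on a 2D grid. Each cell is
# either solid (1) or etched (0); a solid cell exposed to etchant (i.e. with
# at least one etched neighbor) is removed with some probability that, in a
# real simulator, would depend on the local crystal plane. One GPU thread per
# cell would evaluate exactly this update; here it is vectorized NumPy.
import numpy as np

def etch_step(solid, removal_prob, rng):
    # Count etched (0-valued) face neighbors of every cell.
    # (Periodic boundaries via np.roll, kept only for brevity.)
    etched = 1 - solid
    exposure = (np.roll(etched, 1, 0) + np.roll(etched, -1, 0) +
                np.roll(etched, 1, 1) + np.roll(etched, -1, 1))
    exposed = (solid == 1) & (exposure > 0)
    remove = exposed & (rng.random(solid.shape) < removal_prob)
    return np.where(remove, 0, solid)

rng = np.random.default_rng(1)
solid = np.ones((256, 256), dtype=np.int8)
solid[0, :] = 0                      # open the top surface to the etchant

for step in range(200):
    solid = etch_step(solid, removal_prob=0.3, rng=rng)

print("remaining solid fraction:", solid.mean())
```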

Commentary by Dr. Valentin Fuster
2012;():831-838. doi:10.1115/DETC2012-71315.

The current work promotes the implementation of the Smoothed Particle Hydrodynamics (SPH) method for the Fluid-Solid Interaction (FSI) problems on three levels: 1- an algorithm is described to simulate FSI problems, 2- a parallel GPU implementation is described to efficiently alleviate the performance problem of the SPH method, and 3- validations against other numerical methods and experimental results are presented to demonstrate the accuracy of SPH and SPH-based FSI simulations. While the numerical solution of the fluid dynamics is performed via SPH method, the general Newton-Euler equations of motion are solved for the time evolution of the rigid bodies. Moreover, the frictional contacts in the solid phase are resolved by the Discrete Element Method (DEM), which draws on a viscoelastic model for the mutual interactions. SPH is a Lagrangian method and allows an efficient and straightforward coupling of the fluid and solid phases, where any interface, including boundaries, can be decomposed by SPH particles. Therefore, with a single SPH algorithm, fluid flow and interfacial interactions, namely force and motion, are considered. Furthermore, without any extra effort, the contact resolution of rigid bodies with complex geometries benefits from the spherical decomposition of solid surfaces. Although SPH provides 2nd order accuracy in the discretization of mass and momentum equations, the pressure field may still exhibit large oscillations. One of the most straightforward and computationally inexpensive solutions to this problem is the density re-initialization technique. Additionally, to prevent particle interpenetration and improve the incompressibility of the flow field, the XSPH correction is adopted herein. Despite being relatively straightforward to implement for the analysis of both internal and free surface flows, a naïve SPH simulation does not exhibit the efficiency required for the 3D simulation of real-life fluid flow problems. To address this issue, the software implementation of the proposed framework relies on parallel implementation of the spatial subdivision method on the Graphics Processing Unit (GPU), which allows for an efficient 3D simulation of the fluid flow. Similarly, the time evolution and contact resolution of rigid bodies are implemented using independent GPU-based kernels, which results in an embarrassingly parallel algorithm. Three problems are considered in the current work to show the accuracy of SPH and FSI algorithms. In the first problem, the simulation of the transient Poiseuille flow exhibits an exact match with the analytical solution in series form. The lateral migration of the neutrally buoyant circular cylinder, referred to as tubular pinch effect, is successfully captured in the second problem. In the third problem, the migration of spherical particles in pipe flow was simulated. Two tests were performed to demonstrate whether the Magnus effect or the curvature of the velocity profile cause the particle migration. At the end, the original experiment of the Segre and Silberberg (Segre and Silberberg, Nature 189 (1961) 209–210), which is composed of 3D fluid flow and several rigid particles, is simulated.
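
The XSPH correction adopted in the abstract above can be written compactly. The cubic-spline kernel and the brute-force neighbor loop below are common choices used here only for clarity; the paper's GPU implementation relies on spatial subdivision for the neighbor search.

```python
# Minimal sketch of the XSPH velocity correction: each particle's transport
# velocity is smoothed with a kernel-weighted average of velocity differences
# to its neighbors. Brute-force neighbor search for clarity only.
import numpy as np

def cubic_spline_W(r, h):
    """Standard 2D cubic-spline SPH kernel (one common choice)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def xsph_correction(pos, vel, mass, rho, h, eps=0.5):
    n = len(pos)
    dv = np.zeros_like(vel)
    for i in range(n):
        r = np.linalg.norm(pos - pos[i], axis=1)
        w = cubic_spline_W(r, h)
        rho_bar = 0.5 * (rho + rho[i])
        contrib = (mass / rho_bar)[:, None] * (vel - vel[i]) * w[:, None]
        dv[i] = contrib.sum(axis=0)
    return vel + eps * dv        # corrected (transport) velocity

# Tiny example: a few particles with slightly different velocities.
pos = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
vel = np.array([[1.0, 0.0], [0.9, 0.0], [1.1, 0.0], [1.0, 0.1]])
mass = np.full(4, 1.0)
rho = np.full(4, 1000.0)
print(xsph_correction(pos, vel, mass, rho, h=0.15))
```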

Commentary by Dr. Valentin Fuster
2012;():839-846. doi:10.1115/DETC2012-71352.

The Absolute Nodal Coordinate Formulation (ANCF) has been widely used to carry out the dynamics analysis of flexible bodies that undergo large rotation and large deformation. This formulation is consistent with the nonlinear theory of continuum mechanics and is computationally more efficient compared to other nonlinear finite element formulations. Kinematic constraints that represent mechanical joints and specified motion trajectories can be introduced to make complex flexible mechanisms. As the complexity of a mechanism increases, the system of differential algebraic equations becomes very large and results in a computational bottleneck. This contribution helps alleviate this bottleneck using three tools: (1) an implicit time-stepping algorithm, (2) fine-grained parallel processing on the Graphics Processing Unit (GPU), and (3) enabling parallelism through a novel Constraint-Based Mesh (CBM) approach. The combination of these tools results in a fast solution process that scales linearly for large numbers of elements, allowing meaningful engineering problems to be solved.

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Inverse Problems in Science and Engineering

2012;():847-852. doi:10.1115/DETC2012-70025.

In order to better understand the mechanical properties of biological cells, characterization and investigation of their material behavior is necessary. In this paper, a hyperelastic Neo-Hookean material model is used to characterize the mechanical properties of a mouse oocyte cell. The cell has been assumed to behave as a continuous, isotropic, nonlinear and homogeneous material. Then, by matching the experimental data with finite element (FE) simulation results and using the Levenberg–Marquardt optimization algorithm, the nonlinear hyperelastic model parameters have been extracted. Experimental data for the mouse oocyte were taken from the literature. An advantage of the developed model is that it can be used to calculate accurate reaction forces on a surgical instrument, or to compute deformations and forces in virtual-reality-based medical simulations.
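
The identification loop described above can be sketched in a few lines. In the paper the forward model is a finite element simulation; here a hypothetical analytical surrogate (force_model) stands in for the FE solve, and the data are synthetic, purely to show how Levenberg–Marquardt fitting is wired up.

```python
# Minimal sketch of the parameter-identification loop: Levenberg-Marquardt
# adjusts the Neo-Hookean shear modulus until the predicted reaction force
# matches the measured force-indentation curve. force_model() is a
# hypothetical surrogate replacing the FE solve, for illustration only.
import numpy as np
from scipy.optimize import least_squares

def force_model(mu, depth):
    """Hypothetical surrogate: reaction force proportional to the shear
    modulus mu with a mildly nonlinear dependence on indentation depth."""
    return mu * (depth + 4.0 * depth**2)

# Synthetic "experimental" data generated with mu = 500 Pa plus noise.
depth = np.linspace(0.0, 1.0e-3, 20)                 # indentation depth [m]
rng = np.random.default_rng(0)
force_exp = force_model(500.0, depth) + rng.normal(0.0, 1e-6, depth.size)

def residuals(params):
    (mu,) = params
    return force_model(mu, depth) - force_exp

fit = least_squares(residuals, x0=[100.0], method="lm")
print("identified shear modulus [Pa]:", fit.x[0])
```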

Commentary by Dr. Valentin Fuster
2012;():853-860. doi:10.1115/DETC2012-70194.

This paper presents a numerical analysis method for shape optimization aimed at stiffness maximization in thermoelastic fields. Mean compliance is used as the objective functional for the shape optimization problem. The mean compliance minimization problem in thermoelastic fields is formulated under a volume constraint condition. The shape gradient of the shape optimization problem is derived theoretically using the adjoint variable method, the Lagrange multiplier method and the formulae of the material derivative. Reshaping is accomplished using the traction method, which was proposed as a solution to shape optimization problems. In addition, a new numerical procedure for the shape optimization is proposed. The validity of the proposed method is confirmed based on the results of 2D numerical analysis.

Commentary by Dr. Valentin Fuster
2012;():861-872. doi:10.1115/DETC2012-70343.

A natural approach to the inverse coupled nonlinear lumped parameter problem is to use a continuous approximation of the data and its derivatives using Bezier functions. The method is robust and does not need regularization or other transformation to convert the inverse problem to a well-posed one. The Bezier function technique first identifies continuous Bezier functions to represent the measured data using a recursive Bezier filter. These Bezier functions also provide information on the derivatives of the functions over the range of the independent variable. This is used in two ways. First, the residuals of the differential equations now depend on the distributed parameters alone. Second, the boundary conditions for a reduced range of the independent variable are easily generated. The lumped parameters are determined through two applications of standard unconstrained optimization. The first application identifies the parameters by minimizing the sum of the least squared error in the residuals over the range of the data. This also provides a robust initial guess for the second optimization. In the second optimization, the parameters are determined once again using a collocation technique to integrate the solution and reduce the sum of the absolute error with respect to the smooth representation of the data. Fluid flow in a long vertical channel with fluid injection is used to illustrate the procedure. The measured data is simulated using random perturbations about the smooth solution to the forward problem. The results show that the Bezier function technique can identify a solution even with severe perturbations in the data. It is also shown that using the information from a clipped region can lead to a well-posed problem.
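
The first stage described above (a smooth Bezier representation of the data, with analytically available derivatives) can be illustrated briefly. The recursive Bezier filter and the subsequent parameter optimizations of the paper are not reproduced; the degree, data and noise below are assumptions for illustration.

```python
# Minimal sketch: fit a Bezier (Bernstein-basis) function to noisy data by
# linear least squares, then evaluate its derivative analytically.
import numpy as np
from math import comb

def comb_vec(n, k):
    return np.vectorize(comb)(n, k).astype(float)

def bernstein_matrix(t, degree):
    """Rows: samples, columns: Bernstein polynomials B_{i,degree}(t)."""
    t = np.asarray(t)[:, None]
    i = np.arange(degree + 1)[None, :]
    return comb_vec(degree, i) * t**i * (1.0 - t)**(degree - i)

# Noisy data on [0, 1].
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(3)
y = np.sin(2.0 * np.pi * t) + rng.normal(0.0, 0.05, t.size)

degree = 7
B = bernstein_matrix(t, degree)
coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)        # Bezier control values

# Derivative of a Bezier function: degree * differences of the coefficients.
dcoeffs = degree * np.diff(coeffs)
dy = bernstein_matrix(t, degree - 1) @ dcoeffs

print("max fit error vs. noiseless signal:",
      np.abs(B @ coeffs - np.sin(2.0 * np.pi * t)).max())
```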

Commentary by Dr. Valentin Fuster
2012;():873-882. doi:10.1115/DETC2012-70584.

The present paper proposes a new dynamic-biorthogonality-based Bayesian formulation for the calibration of computer simulators with parametric uncertainty. The formulation uses a decomposition of the solution field into a mean and a random field. The random field is represented as a convolution of separable Hilbert spaces in the stochastic and spatial dimensions. Both dimensions are spectrally represented using respective orthogonal bases. In particular, the present paper investigates a polynomial chaos basis for the stochastic dimension and an eigenfunction basis for the spatial dimension. Dynamic evolution equations are derived such that the basis in the stochastic dimension is retained while the basis in the spatial dimension is changed in a way that maintains dynamic orthogonality. The resulting evolution equations are used to propagate the prior uncertainty in the input parameters to the solution output. Whenever new information is available through experimental observations or expert opinion, Bayes' theorem is used to update the basis in the stochastic dimension. The efficacy of the proposed methodology is demonstrated for the calibration of a 2D transient diffusion equation with uncertainty in the source location. The computational efficiency of the method is demonstrated against the generalized polynomial chaos and Monte Carlo methods.
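
The polynomial chaos ingredient referenced above can be illustrated for a single Gaussian input. The forward model below is a hypothetical scalar stand-in; the dynamically biorthogonal decomposition and the Bayesian update of the paper are not reproduced.

```python
# Minimal sketch: project a model output with one Gaussian input onto
# probabilists' Hermite polynomials by Gauss-Hermite quadrature (a 1D
# polynomial chaos expansion). Hypothetical model; illustration only.
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def model(xi):
    """Hypothetical forward model: a scalar response as a function of the
    standardized uncertain parameter xi."""
    return np.exp(0.3 * xi)

order = 6
nodes, weights = He.hermegauss(30)          # weight function exp(-xi^2 / 2)
weights = weights / np.sqrt(2.0 * np.pi)    # normalize to a standard Gaussian

# PCE coefficients: c_k = E[model(xi) He_k(xi)] / E[He_k(xi)^2],
# with E[He_k^2] = k! for probabilists' Hermite polynomials.
coeffs = []
for k in range(order + 1):
    Hk = He.hermeval(nodes, [0.0] * k + [1.0])
    coeffs.append(np.sum(weights * model(nodes) * Hk) / factorial(k))
coeffs = np.array(coeffs)

# Mean and variance of the output follow directly from the coefficients.
mean = coeffs[0]
var = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
print("PCE mean:", mean, " exact:", np.exp(0.3**2 / 2))
print("PCE var :", var, " exact:", np.exp(0.3**2) * (np.exp(0.3**2) - 1.0))
```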

Topics: Calibration
Commentary by Dr. Valentin Fuster
2012;():883-891. doi:10.1115/DETC2012-71050.

The present paper describes a methodology for the inverse identification of the complete set of parameters associated with the Weierstrass-Mandelbrot (W-M) function that can describe any fractal scalar field distribution of measured data defined within a volume. Our effort is motivated by the need to be able to describe a scalar field quantity distribution in a volume in order to represent analytically various non-homogeneous material property distributions for engineering and science applications. Our method involves a refactoring of the W-M function that permits defining the characterization problem as a high-dimensional singular value decomposition problem for the determination of the so-called phases of the function. Coupled with this process is a second-level exhaustive search that enables the determination of the density of the frequencies involved in defining the trigonometric functions appearing in the definition of the W-M function. Numerical applications of the proposed method on both synthetic and actual volume data validate the efficiency and the accuracy of the proposed approach. This approach constitutes a radical departure from traditional fractal dimension characterization studies, opens the road for a very large number of applications, and generalizes the approach developed by the authors for fractal surfaces to fractal volumes.
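
For readers unfamiliar with the W-M function, the following evaluates one common 1D form of it, showing the kind of multi-scale field the identified parameters (fractal dimension, frequency ratio, phases) describe. The paper's volumetric refactoring, SVD-based phase identification and frequency-density search are not reproduced here.

```python
# Minimal sketch: evaluate one common 1D form of the Weierstrass-Mandelbrot
# (W-M) function with random phases. Illustration only.
import numpy as np

def wm_profile(x, D, gamma, phases):
    """z(x) = sum_n gamma^{(D-2) n} * (cos(phi_n) - cos(gamma^n x + phi_n)),
    a standard fractal profile with dimension 1 < D < 2."""
    n = np.arange(len(phases))[:, None]            # frequency indices
    amp = gamma**((D - 2.0) * n)                   # decaying amplitudes
    arg = gamma**n * x[None, :] + phases[:, None]
    return np.sum(amp * (np.cos(phases)[:, None] - np.cos(arg)), axis=0)

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 2000)
z = wm_profile(x, D=1.5, gamma=1.5, phases=rng.uniform(0, 2 * np.pi, 40))
print("profile rms:", z.std())
```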

Topics: Fractals
Commentary by Dr. Valentin Fuster
2012;():893-903. doi:10.1115/DETC2012-71088.

This paper presents a methodology for identifying defects by multiple sensors under the presence of both sensor and defect uncertainties. The methodology represents the beliefs about both the defect locations and the sensors each by a probability density function and updates them using the extended Kalman filter. Since the beliefs are recursively maintained while the sensor is moving and the associated observation data are updated, the proposed methodology considers not only the current observation data but also the prior knowledge and the past observation data and beliefs, which include both sensor and defect uncertainties. The concept of differential entropy has also been introduced and is utilized as a performance measure to evaluate the result of defect identification and to handle the identification of multiple defects. The verification and evaluation of the proposed methodology's performance were conducted via parametric numerical studies. The results have shown the successful identification of defects with reduced uncertainty as the number of measurements increases, even under the presence of large sensor uncertainties. Furthermore, the proposed methodology was applied to the more realistic problem of identifying multiple defects located on a specimen, demonstrating its applicability to practical defect identification problems.
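
The recursive belief update at the core of this methodology is the standard extended Kalman filter measurement update, sketched below. The range-type observation model and all numbers are hypothetical stand-ins; the paper's sensor-uncertainty coupling and multi-defect handling are not reproduced.

```python
# Minimal sketch of an EKF measurement update for a defect-location belief.
# The observation model h() (a range reading from a sensor to the defect) is
# a hypothetical stand-in; the differential-entropy measure appears only as
# the final log-determinant line.
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update for state x with covariance P."""
    H = H_jac(x)
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ (z - h(x))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Belief over the 2D defect location.
x = np.array([0.5, 0.5])                      # prior mean [m]
P = np.diag([0.2**2, 0.2**2])                 # prior covariance

sensor = np.array([0.0, 0.0])
h = lambda x: np.array([np.linalg.norm(x - sensor)])          # range model
H_jac = lambda x: ((x - sensor) / np.linalg.norm(x - sensor))[None, :]
R = np.array([[0.01**2]])                     # sensor noise

z = np.array([0.80])                          # one noisy range measurement
x, P = ekf_update(x, P, z, h, H_jac, R)

# Differential entropy of the Gaussian belief (up to constants): a lower
# value indicates a more certain identification.
print("posterior mean:", x,
      " entropy ~ 0.5*logdet(P):", 0.5 * np.linalg.slogdet(P)[1])
```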

Topics: Sensors , Uncertainty
Commentary by Dr. Valentin Fuster
2012;():905-915. doi:10.1115/DETC2012-71119.

This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in question is a custom-made 6-DoF system called NRL66.3, developed at the Naval Research Laboratory, that consists of multiple sets of hexapod configurations essentially forming a recursive synthesis of multiple parallel mechanisms. Hexapod linkages, the grips, and other deformable parts of the machine absorb energy. This is manifested in an either reversible or irreversible manner, thus introducing a parasitic behavior that is undesirable from the perspective of our ultimate goal of material constitutive characterization. The data reduction focuses on both the kinematic quantities (pose of the grip) and the reaction quantities (forces and moments) that are critical inputs to the material characterization process. The kinematic response is reduced by exploiting the kinematics of the dots used for full-field measurements. A correction transformation is identified by solving an inverse problem that minimizes the discrepancy between the displacements at the grips given by the full-field measurements and those given by the machine's displacement sensors. A Procrustes problem formalism was introduced to exploit a known material behavior tested by the testing machine. Consequently, a correction transformation was established and applied to the load cell data of the machine in order to eliminate the spurious responses appearing in the force and moment data.
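
The Procrustes step referenced above has a standard closed-form SVD solution, sketched below for aligning two point sets. The synthetic data and the assumption of a pure rotation plus translation are for illustration; the paper's load-cell correction is not reproduced.

```python
# Minimal sketch of the orthogonal Procrustes step: find the rotation (plus
# translation) that best aligns grip poses reported by the machine's sensors
# with those recovered from the full-field dot measurements.
import numpy as np

def procrustes_align(A, B):
    """Return rotation R and translation t minimizing ||(A R + t) - B||_F."""
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(A0.T @ B0)
    R = U @ Vt
    if np.linalg.det(R) < 0:                 # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    t = B.mean(axis=0) - A.mean(axis=0) @ R
    return R, t

# Synthetic check: rotate/translate a point set and recover the transform.
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 3))                  # "machine sensor" positions
theta = np.deg2rad(3.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
B = A @ R_true + np.array([0.1, -0.05, 0.02]) # "full-field" positions

R, t = procrustes_align(A, B)
print("rotation recovered:", np.allclose(R, R_true, atol=1e-8))
```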

Commentary by Dr. Valentin Fuster
2012;():917-925. doi:10.1115/DETC2012-71245.

This paper presents an inverse approach for estimating time-varying loads acting on a structure from experimental strain measurements using model reduction. The strain response of an elastic vibrating system is written as a linear superposition of strain modes. Since the strain modes, like the normal displacement modes, are intrinsic dynamic characteristics of a system, the dynamic loads exciting a structure are estimated by measuring the induced strain fields. The accuracy of the estimated loads depends on the placement of gages on the instrumented structure and the number of strain modes retained from strain modal analysis. A solution procedure based on the construction of a D-optimal design is implemented to determine the optimum locations and orientations of strain gages. An efficient approach is proposed which makes use of a model reduction technique, resulting in significant improvement in the dynamic load estimation. Validation of the proposed approach through a numerical example problem is also presented.
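
The inversion step can be illustrated with a synthetic strain-mode matrix: gage strains are written as a superposition of retained strain modes and inverted by least squares for the modal coordinates, from which loads follow through the structural model. The D-optimal gage placement and model reduction of the paper are not reproduced, and the mode matrix below is made up.

```python
# Minimal sketch: invert measured gage strains eps(t) = Phi_gage @ q(t) for
# the modal coordinates q(t) by least squares, using a synthetic mode matrix.
import numpy as np

rng = np.random.default_rng(5)
n_gages, n_modes, n_steps = 8, 3, 200

Phi_gage = rng.normal(size=(n_gages, n_modes))     # strain modes at gage sites
q_true = np.vstack([np.sin(0.05 * np.arange(n_steps) * (k + 1))
                    for k in range(n_modes)])       # true modal histories

eps_meas = Phi_gage @ q_true + rng.normal(0.0, 0.01, (n_gages, n_steps))

# Least-squares inversion for the modal coordinates at every time step.
q_est, *_ = np.linalg.lstsq(Phi_gage, eps_meas, rcond=None)

print("max modal-coordinate error:", np.abs(q_est - q_true).max())
```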

Topics: Stress , Design , Transducers
Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Knowledge-Capture, Reuse, and Management

2012;():927-935. doi:10.1115/DETC2012-70240.

This paper proposes a cross-disciplinary methodology for a fundamental question in product development: How can the innovation patterns during the evolution of an engineering system (ES) be encapsulated, so that they can later be mined through data analysis methods? Reverse engineering answers the question of which components a developed engineering system consists of, and how the components interact to make the working product. TRIZ answers the question of which problem-solving principles can be, or have been, employed in developing that system, in comparison to its earlier versions, or with respect to similar systems. While these two methodologies have been very popular, to the best of our knowledge, there does not yet exist a methodology that reverse-engineers, encapsulates and represents the information regarding the application of TRIZ through the complete product development process. This paper suggests such a methodology, consisting of mathematical formalism, graph visualization, and database representation. The proposed approach is demonstrated by analyzing the design and development process for a prototype wrist-rehabilitation robot and representing the process as a graph of TRIZ principles.

Commentary by Dr. Valentin Fuster
2012;():937-945. doi:10.1115/DETC2012-70754.

In product design, passing undetected errors downstream can cause an error avalanche, diminish product acceptance and greatly increase the overall cost. Yet, it is difficult for designers to collect all the related potential errors from different departments in the initial design phase. In order to deal with these problems, this paper puts forward an ontology-based method to integrate related historical error data from different data sources of multiple departments in an enterprise. By using the advantages of ontologies and ontology-based information systems in knowledge management and semantic reasoning, the method enables the investigation of the root causes of related potential malfunctions in the early product design phase. The framework can provide warnings and root causes of related potential errors in design based on historical data and thus continuously improve the product design. In this manner, the method is expected to reduce the knowledge limitations of designers in the initial design phase, help designers consider problems across the whole enterprise and the product life cycle more completely, facilitate design improvement more accurately and efficiently, and further reduce the cost of the overall product life cycle.

Commentary by Dr. Valentin Fuster
2012;():947-958. doi:10.1115/DETC2012-71189.

Capturing engineering domain knowledge is a challenging problem due to several factors, such as the complexity of the knowledge as well as the multiplicity of representation techniques with varying underlying syntax and semantics. The existing approaches for knowledge capture and organization in engineering are typically ad hoc in nature, which inhibits efficient knowledge sharing and deployment. The objective of this paper is to introduce a systematic methodology for knowledge capture, organization, and modeling in the manufacturing domain. The proposed methodology focuses on the conceptualization phase and utilizes the Simple Knowledge Organization System (SKOS) early in the process for creating a controlled vocabulary, or thesaurus, in the domain of interest. The resulting SKOS-based thesaurus is a lightweight ontology that helps identify the core concepts that will be translated into formal classes in an axiomatic ontology such as MSDL. The proposed methodology is discussed with the aid of examples from the metal casting domain. The use of the Semantic Web Rule Language (SWRL) for the representation of constraint knowledge is also discussed in this paper.
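
The conceptualization step can be pictured with a tiny SKOS thesaurus built with rdflib. The casting concepts and the namespace below are made up; the actual MSDL ontology and the SWRL constraint rules of the paper are not reproduced.

```python
# Minimal sketch: a small SKOS thesaurus for a few (made-up) metal-casting
# concepts, built with rdflib. Illustration only.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/casting#")   # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

scheme = EX.CastingThesaurus
g.add((scheme, RDF.type, SKOS.ConceptScheme))

def add_concept(uri, label, broader=None):
    g.add((uri, RDF.type, SKOS.Concept))
    g.add((uri, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((uri, SKOS.inScheme, scheme))
    if broader is not None:
        g.add((uri, SKOS.broader, broader))

add_concept(EX.Casting, "Casting process")
add_concept(EX.SandCasting, "Sand casting", broader=EX.Casting)
add_concept(EX.InvestmentCasting, "Investment casting", broader=EX.Casting)
add_concept(EX.GreenSandCasting, "Green sand casting", broader=EX.SandCasting)

print(g.serialize(format="turtle"))
```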

Topics: Modeling
Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Material Characterization

2012;():959-966. doi:10.1115/DETC2012-70681.

This paper presents a new method for the correction of lens distortion for full-field measurement using the dot centroid method. Displacement and strain can be obtained from full-field digital images that contain many dots marked on the specimen, whose centroids are tracked. The captured images are distorted by the optical system. Several approaches to removing such distortion have been developed; however, they correct the image by interpolating the brightness intensity levels. In short, it is hard to achieve high accuracy in strain measurement with such interpolation, because high accuracy requires sub-pixel precision in computing a centroid.

In this paper, the dot centroid method is employed and the centroids are directly corrected by the proposed method, which corrects only the positions of the centroids. With this correction, 0.1% strain accuracy is achieved and the total computation time is reduced. The effectiveness of the idea is shown through numerical examples, and results of strain field measurement are presented and discussed.
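
The sub-pixel dot centroid that the proposed correction operates on can be sketched as a darkness-weighted mean pixel position. The synthetic Gaussian dot and the weighting convention below are illustrative assumptions; the paper's lens-distortion model applied to the centroid coordinates is not reproduced.

```python
# Minimal sketch of a sub-pixel dot centroid: the darkness-weighted mean
# pixel position inside the dot's window.
import numpy as np

def dot_centroid(window, x0, y0):
    """Darkness-weighted centroid of a grayscale window whose top-left pixel
    is at global coordinates (x0, y0); darker pixels (lower values) are the
    dot, so weights are (255 - intensity)."""
    w = 255.0 - window.astype(float)
    ys, xs = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    total = w.sum()
    cx = x0 + (w * xs).sum() / total
    cy = y0 + (w * ys).sum() / total
    return cx, cy

# Synthetic dark dot on a bright background, centered at (4.3, 5.6).
ys, xs = np.mgrid[0:12, 0:12]
window = 255.0 - 200.0 * np.exp(-((xs - 4.3)**2 + (ys - 5.6)**2) / 4.0)

print(dot_centroid(window, x0=0, y0=0))   # ~ (4.3, 5.6), i.e. sub-pixel
```
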

Commentary by Dr. Valentin Fuster
2012;():967-972. doi:10.1115/DETC2012-70704.

In recent years, several full-field measurement techniques have been studied, such as the digital image correlation method, the moiré interferometry method and the holographic interferometry method. Image-based methods can be easily applied to large-deformation problems and to specimens moving at slow speed. Because digital camera capabilities (high resolution, low noise and fast data transfer) have improved, very small strains can now be measured. These improvements will widen the range of applications, for example to objects moving at high speed and to strain measurements below 0.1%, which is almost the same accuracy as a precise strain gauge. To enable such advanced applications, noise reduction for digital images and lens distortion correction for the optical system should be developed. In this paper, we propose a noise reduction technique using a statistical camera model that can be applied to any kind of digital camera.

Commentary by Dr. Valentin Fuster
2012;():973-979. doi:10.1115/DETC2012-70909.

Fracture mechanics analysis using the finite element method has been one of the key methodologies for evaluating the structural integrity of aging infrastructure such as aircraft, ships, power plants, etc. However, three-dimensional crack analyses for structures with highly complex three-dimensional shapes have not been widely used, because of many technical difficulties such as the lack of sufficient computational power.

The authors have been developing a fracture mechanics analysis system that can deal with arbitrarily shaped cracks in three-dimensional structures. The system consists of mesh generation software, a finite element analysis program and a fracture mechanics module. In our system, the Virtual Crack Closure-Integral Method (VCCM) for quadratic tetrahedral finite elements is adopted to evaluate the stress intensity factors. The system can perform three-dimensional fracture analyses, including fatigue and SCC crack propagation analyses with more than one crack of arbitrarily complicated shape and orientation. The rate and direction of crack propagation are predicted by using appropriate formulae based on the stress intensity factors.

When the fracture mechanics analysis system is applied to complex-shaped aging structures with explicitly modeled cracks, the size of the finite element analysis tends to be very large. Therefore, a large-scale parallel structural analysis code is required. We have also been developing an open-source CAE system, ADVENTURE. It is based on the hierarchical domain decomposition method (HDDM) with the balancing domain decomposition (BDD) preconditioner. A general-purpose parallel structural analysis solver, ADVENTURE_Solid, is one of the solver modules of the ADVENTURE system.

In this paper, we combine VCCM for tetrahedral finite elements with the ADVENTURE system, and large-scale fracture analyses are fully automated. They are performed using the massively parallel supercomputer ES2 (Earth Simulator 2), which is owned and operated by JAMSTEC (Japan Agency for Marine-Earth Science and Technology).
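
Downstream of the VCCM stress intensity factors, crack growth is advanced by a propagation law; the sketch below integrates a generic Paris-type law with textbook-style constants and an illustrative SIF formula. The actual SIF evaluation, remeshing and SCC kinetics of the system are not reproduced.

```python
# Minimal sketch of a fatigue crack-growth step: a Paris-type law
# da/dN = C * (dK)^m integrated in blocks of cycles. C, m and the SIF formula
# are generic assumptions for illustration only.
import numpy as np

C, m = 1.0e-11, 3.0          # Paris constants (assumed; da/dN in m/cycle, dK in MPa*sqrt(m))
delta_sigma = 100.0          # stress range [MPa]
a = 1.0e-3                   # initial crack length [m]
a_crit = 20.0e-3             # stop criterion [m]

def delta_K(a):
    """Illustrative SIF range for a through crack: dK = dsigma * sqrt(pi a)."""
    return delta_sigma * np.sqrt(np.pi * a)

cycles, dN = 0, 1000         # integrate in blocks of 1000 cycles
while a < a_crit:
    a += C * delta_K(a)**m * dN      # Paris law: da/dN = C (dK)^m
    cycles += dN

print(f"cycles to grow from 1 mm to {a*1e3:.1f} mm: {cycles:,}")
```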

Commentary by Dr. Valentin Fuster
2012;():981-987. doi:10.1115/DETC2012-70969.

This paper presents formulations that enable the vision-based measurement of displacement and strain fields in a probabilistic manner. The proposed formulations are built on the dot centroid tracking (DCT) method with digital cameras, which measures the darkness of each pixel in gray scale, identifies dots marked on a specimen, derives the dot centroids using pixel darkness information, and derives displacement and strain fields by tracking the centroids and interpolating the nodal displacements and strains. Under the Gaussian assumption, the proposed formulations analytically propagate the standard deviation of uncertainty in the darkness measurement and estimate that of the displacement and strain field measurement. As a first step, the formulations were completed for continuous field measurement with triangular elements. Most advantageously, the proposed formulations allow discussion of measurement error bounds, which also enables quantitative comparison of the DCT method to other measurement techniques. For numerical validation, standard deviations of nodal displacements and strains estimated from the known darkness uncertainty were compared to those derived from large samples created with the same darkness uncertainty. The results show the validity of the proposed formulations and their potential for reliable measurement.

Commentary by Dr. Valentin Fuster
2012;():989-997. doi:10.1115/DETC2012-71007.

This paper presents a multi-linear modeling technique to characterize the nonlinear behavior of anisotropic materials. The nonlinear behavior of anisotropic materials is represented with a multi-linear stress-strain model under the assumption that the material is path-, rate- and temperature-independent. The multi-linear model is constructed by adopting an energy-based characterization, which equates the applied external work to the induced strain energy and inversely solves for the multi-linear model. A numerical technique based on the Kalman filter stochastically identifies the multi-linear coefficients and constructs the multi-linear model by modeling prior knowledge and empirical knowledge probabilistically. Since the coefficients are estimated with their associated variance, differential entropy can measure the certainty of all estimated coefficients as a single quantity. The validity of the proposed technique in estimating the multi-linear coefficients has first been demonstrated using pseudo-experimental data created by finite element analysis. Further investigation of the effectiveness of the proposed technique under different measurement noise levels shows its stable ability for nonlinear characterization of anisotropic materials.

Topics: Modeling
Commentary by Dr. Valentin Fuster
2012;():999-1009. doi:10.1115/DETC2012-71064.

In this paper we report on the first successful campaign of systematic, automated and massive multiaxial tests for composite material constitutive characterization. The 6-degree-of-freedom system developed at the Naval Research Laboratory (NRL), called NRL66.3, was used for this task. This was the inaugural run that served as the validation of the proposed overall constitutive characterization methodology. It involved performing 1152 tests in 12 business days, reaching a peak throughput of 212 tests per day. We describe the context of the effort in terms of the reasoning and the actual methods behind it. Finally, we present representative experimental data and associated constitutive characterization results for representative loading paths.

Commentary by Dr. Valentin Fuster
2012;():1011-1020. doi:10.1115/DETC2012-71082.

This paper is describing the preliminary results of an effort to validate a methodology developed for composite material constitutive characterization. This methodology involves using massive amounts of data produced from multiaxially tested coupons via a 6-DoF robotic system called NRL66.3 developed at the Naval Research Laboratory. The testing is followed by the employment of energy based design optimization principles to solve the inverse problem that determines the unknown parameters of the constitutive model under consideration. In order to validate identified constitutive models, finite element simulations using these models were exercised for three distinct specimen geometries. The first geometry was that of the characterization coupon under multiaxial loading. The second was that of open hole specimens in tension. The final one was that of stiffened panel substructures under tension. Actual experimental data from testing all these specimens were collected by the use of load cells, full field displacement and strain methods and strain gauges. Finally, the theoretical predictions were compared with the experimental ones in terms of strain field distributions and load-strain responses. The comparisons demonstrated excellent predictability of the determined constitutive responses with the predictions always within the error band of the methods used to collect the experimental data.

Commentary by Dr. Valentin Fuster
2012;():1021-1031. doi:10.1115/DETC2012-71109.

Full-field strain measurement techniques are based on computing the spatial derivatives of the approximation or interpolation of the underlying displacement fields extracted from digital imaging methods. These methods implicitly assume that the medium satisfies the compatibility conditions, which for any practical purpose is only true in the case of a continuum body that remains a continuum throughout the history of its mechanical loading. In the present work we introduce a method that can be used to calculate the strain components directly from typical digital imaging data, without the need for the continuum hypothesis or for displacement field differentiation. Thus it allows the imaging and measurement of strain fields from surfaces with discontinuities (i.e. small cracks). Numerical comparisons are performed based on synthetic data produced from an analytical solution for an open-hole domain in tension. Mean absolute error distributions are calculated and reported for both the traditional mesh-free random grid method and the direct strain method introduced in this paper. It is established that the more refined representation of strain provided by this approach is more accurate everywhere in the domain, but most importantly near the boundaries of the representation domain, where the error is higher for traditional methods.

Topics: Imaging
Commentary by Dr. Valentin Fuster
2012;():1033-1042. doi:10.1115/DETC2012-71173.

Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. While directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically two-point correlation function and cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other method models the microstructure using stochastic models like a Gaussian random field (GRF) and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but the optimization process can be very computationally intensive, especially for problems with a large image size; the latter generates a 3D microstructure quickly but sacrifices the accuracy. A hybrid optimization approach of modeling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach.
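
The statistical descriptor at the core of the correlation-based approach, the two-point correlation function, can be computed efficiently with an FFT autocorrelation, as sketched below for a synthetic binary image. The reconstruction optimization and the GRF hybrid step of the paper are not reproduced.

```python
# Minimal sketch: two-point correlation function S2(r) of a binary (phase)
# image via FFT-based autocorrelation, radially averaged. Illustration only.
import numpy as np

def two_point_correlation(phase):
    """S2 as a function of radial distance for a periodic binary image."""
    f = np.fft.fft2(phase)
    corr = np.fft.ifft2(f * np.conj(f)).real / phase.size   # autocorrelation
    corr = np.fft.fftshift(corr)

    # Radial average around the image center.
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    s2 = np.bincount(r.ravel(), weights=corr.ravel()) / np.bincount(r.ravel())
    return s2

# Synthetic two-phase image: random disks ("particles") in a matrix.
rng = np.random.default_rng(2)
ny = nx = 256
y, x = np.mgrid[0:ny, 0:nx]
phase = np.zeros((ny, nx))
for cx, cy in rng.uniform(0, nx, size=(40, 2)):
    phase[(x - cx)**2 + (y - cy)**2 < 8.0**2] = 1.0

s2 = two_point_correlation(phase)
print("volume fraction:", phase.mean(), " S2(0):", s2[0])   # S2(0) = fraction
```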

Commentary by Dr. Valentin Fuster

32nd Computers and Information in Engineering Conference: Model-Based Design and Verification of Complex and Large-Scale Systems

2012;():1043-1054. doi:10.1115/DETC2012-70135.

Cyber Physical Systems couple computational and physical elements; therefore, the behavior of geometry (deformations, kinematics), physics and controls needs to be certified using many different tools over a very high dimensional space. Because of the near-infinite number of ways such a system can fail to meet its requirements, we developed a Probabilistic Certificate of Correctness (PCC) metric which quantifies the probability of satisfying requirements with consistent statistical confidence.

PCC can be implemented as a scalable engineering practice for certifying complex system behavior at every milestone in the product lifecycle. This is achieved by: creating virtual prototypes at different levels of model abstraction and fidelity; capturing and integrating these models into a simulation process flow; verifying requirements in parallel by deploying virtual prototypes across large organizations; reducing certification time proportional to additional computational resources and trading off sizing, modeling accuracy, technology and manufacturing tolerances against requirements and cost.

This process is an improvement over the V-cycle because verification and validation happen at every stage of the system engineering process, thus reducing rework in the more expensive implementation and physical certification phase. The PCC process is illustrated using the example of “Safe Range” certification for a UAV with active flutter control.
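
The statistical core of such a certificate can be illustrated with a Monte Carlo estimate plus a confidence bound. The flutter-margin model, the distributions and the one-sided Wilson bound below are hypothetical stand-ins for illustration, not the PCC tool chain described in the paper.

```python
# Minimal sketch: estimate the probability that a requirement is met from
# Monte Carlo samples of a (hypothetical) simulation, and attach a lower
# confidence bound so the certificate carries a stated confidence.
import numpy as np
from scipy import stats

def simulate_flutter_margin(rng, n):
    """Hypothetical stand-in for the virtual prototype: returns a margin that
    must stay positive for the 'safe range' requirement to hold."""
    speed = rng.normal(150.0, 10.0, n)           # operating airspeed samples
    flutter_speed = rng.normal(190.0, 12.0, n)   # uncertain flutter onset
    return flutter_speed - speed

rng = np.random.default_rng(11)
n = 20_000
margin = simulate_flutter_margin(rng, n)
passed = np.count_nonzero(margin > 0.0)

p_hat = passed / n
# One-sided lower confidence bound (Wilson score) at 95% confidence.
z = stats.norm.ppf(0.95)
denom = 1.0 + z**2 / n
center = (p_hat + z**2 / (2 * n)) / denom
half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
pcc_lower = center - half

print(f"estimated P(requirement met) = {p_hat:.4f}")
print(f"lower bound at 95% confidence >= {pcc_lower:.4f}")
```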

Commentary by Dr. Valentin Fuster
2012;():1055-1064. doi:10.1115/DETC2012-70180.

For complex, safety-critical systems failures due to component faults and system interactions can be catastrophic. One aspect of ensuring a safe system design is the analysis of the impact and risk of potential faults early in the system design process. This early design-stage analysis can be accomplished through function-based reasoning on a qualitative behavior simulation of the system. Reasoning on the functional effect of failures provides designers with the information needed to understand the potential impact of faults. This paper proposes three different methods for evaluating and grouping the results of a function failure analysis and their use in design decision-making. Specifically, a method of clustering failure analysis results based on consequence is presented to identify groups of critical failures. A method of clustering using Latent Class Analysis provides characterization of high-level, emergent system failure behavior. Finally, a method of identifying functional similarity provides lists of similar and identical functional effects to a system state of interest. These three methods are applied to the function-based failure analysis results of 677 single and multiple fault scenarios in an electrical power system. The risk-based clustering found three distinct levels of scenario functional impact. The Latent Class Analysis identified five separate failure modes of the system. Finally, the similarity grouping identified different groups of scenarios with identical and similar functional impact to specific scenarios of interest. The overall goal of this work is to provide a framework for making design decisions that decrease system risks.

Commentary by Dr. Valentin Fuster
2012;():1065-1076. doi:10.1115/DETC2012-70483.

Modern automotive and aerospace products are large cyber-physical systems consisting of software, mechanical, electrical and electronic components. The increasing complexity of such systems is a major concern as it impacts development time and effort, as well as initial and operational costs. Although much literature exists on complexity metrics, very little work has been done in determining whether such metrics correlate with real-world products. Aspects of complexity include the product structure, the development process and manufacturing. Since all these aspects can be uniformly represented in the form of networks, we examine common network-based complexity measures in this paper. Network metrics are grouped into three categories: size complexity, numeric complexity (degree of coupling) and technological complexity (solvability). Several empirical studies were undertaken to determine the efficacy of various metrics. One approach was to survey project engineers in an aerospace company to gauge their perception of complexity. The second was through case studies of alternative designs performing equivalent functions. The third was to look at actual time and labor data from past projects. Data structures and fast algorithms for complexity calculations for large cyber-physical systems were also implemented.
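
Network-based metrics of the three categories named above can be computed directly on an architecture graph, as sketched below. The component graph is made up and the specific metrics (average degree, density, graph energy) are common examples, not necessarily the exact set used in the paper.

```python
# Minimal sketch of network-based complexity metrics on a (made-up) product
# architecture graph: size, degree-based coupling and one structural
# indicator (graph energy from adjacency eigenvalues).
import numpy as np
import networkx as nx

# Hypothetical component-connectivity graph of a small subsystem.
edges = [("pump", "valve"), ("valve", "controller"), ("controller", "sensor"),
         ("sensor", "pump"), ("controller", "actuator"), ("actuator", "valve")]
G = nx.Graph(edges)

size_complexity = (G.number_of_nodes(), G.number_of_edges())
avg_degree = 2.0 * G.number_of_edges() / G.number_of_nodes()
density = nx.density(G)

# "Graph energy" (sum of absolute adjacency eigenvalues) is one structural
# complexity indicator used in the design-structure-matrix literature.
A = nx.to_numpy_array(G)
graph_energy = np.abs(np.linalg.eigvalsh(A)).sum()

print("size (nodes, edges):", size_complexity)
print("coupling (avg degree, density):", round(avg_degree, 2), round(density, 2))
print("structural (graph energy):", round(graph_energy, 2))
```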

Commentary by Dr. Valentin Fuster
2012;():1077-1086. doi:10.1115/DETC2012-70534.

One of the primary goals of the Adaptive Vehicle Make (AVM) program of DARPA is the construction of a model-based design flow and tool chain, META, that will provide significant productivity increase in the development of complex cyber-physical systems. In model-based design, modeling languages and their underlying semantics play fundamental role in achieving compositionality. A significant challenge in the META design flow is the heterogeneity of the design space. This challenge is compounded by the need for rapidly evolving the design flow and the suite of modeling languages supporting it. Heterogeneity of models and modeling languages is addressed by the development of a model integration language – CyPhy – supporting constructs needed for modeling the interactions among different modeling domains. CyPhy targets simplicity: only those abstractions are imported from the individual modeling domains to CyPhy that are required for expressing relationships across sub-domains. This “semantic interface” between CyPhy and the modeling domains is formally defined, evolved as needed and verified for essential properties (such as well-formedness and invariance). Due to the need for rapid evolvability, defining semantics for CyPhy is not a “one-shot” activity; updates, revisions and extensions are ongoing and their correctness has significant implications on the overall consistency of the META tool chain. The focus of this paper is the methods and tools used for this purpose: the META Semantic Backplane. The Semantic Backplane is based on a mathematical framework provided by term algebra and logics, incorporates a tool suite for specifying, validating and using formal structural and behavioral semantics of modeling languages, and includes a library of metamodels and specifications of model transformations.

Commentary by Dr. Valentin Fuster
2012;():1087-1096. doi:10.1115/DETC2012-70542.

In this paper, a model-based failure identification and propagation (MFIP) framework is introduced for early identification of potential safety issues caused by environmental disturbances and subsystem failures within a complex avionic system. The MFIP framework maps hazards and vulnerability modes to specific components in the system and analyzes failure propagation paths. Block definition diagrams (BDD) are used to represent system functional requirements by demonstrating the relationships between various requirements, their associations, generalizations, as well as dependencies. These concept models help to identify hazardous factors and the relationships through which their detrimental effects are transferred throughout the proposed system architecture. As such, the approach provides the opportunity to reduce costs associated with redesign and provides important information on design viability. Using this technique, designers can examine the impacts of environmental and subsystem risks on the overall system during the early stages of design and develop hazard mitigation strategies.

Commentary by Dr. Valentin Fuster
2012;():1097-1103. doi:10.1115/DETC2012-70710.

Design of a system starts with functional requirements and expected contexts of use. Early design sketches create a topology of components that a designer expects can satisfy the requirements. The methodology described here enables a designer to test an early design qualitatively against qualitative versions of the requirements and environment. Components can be specified with qualitative relations of the output to inputs, and one can create similar qualitative models of requirements, contexts of use and the environment. No numeric parameter values need to be specified to test a design. Our qualitative approach (QRM) simulates the behavior of the design, producing an envisionment (graph of qualitative states) that represents all qualitatively distinct behaviors of the system in the context of use. In this paper, we show how the envisionment can be used to verify the reachability of required states, to identify implicit requirements that should be made explicit, and to provide guidance for detailed design. Furthermore, we illustrate the utility of qualitative simulation in the context of a topological design space exploration tool.

Topics: Simulation , Design
Commentary by Dr. Valentin Fuster
2012;():1105-1110. doi:10.1115/DETC2012-70791.

The main goal of META design is to achieve a factor of five (5x) improvement in product development speed for cyber-electro-physical systems compared to current practice. The method claims to achieve this speedup by a combination of three main mechanisms:

1. The deliberate use of layers of abstraction. High-level functional requirements are used to explore architectures immediately rather than waiting for downstream level 2, 3, 4 … requirements to be defined.

2. The development and use of an extensive and trusted component (C2M2L) model library. Rather than designing all components from scratch, the META process allows importing component models directly from a library in order to quickly compose functional designs.

3. The ability to find emergent behaviors and problems ahead of time during virtual Verification and Validation (V&V), and to generate designs that are correct by construction, allows a more streamlined design process and avoids costly design iterations that often lead to expensive design changes.

This paper quantifies the impact of these main META mechanisms with a sophisticated System Dynamics (SD) model that allows simulating development projects over time. META compares favorably against a simulation of a traditional design process due to the generation of late engineering changes in a traditional design-build-test-redesign environment. The benchmark case analyzed in this paper contained 3,000 requirements, and the results show a dramatic improvement in project completion schedule, with a demonstrated speedup factor of 4.4 (70 months versus about 16 months). In the simulated META process we used 3 layers of abstraction, 50% novelty and a model library integrity of 80%, with 70% of problems caught early. The results were also validated against data from the B777 Electric Power System (EPS) design project at UTC, where a speedup factor of 3.8 was demonstrated. The paper contains a useful sensitivity analysis that helps establish requirements and bounds on the META process and tool-chain itself that should enable the desired 5x speedup factor.

Topics: Design
Commentary by Dr. Valentin Fuster
2012;():1111-1119. doi:10.1115/DETC2012-71051.

Automatic design verification techniques are intended to check that a particular system design meets a set of formal requirements. When the system does not meet the requirements, some verification tools can perform culprit identification to indicate which design components contributed to the failure. With non-probabilistic verification, culprit identification is straightforward: the verifier returns a counterexample trace that shows how the system can evolve to violate the desired property, and any component involved in that trace is a potential culprit. For probabilistic verification, the problem is more complicated, because no single trace constitutes a counterexample. Given a set of execution traces that collectively refute a probabilistic property, how should we interpret those traces to find which design components are primarily responsible? This paper discusses an approach to this problem based on decision-tree learning. Our solution provides rapid, scalable, and accurate diagnosis of culprits from execution traces. It rejects distractions and accurately focuses attention on the components that primarily cause a property verification to fail.
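
The diagnosis idea described above can be pictured with a small synthetic example: each execution trace is encoded by which components were active, traces are labeled by whether the property was violated, and a decision tree is fit so that the components near the top of the tree point to likely culprits. The component names and planted failure pattern are made up; this is not the authors' tool.

```python
# Minimal sketch of decision-tree-based culprit identification from
# execution traces. Synthetic data; illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(13)
components = ["sensorA", "sensorB", "busX", "ctrlY", "pumpZ"]
n_traces = 2000

# Random component-activity matrix; traces where busX and ctrlY are both
# active fail with high probability (the planted "culprits").
X = rng.integers(0, 2, size=(n_traces, len(components)))
fail_prob = np.where((X[:, 2] == 1) & (X[:, 3] == 1), 0.9, 0.05)
y = rng.random(n_traces) < fail_prob          # True = property violated

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

ranking = sorted(zip(components, tree.feature_importances_),
                 key=lambda kv: -kv[1])
print("suspected culprits:", ranking[:2])
```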

Topics: Design , Failure
Commentary by Dr. Valentin Fuster
2012;():1121-1130. doi:10.1115/DETC2012-71266.

This paper presents the design of an instruction generation system that can be used to automatically generate instructions for complex assembly operations performed by humans on factory shop floors. Multimodal information—text, graphical annotations, and 3D animations—is used to create easy-to-follow instructions. This thereby reduces learning time and eliminates the possibility of assembly errors. An automated motion planning subsystem computes a collision-free path for each part from its initial posture in a crowded scene onto its final posture in the current subassembly. Visualization of this computed motion results in generation of 3D animations. The system also consists of an automated part identification module that enables the human to identify, and pick, the correct part from a set of similar looking parts. The system’s ability to automatically translate assembly plans into instructions enables a significant reduction in the time taken to generate instructions and update them in response to design changes.

Topics: Manufacturing
Commentary by Dr. Valentin Fuster
2012;():1131-1141. doi:10.1115/DETC2012-71378.

We describe the use of the Cyber-Physical Modeling Language (CyPhyML) to support trade studies and integration activities in system-level vehicle designs. CyPhyML captures parameterized component behavior using acausal models (i.e. hybrid bond graphs and Modelica) to enable automatic composition and synthesis of simulation models for significant vehicle subsystems. Generated simulations allow us to compare performance between different design alternatives. System behavior and evaluation are specified independently from specifications for design-space alternatives. Test bench models in CyPhyML are given in terms of generic assemblies over the entire design space, so performance can be evaluated for any selected design instance once automated design space exploration is complete. Generated Simulink models are also integrated into a mobility model for interactive 3-D simulation.

Commentary by Dr. Valentin Fuster