
35th Design Automation Conference

2009;():3-12. doi:10.1115/DETC2009-86123.

In the work presented in this paper, we integrate the decisions for the interrelated sub-problems of part design or selection, machine loading, and machining optimization in a random flexible manufacturing system (FMS). The main purpose was to develop an optimization model that yields more generic and consistent decisions for the FMS and that can be practically implemented on the shop floor to help designers and other engineers in several ways, for instance, by optimizing part designs for a specific FMS. To attain these generic decisions, an integer nonlinear programming (INLP) problem was formulated and solved to maximize the FMS throughput. Based on the results, the part design or selection, machine loading, and machining optimization decisions can be made simultaneously. To gain more insight into the results and to check the validity of the model, a two-factor full factorial design was implemented for the sensitivity analysis, analysis of variance (ANOVA), and residual analysis. The computational analyses show that the tooling budget and the available processing time were both statistically significant with respect to throughput and confirmed that the model is valid, with the data normally distributed.

2009;():13-22. doi:10.1115/DETC2009-86780.

A method for automating the design of simple and compound gear trains using graph grammars is described. The resulting computational tool removes the tedium for engineering designers searching through the immense number of possible gear choices and combinations by hand. The variables that are automatically optimized by the computational tool include the gear dimensions as well as the location of the gears in space. The gear trains are optimized using a three-step process. The first step is a tree-search based on a language of gear rules which represent all possible gear train configurations. The second step optimizes the discrete values such as number of teeth through an exhaustive search of a gear catalog. The final step is a gradient-based algorithm which optimizes the non-discrete variables such as angles and lengths in the positioning of the gears. The advantage of this method is that the graph grammar allows all possible simple and compound gear trains to be included in the search space while the method of optimization ensures the optimal candidate for a given problem is chosen with the tuned parameters.
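
The second, discrete step of the process, an exhaustive search of a gear catalog, can be pictured with a few lines of code. The sketch below is only an illustration of that idea, not the authors' tool: the catalog tooth counts, the two-stage compound layout, and the target ratio are hypothetical placeholders.

```python
# Minimal sketch of an exhaustive catalog search for a two-stage compound
# gear train whose overall ratio should approximate a target value.
# Catalog tooth counts and the target ratio are hypothetical examples.
from itertools import product

catalog_teeth = [12, 16, 20, 24, 32, 40, 48, 60]   # available gears (teeth)
target_ratio = 12.0                                 # desired overall reduction

best = None
for n1, n2, n3, n4 in product(catalog_teeth, repeat=4):
    # Two meshes in series: stage ratios multiply.
    ratio = (n2 / n1) * (n4 / n3)
    error = abs(ratio - target_ratio)
    if best is None or error < best[0]:
        best = (error, ratio, (n1, n2, n3, n4))

error, ratio, teeth = best
print(f"best teeth {teeth}: ratio {ratio:.3f} (error {error:.3f})")
```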

Topics: Gears, Optimization, Trains
2009;():23-32. doi:10.1115/DETC2009-87012.

In this paper, a generalization of the Heuristic Gradient Projection method is suggested. The previous Heuristic Gradient Projection (HGP) method was developed for 3D-frame design and optimization. It mainly employed bending stress relations in order to simplify the iteration process for stress-constrained optimization. The General Heuristic Gradient Projection (GHGP) is used in a more general form to satisfy the stress constraints. Another direct search method is hybridized with it to satisfy additional constraints on deflection. Two examples are solved using the new method. The proposed method is compared with the Hybrid Fuzzy Heuristic technique (FHGP) on a MEMS resonator design. Results showed that the proposed hybrid technique with GHGP converges to the optimum solutions 8% faster, and the MEMS weight is decreased by 23.7%. At the macro level, the GHGP improved the solution time by 33.3%. The hybrid technique with GHGP also improved the stresses in the members of the optimal ten-member cantilever.

2009;():33-43. doi:10.1115/DETC2009-87072.

This paper presents an automated algorithm for design of vehicle structures for crashworthiness, based on the analyses of the structural crash mode. The crash mode (CM) is the history of the deformation of the different zones of the vehicle structure during a crash event. The algorithm emulates a process called crash mode matching where structural crashworthiness is improved by manually modifying the design until its crash mode matches the one the designers deem as optimal. Given an initial design and a desired crash mode, the algorithm iteratively finds new designs that have better crashworthiness performance via stochastic sampling of the design space. A new design is chosen per iteration as the best among the normally distributed samples near the current design. The mean and standard deviation of the normal distributions are adjusted in each iteration by examining the crash mode of the current design and applying a set of fuzzy logic rules that encapsulate elementary knowledge of the crash mode matching practice. Two case studies are presented to demonstrate the effectiveness of the proposed algorithm. The case studies examine a front half vehicle model, as well as a fully detailed vehicle model.

2009;():45-56. doi:10.1115/DETC2009-87461.

Dimensional analysis is a powerful tool commonly used to develop functional relations between the variables affecting a physical system. Known for its versatility in incorporating several different energy domains, the method has allowed for dimensional manipulation of diverse parameters. However, the dimensional combinations developed have been, for the most part, strictly confined to static and time-invariant systems. Expanding on the process, we introduce a graphical approach to dimensionally model dynamic systems for design. Continuing this extension, we present a combined graphical and topological approach to illustrate the applicability of dimensional analysis as a state equation generation tool. This tool is comparable to conventional differential element analysis in system dynamics, but provides a systematic and visual approach to design modeling. Further, this exposition acts as a learning instrument in which complex engineering equations are derived and interpreted through visual perception, similar to a block diagram or flow chart. A dynamic system, in the form of a compressed air–water rocket, is also evaluated in this paper for elucidation.

2009;():57-68. doi:10.1115/DETC2009-87482.

Complex systems need to perform in a variety of functional states and under varying operating conditions. Therefore, it is important to manage the different values of design variables associated with the operating states for each subsystem. The research presented in this paper uses multidisciplinary optimization (MDO) and changeable systems methods together in the design of a reconfigurable Unmanned Aerial Vehicle (UAV). MDO is a useful approach for designing a system that is composed of distinct disciplinary subsystems by managing the design variable coupling between the subsystem and system level optimization problems. Changeable design research addresses how changes in the physical configuration of products and systems can better meet distinct needs of different operating states. As a step towards the development of a realistic reconfigurable UAV optimization problem, this paper focuses on the performance advantage of using a changeable airfoil subsystem. Design principles from transformational design methods are used to develop concepts that determine how the design variables are allowed to change in the mathematical optimization problem. The performance of two changeable airfoil concepts is compared to a fixed airfoil design over two different missions that are defined by a sequence of mission segments. Determining the configurations of the static and changeable airfoils is accomplished using a genetic algorithm. Results from this study show that aircraft with changeable airfoils attain increased performance, and that the manner by which the system transforms is significant. For this reason, the changeable airfoil optimization developed in this paper is ready to be integrated into a complete MDO problem for the design of a reconfigurable UAV.

2009;():69-77. doi:10.1115/DETC2009-87499.

This paper addresses the design optimization of a special class of steel structures: clear-span buildings built up from off-the-shelf standard steel sections. The problem is of particular importance in small- to medium-span buildings due to an attractive opportunity for reducing the manufacturing cost compared to trusses and custom-built beams. The problem is also difficult from an optimization perspective, as it exhibits both continuous and discrete variables, as well as discontinuities and flat regions in the topology of the objective function. Genetic algorithms (GA) and a special stochastic sampling technique are considered for the problem, as well as a mixed GA and stochastic sampling approach. The stochastic sampling is guided via heuristic rules based on problem-specific knowledge, and is thus well suited to the optimization task. While all the tested algorithms produced satisfactory results, the mixed approach seemed to yield the most consistent performance.

2009;():79-90. doi:10.1115/DETC2009-87522.

Modeling, simulation, and optimization play vital roles throughout the engineering design process; however, in many design disciplines the cost of simulation is high, and designers are faced with a tradeoff between the number of alternatives that can be evaluated and the accuracy with which they are evaluated. In this paper, a methodology is presented for using models of various levels of fidelity during the optimization process. The intent is to use inexpensive, low-fidelity models with limited accuracy to recognize poor design alternatives and reserve the high-fidelity, accurate, but also expensive models only to characterize the best alternatives. Specifically, by setting a user-defined performance threshold, the optimizer can explore the design space using a low-fidelity model by default, and switch to a higher fidelity model only if the performance threshold is attained. In this manner, the high fidelity model is used only to discern the best solution from the set of good solutions, so computational resources are conserved until the optimizer is close to the solution. This makes the optimization process more efficient without sacrificing the quality of the solution. The method is illustrated by optimizing the trajectory of a hydraulic backhoe. To characterize the robustness and efficiency of the method, a design space exploration is performed using both the low and high fidelity models, and the optimization problem is solved multiple times using the variable fidelity framework.
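
The switching logic described above can be summarized as a thin wrapper around the two models. The sketch below is a generic illustration, not the authors' backhoe framework: the low- and high-fidelity model functions, the threshold value, and the use of a SciPy scalar optimizer are all assumptions made for the example.

```python
# Sketch of a variable-fidelity objective wrapper: a cheap model screens
# candidates, and the expensive model is invoked only for designs whose
# low-fidelity performance beats a user-defined threshold.
def low_fidelity(x):          # placeholder cheap, approximate model
    return (x - 1.0) ** 2

def high_fidelity(x):         # placeholder expensive, accurate model
    return (x - 1.0) ** 2 + 0.05 * (x ** 3 - x)

THRESHOLD = 0.5               # user-defined performance threshold

def variable_fidelity_objective(x):
    f_low = low_fidelity(x)
    if f_low > THRESHOLD:     # poor design: low-fidelity answer is enough
        return f_low
    return high_fidelity(x)   # promising design: pay for the accurate model

# Any standard optimizer can now minimize the wrapper, spending
# high-fidelity evaluations only near good solutions.
if __name__ == "__main__":
    from scipy.optimize import minimize_scalar
    result = minimize_scalar(variable_fidelity_objective, bounds=(-3, 3),
                             method="bounded")
    print(result.x, result.fun)
```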

2009;():91-100. doi:10.1115/DETC2009-86887.

A polytope-based representation is presented to approximate the feasible space of a design concept that is described mathematically using constraints. A method for constructing such design spaces is also introduced. Constraints include equality and inequality relationships between design variables and performance parameters. The design space is represented as a finite set of (at most) 3-dimensional (possibly non-convex) polytopes, i.e., points, intervals, polygons (both open and closed), and polyhedra (both open and closed). These polytopes approximate the locally connected design space around an initial feasible point. The algorithm for constructing the design space is developed by adapting a consistency algorithm to polytope representations.

Topics: Space, Design
2009;():101-110. doi:10.1115/DETC2009-86899.

Modularization of parts — a fairly recent trend in product development — facilitates part definitions in a standardized, machine-readable form, so that we can define a part based on its input(s), output(s), features, and geometric information. Standardizing part definitions will enable manufacturing companies to more easily identify part suppliers in global, virtual environments. This standard representation of parts also facilitates modular product design during parametric design. We will show that this problem of modular product design can be formulated as an AI Planning problem, and we propose a solution framework to support modular product design. Using part specification information for personal computers, we demonstrate the proposed framework and discuss its implications for global manufacturing.

2009;():111-121. doi:10.1115/DETC2009-86935.

In today’s economy, engineering companies strive to reduce product development time and costs. One approach to assisting this goal is to introduce computer-aided methods and tools earlier in the development process. This requires providing robust design automation methods and tools that can support design synthesis and the generation of alternative design configurations, in addition to automated geometric design. A new method for automated gearbox design, tailored for integration within an existing commercial gearbox analysis tool, is described in this paper. The method combines a rule-based generative approach, based on a previous parallel grammar approach for mechanical gear systems, with domain specific heuristics and stochastic search using simulated annealing. Given design specifications that include a bounding box, the number of required speeds and their target ratios, a range of valid gearbox configurations is generated from a minimal initial configuration. Initial test results show that this new method is able to generate a variety of designs which meet the design specifications. The paper concludes with a discussion of the method’s current limitations and a description of the work currently underway to improve and extend its capabilities.

2009;():123-130. doi:10.1115/DETC2009-87448.

Control tasks involving dramatic non-linearities, such as decision making, can be challenging for classical design methods. However, autonomous stochastic design methods have proved effective. In particular, Genetic Algorithms (GA) that create phenotypes by the application of genotypes comprising rules are robust and highly scalable. Such encodings are useful for complex applications such as artificial neural net design. This paper outlines an evolutionary algorithm that creates C++ programs which in turn create Artificial Neural Networks (ANNs) that can functionally perform as an exclusive-OR logic gate. Furthermore, the GAs are able to create scalable ANNs robust enough to feature redundancies that allow the network to function despite internal failures.
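
For reference, the target behavior of the evolved networks can be shown with a tiny hand-wired feed-forward network that functionally performs as an XOR gate. The weights below are set by hand purely for illustration (the paper evolves C++ programs that construct such networks); the architecture and thresholds are assumptions of this sketch.

```python
import numpy as np

# Hand-wired feed-forward network that functions as an XOR gate: hidden unit
# h1 computes OR, h2 computes AND, and the output fires when OR is true but
# AND is false. Weights are fixed by hand purely for illustration.
def step(v):
    return (v > 0).astype(int)

W_hidden = np.array([[1.0, 1.0],    # h1 = step(x1 + x2 - 0.5)  -> OR
                     [1.0, 1.0]])   # h2 = step(x1 + x2 - 1.5)  -> AND
b_hidden = np.array([-0.5, -1.5])
W_out = np.array([1.0, -1.0])       # y  = step(h1 - h2 - 0.5)  -> XOR
b_out = -0.5

def xor_net(x):
    h = step(x @ W_hidden + b_hidden)
    return step(h @ W_out + b_out)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, int(xor_net(np.array(x, dtype=float))))
```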

2009;():131-140. doi:10.1115/DETC2009-86457.

Early evaluation of a proposed function structure for a product, and the possibility of exposing the potential failures related to it, presuppose that the design process can be modeled in its entirety. However, no suitable models have so far existed for the early phase of the design process. This article presents an integrated approach aimed at exploring the behaviors of concept designs in the early design phase. The approach is founded on a combination of Petri nets, π-numbers, qualitative physics principles, and the Design Structure Matrix. The final aim is to implement this method in the SysML modeling language in order to integrate a simulation approach that is not initially standardized in the language. A second interest of the approach is to provide a coherent simulation framework that can be used as a reference to verify the coherency of other simulation models later in the design process.

2009;():141-150. doi:10.1115/DETC2009-86470.

Modularity indicates a one-to-one mapping between functional concepts and physical components, and it allows more product varieties to be generated at lower cost. Functional concepts can be described by precise syntactic structures with functional terms, and different semantic measures can be used to evaluate the strength of the semantic link between two functional concepts from a port ontology. In this paper, different ontology-based methods of modularity are first investigated. Secondly, the primitive concepts are presented based on the port ontology using natural language, and their semantic synthesis is then used to describe the component ontology. A taxonomy of the port-based ontology is built to map the component connections and interactions in order to build functional blocks. Next, an approach is proposed for computing semantic similarity by mapping terms to the functional ontology and by examining their relationships based on the port ontology language. Furthermore, several modules are partitioned on the basis of the similarity measures. The process of module construction is described, and its elements are related to the similarity values between concepts. Finally, a case study shows the effectiveness of port-ontology semantic similarity for modular concept generation.

Topics: Ontologies
2009;():151-161. doi:10.1115/DETC2009-86496.

Many high fidelity analysis tools, including finite-element analysis and computational fluid dynamics, have become an integral part of the design process. However, these tools were developed for detailed design and are inadequate for conceptual design due to their complexity and turnaround time. With the development of more complex technologies and systems, decisions made earlier in the design process have become crucial to product success. Therefore, one possible alternative to high fidelity analysis tools for conceptual design is metamodeling. Metamodels generated from high fidelity analysis datasets of previous design iterations show great potential to represent the overall trends of the dataset. To determine which metamodeling techniques are best suited to handle high fidelity datasets for conceptual design, an implementation scheme incorporating Polynomial Response Surface (PRS) methods, Kriging approximations, and Radial Basis Function Neural Networks (RBFNN) was developed. This paper presents the development of a conceptual design metamodeling strategy. Initially, high fidelity legacy datasets were generated from FEA simulations. Metamodels were then built upon the legacy datasets. Finally, metamodel performance was evaluated based upon several dataset conditions, including various sample sizes, dataset linearity, interpolation within a domain, and extrapolation outside a domain.
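
A minimal sketch of the three metamodel families is given below using off-the-shelf stand-ins: a second-order polynomial response surface via scikit-learn, Gaussian process regression as a Kriging-style approximation, and SciPy's RBFInterpolator in place of a trained RBF network (it is an interpolant, not a neural network). The synthetic test function and sample sizes are hypothetical substitutes for the FEA legacy data, and a recent SciPy (1.7+) is assumed for RBFInterpolator.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic "legacy" dataset standing in for FEA results.
X = rng.uniform(-2.0, 2.0, size=(40, 2))
y = X[:, 0] ** 2 + np.sin(3.0 * X[:, 1])

X_test = rng.uniform(-2.0, 2.0, size=(200, 2))
y_test = X_test[:, 0] ** 2 + np.sin(3.0 * X_test[:, 1])

# Polynomial response surface (second-order PRS).
prs = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

# Kriging-style approximation (Gaussian process regression).
krig = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                normalize_y=True).fit(X, y)

# Radial basis function interpolant (stand-in for an RBF network).
rbf = RBFInterpolator(X, y)

def rmse(pred):
    return np.sqrt(np.mean((pred - y_test) ** 2))

print("PRS     RMSE:", rmse(prs.predict(X_test)))
print("Kriging RMSE:", rmse(krig.predict(X_test)))
print("RBF     RMSE:", rmse(rbf(X_test)))
```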

2009;():163-173. doi:10.1115/DETC2009-86503.

Developing new products of high quality and low unit cost while shortening the lead time to market is a key requirement for enterprises to obtain competitive advantages. In order to improve creativity and shorten the lead time to market, a methodology for the automatic virtual entity simulation of conceptual design results is proposed. At the end of conceptual design, the results are expressed as symbolic schemes generated by a computerized approach with a high capability for obtaining innovative conceptual designs. The symbolic scheme is then decomposed into basic mechanisms and their connections. For the identified basic mechanisms, kinematic analysis is carried out by matching basic Barranov trusses, and their virtual entities are modeled using a feature-based technique and encapsulated as a single design object. Based on the structures of the basic mechanisms and their connections, a spatial layout of the mechanical system corresponding to the symbolic scheme is then produced. With the pre-assembly approach, all parts of the mechanical system are placed in positions where the constraint equations are satisfied. In this way, the virtual entity assembly model of the mechanical system corresponding to the symbolic scheme is set up. By continually changing the positions of the driving links, the virtual entity simulation of the mechanical system is carried out. As a result, with the aid of this approach, we can not only obtain innovative conceptual design results with excellent performance, but also shorten the design time and reduce the cost of product development.

2009;():175-187. doi:10.1115/DETC2009-87120.

Sensitivity analyses are frequently used during the design process of engineering systems to qualify and quantify the effect of parametric variation on the performance of a system. Two primary types of sensitivity analyses are generally used: local and global. Local analyses, generally involving derivative-based measures, have a significantly lower computational burden than global analyses but only provide measures of sensitivity around a nominal point. Global analyses, generally performed with a Monte Carlo sampling approach and variation-based measures, provide a complete description of a concept’s sensitivity but incur a large computational burden and require significantly more information regarding the distributions of the design parameters in a concept. Local analyses are generally suited to the early stages of design when parametric information is limited and a large number of concepts must be considered (necessitating a light computational burden). Global analyses are more suited towards the later stages of design when more information regarding parametric distributions is available and fewer concepts are being considered. Current derivative-based local approaches provide a significantly different set of measures than a global variation-based analysis. This makes a direct comparison of local to global measures impossible. To reconcile local and global sensitivity analyses, a hybrid local variation based sensitivity (HyVar) approach is presented. This approach has a similar computational burden to a local approach but produces measures in the same format as a global variation-based approach (contribution percentages). This HyVar sensitivity analysis is developed in the context of a functionality-based design and behavioral modeling framework. An example application of the method is presented along with a summary of results produced from a more comprehensive example.
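
The HyVar measures themselves are defined in the paper; purely to make the idea of recasting derivative-based local sensitivities as variation-style contribution percentages concrete, the sketch below applies first-order propagation with assumed parameter standard deviations. The model, nominal point, and deviations are hypothetical, and this is not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch: turn derivative-based local sensitivities into
# contribution percentages by weighting each partial derivative with an
# assumed parameter standard deviation (first-order propagation only).
def performance(x):
    # placeholder behavioral model: y = f(x1, x2, x3)
    return x[0] ** 2 + 3.0 * x[1] + 0.1 * np.sin(x[2])

x0 = np.array([1.0, 2.0, 0.5])          # nominal design point
sigma = np.array([0.05, 0.10, 0.20])    # assumed parameter std. deviations
h = 1e-6

# Finite-difference partial derivatives at the nominal point.
grad = np.array([
    (performance(x0 + h * e) - performance(x0 - h * e)) / (2 * h)
    for e in np.eye(len(x0))
])

# First-order variance contribution of each parameter and its percentage.
var_terms = (grad * sigma) ** 2
contribution_pct = 100.0 * var_terms / var_terms.sum()

for i, pct in enumerate(contribution_pct):
    print(f"x{i + 1}: {pct:.1f}% of first-order output variance")
```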

2009;():189-202. doi:10.1115/DETC2009-87311.

A key challenge facing designers creating innovative products is concept generation in conceptual design. Conceptual design can be more effective when the design space is broad and when the process is accelerated by including problem solving and solution triggering tools in its structure. The design space can be broadened by using an integrated design of product and material concepts approach. In this approach, structured analogy is used to transfer underlying principles from a solution suitable in one domain (i.e., the product or mechanical domain) to an analogous solution in another domain (i.e., the materials domain). The nature of design analogy does not require as full an exploration of the target domain as would otherwise be necessary, affording the possibility of more rapid development. The addition of problem solving and solution triggering tools also decreases the design time and/or improves the quality of the final solution. This is realized through a combination of the Theory of Inventive Problem Solving (TRIZ) proposed by Altshuller and the systematic approach of Pahl and Beitz, for products that are jointly considered at the material and product level. These are problems in which a designer seeks to fulfill performance requirements placed on the product through both the product and the designed material. In this method, the systematic approach of Pahl and Beitz is used as the base method, and TRIZ is used as a means of transferring abstract information about the design problem between the domains with the aim of accelerating the conceptual design process. This approach also allows cross-approach tools, such as S-Field-Model-CAD integration with design repositories, to be used to transfer information at different levels of abstraction, expanding the design space and effectively directing the designer. The approach is explained through a very simple example of a spring design improvement.

2009;():203-227. doi:10.1115/DETC2009-87420.

Transforming products, or more generally transformers, are devices that change state in order to facilitate new, or enhance an existing, functionality. Mechanical transformers relate to products that reconfigure and can be advantageous by providing multiple functions, while often conserving space. A basic example is a foldable chair that can be stowed when not in use, but provides ergonomic and structural seating when deployed. Utilizing transformation can also lead to novel designs that combine functions across domains, such as an amphibious vehicle that provides both terrestrial and aquatic transportation. In order to harness these assets of transformation, the Transformational Design Theory [1] was developed. This theory outlines a set of principles and facilitators that describe and embody transformation for the purpose of systematically assisting the design of transformers. To build on this theory, this paper analyzes a repository of popular transformer toys. Transformer toys are chosen for this study because of their richness in displaying a variety of kinematic aspects of transformation. Through this process, new definitions to describe transformation are garnered and a set of guidelines are developed to further aid designers. The empirical data set of transformer toys is rich in information and provides a basis for application to other fields, such as robotics and consumer products. These insights, in conjunction with the use of storyboarding, create a new method of designing transformers. This paper presents the method and concludes with a validation exercise in the creation of a new transformer toy.

2009;():229-237. doi:10.1115/DETC2009-87474.

Six functions are identified as the most critical (“core” functions) to transportation vehicle energy systems. These selections are validated through analysis of 25 function structures as well as observations of a number of existing energy systems. Identifying which of the core functions and which of the energy types are involved in a given energy system is the Core-Function Modeling strategy (CFM strategy). These functions and energy types (the framework of CFM) are used to categorize approximately fifty processes and devices. This list is the Energy Morph Matrix (EMM). An experiment is performed that demonstrates how the EMM can be used as an aid to the concept generation process. The EMM also adapts well to a more automated approach for designing energy systems when used in combination with a search algorithm to identify chains of energy components. By incorporating a metric such as system efficiency or energy density into the search and computing this metric for each chain of energy components, these chains can be ranked and leading candidates can be highlighted for further analysis.
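
The search over chains of energy components can be illustrated with a small depth-first enumeration that matches input and output energy types and ranks chains by the product of component efficiencies. The component table and efficiency values below are hypothetical stand-ins for the Energy Morph Matrix entries.

```python
# Sketch of searching for chains of energy-conversion components that link a
# source energy type to a desired output type, ranked by overall efficiency.
# The component list and efficiencies are hypothetical stand-ins for the EMM.
components = [
    ("combustion engine", "chemical", "rotational", 0.30),
    ("fuel cell",         "chemical", "electrical", 0.50),
    ("electric motor",    "electrical", "rotational", 0.90),
    ("generator",         "rotational", "electrical", 0.85),
    ("battery",           "electrical", "electrical", 0.95),
]

def chains(src, dst, max_len=3, used=()):
    """Depth-first enumeration of component chains from src to dst energy."""
    for name, e_in, e_out, eff in components:
        if e_in != src or name in used:
            continue
        if e_out == dst:
            yield [(name, eff)]
        if len(used) + 1 < max_len:
            for tail in chains(e_out, dst, max_len, used + (name,)):
                yield [(name, eff)] + tail

def overall_efficiency(chain):
    eff = 1.0
    for _, component_eff in chain:
        eff *= component_eff
    return eff

ranked = sorted(chains("chemical", "rotational"), key=overall_efficiency,
                reverse=True)
for chain in ranked:
    names = " -> ".join(name for name, _ in chain)
    print(f"{overall_efficiency(chain):.3f}  {names}")
```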

2009;():239-248. doi:10.1115/DETC2009-87603.

This article presents a generic method to solve the 2D multiobjective placement problem for free-form components. The proposed method is a relaxed placement technique combined with a hybrid algorithm based on a genetic algorithm and a separation algorithm. The genetic algorithm is used as a global optimizer and is in charge of efficiently exploring the search space. The separation algorithm is used to legalize solutions proposed by the global optimizer, so that placement constraints are satisfied. A test case illustrates the application of the proposed method. Extensions for solving the 3D problem are given at the end of the article.

Topics: Algorithms
2009;():249-260. doi:10.1115/DETC2009-87776.

This article focuses on a key phase of conceptual design: the synthesis of structural concepts of solution. Several authors have described this phase of engineering design; the Function-Behavior-Structure (FBS) model is one of them. This study is based on the combined use of a modified version of Gero’s FBS model and the latest developments in modeling languages for systems engineering. The Systems Modeling Language (SysML) is a general-purpose graphical modeling language for specifying, analyzing, designing, and verifying complex systems. Our development shows how the SysML diagram types match our updated vision of the FBS model of conceptual design. The objective of this paper is to present the possibility of using artificial intelligence tools as members of the design team to support the synthesis process. The common point of the expert systems developed during the last decades for the synthesis of conceptual solutions is that their knowledge bases were application dependent. Recent research in the field of ontology has shown the possibility of building knowledge representations in a reusable and shareable manner. This allows the construction of knowledge representations for engineering in a more generic manner and the dynamic mapping of ontology layers. We present here how processing on ontologies allows the synthesis of conceptual solutions.

2009;():261-272. doi:10.1115/DETC2009-86928.

Successful optimization of product designs calls for the continuous evolution of optimized design solutions, which is best achieved by collaboration among a group of experts who understand the intricacies of the product’s characteristics. The achievement of successful collaborations depends on optimization methodologies that focus on design characteristics located at deeper levels of hierarchically decomposed design problems, and on the construction of optimization scenarios that have the explicit goal of maximizing the expected profits resulting from the collaboration. This paper proposes methodologies and procedures based on hierarchical optimization that aim to effectively conduct collaborative product design optimizations. The proposed methodologies are applied to a machine product design, and their effectiveness is demonstrated.

2009;():273-281. doi:10.1115/DETC2009-87225.

This paper explores the effect of reward interdependence of strategies in a cooperative evolving team on the performance of the team. Experiments extending the Evolutionary Multi-Agent Systems (EMAS) framework to three dimensional layout are designed which examine the effect of rewarding helpful, in addition to effective strategies on the convergence of the system. Analysis of communication within the system suggests that some agents (strategies) are more effective at creating helpful solutions than creating good solutions. Despite their potential impact as enablers for other strategies, when their efforts were not rewarded, these assistant agent types were quickly removed from the population. When reward was interdependent, however, this secondary group of helpful agents remained in the population longer. As a result, effective communication channels remained open and the system converged more quickly. The results support conclusions of organizational behavior experimentation and computational modeling. The implications of this study are twofold. First, computational design teams may be made more effective by recognizing and rewarding indirect contributions of some strategies to the success of others. Secondly, EMAS may provide a platform for predicting the effectiveness of different reward structures given a set of strategies in both human and computational teams.

2009;():283-302. doi:10.1115/DETC2009-87327.

An information system that works in one application and environment may not work in another. Successful adoption of information systems requires that the organization evaluate candidates to ensure that they satisfy intended goals and consider the backgrounds and capabilities of the users. This paper describes an approach for evaluating and implementing information systems that satisfy technical requirements and organizational goals. Integral to this approach is the use of an assessment instrument consisting of objective-driven rubrics that are redundant to ensure consistency. This approach is applied in an NSF-supported CI-TEAM project to evaluate candidate systems to support an online cyber-collaboratory to enhance product dissection and reverse engineering activities in the classroom and to suggest improvements for the next generation system.

2009;():303-313. doi:10.1115/DETC2009-87541.

A set-based approach to collaborative design is presented, in which Bayesian networks are used to represent promising regions of the design space. In collaborative design exploration, complex multilevel design problems are often decomposed into distributed subproblems that are linked by shared or coupled parameters. Collaborating designers often prefer conflicting values for these coupled parameters, resulting in incompatibilities that require substantial iteration to resolve, extending the design process lead time without guarantee of achieving a good design. In the proposed approach to collaborative design, each designer builds a locally developed Bayesian network that represents regions of interest in his design space. Then, these local networks are shared and combined with those of collaborating designers to promote more efficient local design space search that takes into account the interests of one’s collaborators. The proposed method has the potential to capture a designer’s preferences for arbitrarily shaped and potentially disconnected regions of the design space in order to identify compatible or conflicting preferences between collaborators and to facilitate a compromise if necessary. It also sets the stage for a flexible and concurrent design process with varying degrees of designer involvement that can support different designer strategies such as hill-climbing or region identification. The potential benefits are the capture of expert knowledge for future use as well as conflict identification and resolution. This paper presents an overview of the proposed method as well as an example implementation for the design of an unmanned aerial vehicle.

Topics: Design, Networks
2009;():315-327. doi:10.1115/DETC2009-86462.

This paper presents a method for assessing the quality of progressive design processes that seek to maximize the profitability of the product that is being designed. The proposed approach uses separations, a type of problem decomposition, to model progressive design processes. The subproblems in the separations correspond roughly to phases in the progressive design processes. We simulate the choices of a bounded rational designer for each subproblem using different search algorithms. We consider different types and versions of these search processes in order to determine if the results are robust to the decision-making model. We use a simple two-variable problem to help describe the approach and then apply the approach to assess motor design processes. Methods for assessing the quality of engineering design processes can be used to guide improvements to engineering design processes and generate more valuable products.

2009;():329-335. doi:10.1115/DETC2009-87578.

Engineering designs are often determined by functional considerations, yet modeled with purely geometric parameters. Because of this, a difficult part of a design engineer’s job is tracking how changes to the geometric model might alter the functional performance of the design. This paper proposes a design interface that uses temporary functional views of geometric models to augment design engineers, helping them explore design space while continuously apprising them of the implications that modifications have on a design. The basic approach of the proposed interface is to use fast, interactive analysis tools in combination with feedback mechanisms to create temporary, functional design handles on top of the underlying geometric parametric structure. This design exploration tool is implemented in a research CAD system and demonstrated on illustrative examples.

2009;():337-349. doi:10.1115/DETC2009-87600.

Early stages of engineering design processes are characterized by high levels of uncertainty due to incomplete knowledge. As the design progresses, additional information is externally added or internally generated within the design process. As a result, the design solution becomes increasingly well-defined and the uncertainty of the problem reduces, diminishing to zero at the end of the process when the design is fully defined. In this research a measure of uncertainty is proposed for a class of engineering design problems called discrete design problems. Previously, three components of complexity in engineering design, namely, size, coupling and solvability, were identified. In this research uncertainty is measured in terms of the number of design variables (size) and the dependency between the variables (coupling). The solvability of each variable is assumed to be uniform for the sake of simplicity. The dependency between two variables is measured as the effect of a decision made on one variable on the solution options available to the other variable. A measure of uncertainty is developed based on this premise, and applied to an example problem to monitor uncertainty reduction through the design process. Results are used to identify and compare three task-sequencing strategies in engineering design.
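
The paper develops its own measure; the sketch below only illustrates the underlying premise that uncertainty falls as variables are decided and coupling prunes the options available to other variables. The option counts, the coupling matrix, and the log2-based tally are hypothetical choices for this illustration, not the measure proposed in the paper.

```python
import numpy as np

# Illustrative sketch of an uncertainty measure for a discrete design problem:
# each undecided variable contributes log2(number of remaining options), and a
# coupling matrix records the fraction of a variable's options pruned when a
# coupled variable is decided. All values are hypothetical.
options = np.array([8, 4, 6, 3], dtype=float)   # options per design variable

# coupling[i, j]: fraction of variable j's options removed once i is decided.
coupling = np.array([
    [0.0, 0.5, 0.2, 0.0],
    [0.0, 0.0, 0.3, 0.0],
    [0.0, 0.0, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0],
])

def uncertainty(remaining):
    """Total uncertainty: sum of log2(options) over undecided variables."""
    return float(np.sum(np.log2(np.maximum(remaining, 1.0))))

remaining = options.copy()
print("initial uncertainty:", round(uncertainty(remaining), 2), "bits")

# Decide variables in a given sequence and watch uncertainty fall to zero.
for i in [0, 2, 1, 3]:
    remaining *= (1.0 - coupling[i])   # coupled variables lose options
    remaining[i] = 1.0                 # variable i is now fixed (one option)
    print(f"after deciding x{i + 1}: {uncertainty(remaining):.2f} bits")
```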

Topics: Design
2009;():351-359. doi:10.1115/DETC2009-87688.

A supply chain connects product suppliers, manufacturers, and customers with the goal of managerial efficiency, while product design emphasizes the engineering efficiency of a product. Both supply chain management and product design have been drawing attention from numerous researchers; however, there has been only limited research on the integration of product design and supply chain. Despite this, there is significant potential for synergy in integrating engineering, supply chain management, and managerial concepts into product design. In this paper, we present a methodology to form this synergistic connection. The methodology first generates the functional requirements of a product. A design repository is then utilized to synthesize potential components for all sub-functions, providing multiple options for the potential conceptual designs. These concepts are screened using a Design for Assembly (DfA) index and then a Design for Supply Chain (DfSC) index to select the best concept. An example from the bicycle industry is presented to demonstrate the benefit of supply chain considerations at the conceptual design phase.

2009;():361-370. doi:10.1115/DETC2009-87026.

Choice models play a critical role in enterprise-driven design by providing a link between engineering design attributes and customer preferences. In our previous work, we introduced the hierarchical choice modeling approach to address the special needs in complex engineering system design. The hierarchical choice modeling approach utilizes multiple model levels to create a link between qualitative attributes considered by consumers when selecting a product and quantitative attributes used for engineering design. In this work, the approach is expanded to the Bayesian Hierarchical Choice Modeling (BHCM) framework, estimated using an All-at-Once (AAO) solution procedure. This new framework addresses the shortcomings of the previous method while providing a highly flexible modeling approach to address the needs of complex system design. In this framework, both systematic and random consumer heterogeneity is explicitly considered, the ability to combine multiple sources of data for model estimation and updating is significantly expanded, and a method to mitigate error propagated throughout the model hierarchy is developed. In addition to developing the new choice model approach, the importance of including a complete representation of consumer heterogeneity in the model framework is demonstrated. The new modeling framework is validated using several metrics and techniques. The benefits of the BHCM method are demonstrated in the design of an automobile occupant package.

2009;():371-383. doi:10.1115/DETC2009-87049.

This paper presents a comparative study of choice modeling and classification techniques that are currently being employed in the engineering design community to understand customer purchasing behavior. An in-depth comparison of two similar but distinctive techniques — the Discrete Choice Analysis (DCA) model and the C4.5 Decision Tree (DT) classification model — is performed, highlighting the strengths and limitations of each approach in relation to modeling customer choice preferences. A vehicle data set from a well established data repository is used to evaluate each model based on several performance criteria: how the models differ in making predictions/classifications, computational complexity (challenges of model generation), ease of model interpretation, robustness of the model with regard to sensitivity analysis, and scale/size of data. The results reveal that both the Discrete Choice Analysis model and the C4.5 Decision Tree classification model can be used at different stages of product design and development to understand and model customer interests and choice behavior. We believe, however, that the C4.5 Decision Tree may be better suited to predicting attribute relevance in relation to classifying choice patterns, while the Discrete Choice Analysis model is better suited to quantifying the choice share of each customer choice alternative.
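
The classification side of the comparison can be illustrated with a decision tree on a toy choice table. Note that scikit-learn implements CART rather than C4.5, so the sketch below is only a closely related stand-in, and the vehicle attributes and labels are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Tiny hypothetical dataset: [price ($1000s), fuel economy (mpg), horsepower]
# with a binary label indicating whether the customer chose the vehicle.
X = np.array([
    [18, 35, 130], [22, 35, 140], [30, 28, 220], [27, 30, 200],
    [35, 22, 300], [40, 20, 320], [20, 35, 150], [33, 25, 250],
])
y = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # 1 = chosen, 0 = not chosen

# CART decision tree (scikit-learn's stand-in for C4.5) as a choice classifier.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree, feature_names=["price", "mpg", "hp"]))
print("predicted choice for a 25k, 32 mpg, 180 hp car:",
      tree.predict([[25, 32, 180]])[0])
```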

2009;():385-395. doi:10.1115/DETC2009-87052.

Choice modeling is critical for assessing customer preferences as a function of product design attributes and customer profile information. Previous works have focused upon the use of survey data in which respondents are presented with a set of simulated product options from which they make a choice. However, such data does not represent real purchase behavior and these surveys require significant time and additional cost to administer. For these reasons, an approach to estimate a choice model using widely available customer satisfaction survey data for actual purchases is developed. Through a close examination of customer satisfaction survey data, several key characteristics are identified, including the lack of defined choice sets and missing choice attributes, the use of subjective measures such as ratings by customers to describe product attributes, multiple collinearity among many of the product attributes, and potentially insufficient attribute variation in the product designs evaluated by the respondents in the survey. A mixed logit based choice modeling procedure is developed in this paper to incorporate the use of both survey ratings as subjective measures and engineering attributes as quantitative measures in the model utility function. In order to accurately reflect choice behavior in actual market conditions, heterogeneity in customer preference is explicitly considered in the demand model. A case study using the Vehicle Quality Survey data acquired from J.D. Power and Associates demonstrates many of the key features of the proposed approach. The estimation results show the mixed logit model to be successful in modeling customer choices at the individual level, demonstrating the potential of being integrated with engineering models for engineering design.
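
The core mechanic of a mixed logit, averaging logit choice probabilities over random draws of an individual-specific taste coefficient, can be shown in a few lines. The alternatives, attributes, and coefficient values below are hypothetical placeholders, not the survey data used in the paper.

```python
import numpy as np

# Mixed logit mechanic: choice probabilities are averaged over random draws
# of an individual-specific taste coefficient. All numbers are hypothetical.
rng = np.random.default_rng(1)

# Three alternatives described by [price ($1000s), quality rating (1-10)].
alternatives = np.array([
    [20.0, 6.0],
    [25.0, 8.0],
    [30.0, 9.0],
])

beta_price = -0.15                               # fixed price coefficient
beta_quality_mean, beta_quality_sd = 0.8, 0.3    # random taste for quality

n_draws = 5000
draws = rng.normal(beta_quality_mean, beta_quality_sd, size=n_draws)

probs = np.zeros(len(alternatives))
for b_q in draws:
    utility = beta_price * alternatives[:, 0] + b_q * alternatives[:, 1]
    expu = np.exp(utility - utility.max())       # numerically stable logit
    probs += expu / expu.sum()
probs /= n_draws

for i, p in enumerate(probs):
    print(f"alternative {i + 1}: simulated choice probability {p:.3f}")
```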

2009;():397-406. doi:10.1115/DETC2009-87165.

This paper articulates some of the challenges for what has been an implicit goal of design for market systems research: to predict demand for differentiated products so that counterfactual experiments can be performed based on changes to the product design (i.e., its attributes). We present a set of methods for examining econometric models of consumer demand for their suitability in product design studies. We use these methods to test the hypothesis that automotive demand models allowing for nonlinear horizontal differentiation perform better than the conventional functional forms, which emphasize vertical differentiation. We estimate these two forms of consumer demand in the new-vehicle automotive market, and find that using an ideal-point model of size preference rather than a monotonic model yields comparable model fit but different attribute substitution patterns. The generality of the evaluation methods and the range of demand model issues to be explored in future research are highlighted.

Topics: Design
2009;():407-418. doi:10.1115/DETC2009-87534.

Accurately capturing the future demand for a given product is a hard task in today’s new product development initiatives. As customers become more market-savvy and markets continue to fragment, current demand models could greatly benefit from exploiting the rich contextual information that exists in customers’ product usage. As such, we propose a Usage Coverage Model (UCM) as a more thorough means to quantify and capture customer demand by utilizing factors of usage context in order to inform an integrated engineering design and choice modeling approach. We start by presenting the principles of the UCM: terms, definitions, variable classes, and relation classes, so as to obtain a common usage language. The usage model exhibits the ability to differentiate between individuals’ product performance experiences. With Discrete Choice Analysis, an individual’s performance with a given product is compared against that of competitive products, capturing individual customers’ choice behavior and thereby creating an effective model of product demand. As a demonstration of our methods, we apply the model in a case study concerning the general task of cutting a wood board with a jigsaw. We conclude by presenting the scope of future work for the case study and the contribution of the current and future work to the field as a whole.

Topics: Modeling
2009;():419-431. doi:10.1115/DETC2009-87751.

Model fusion of results from disparate survey methodologies is a topic of current interest in both research and practice. Much of this interest has centered on the enrichment of stated-preference results with revealed-preference data, or vice versa, as it is considered that stated preference methods provide more robust trade-off information while revealed preference methods give better information about market equilibria. The motivation for this paper originates in the automotive industry, and is distinct in that it focuses on the reuse of existing data. Practitioners wish to glean as much information as possible from a large body of existing market research data, which may include minimally overlapping datasets and widely varying survey types. In particular, they wish to combine results from different types of stated preference methods. This paper presents two advancements in model fusion. One is a method for reducing data gathered in open-ended methods such as van Westendorp studies to a form amenable to analysis by multinomial logit, thus enabling the comparison of open-ended data to conjoint data on overlapping data sets. The other is a new statistical test for the fusibility of disparate data sets, designed to compare different methods of data comparison. This test is less sensitive than existing tests, which are most useful when comparing data sets that are substantially similar. The new test may thus provide more guidance in the development of new methods for fusing distinct survey types. Two examples are presented: a simple study of cell phone features administered as a test case for this research using both choice-based conjoint and van Westendorp methodologies, and a pair of existing larger-scale studies of automotive features with some attributes common to both studies. These examples serve to illustrate the two proposed methods. The examples indicate both a need for continued testing and several potentially fruitful directions for further investigation.

2009;():433-444. doi:10.1115/DETC2009-87074.

The current research proposes an integrated framework for product design that incorporates simulation-based tools into the early design stage to achieve optimum multi-scale systems. The appropriate mesostructure-property relations for the internal material structures of the system are determined through a topology optimization technique and a multi-scale design process. Specifically, Reliability-based Topology Optimization (RBTO) and a simulation-based multi-attribute design method are integrated into an Inductive Design Exploration Method (IDEM). The RBTO method is used to determine optimal topologies at the meso-scale. The simulation-based multi-attribute design method supports the decision process for the macro-scale system. The IDEM offers the capability for concurrent design on multiple scales, providing an approach for integrating the other two methods. An example of the developed multi-scale design framework is presented in terms of a hydrogen storage tank used in hydrogen fuel cell automotive applications. The multi-scale tank design features a high-strength mesostructured wall, resulting in a large weight reduction.

2009;():445-453. doi:10.1115/DETC2009-87101.

A phase transition is a geometric and topological transformation process of materials from one phase to another, each of which has a unique and homogeneous physical property. Providing an initial guess of transition path for further physical simulation studies is highly desirable in materials design. In this paper, we present a metamorphosis scheme for periodic surface (PS) models by interpolation in the PS parameter space. The proposed approach creates multiple potential transition paths for further selection based on three smoothness criteria. The goal is to search for a smooth transformation in phase transition analysis.
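
One simple instance of interpolation between two phases can be written as a linear blend of their implicit (level-set) models. The sketch below blends the Schwarz P surface into a gyroid as a stand-in for a periodic-surface parameter-space path; the paper additionally generates multiple candidate paths and ranks them by smoothness criteria, which this sketch does not do.

```python
import numpy as np

# One candidate transition path between two periodic phases: linearly blend
# the implicit models of Schwarz P and the gyroid, and sample intermediate
# structures F_t(x) = 0 along the path. (Path selection by smoothness
# criteria, as in the paper, is not shown here.)
def schwarz_p(x, y, z):
    return np.cos(x) + np.cos(y) + np.cos(z)

def gyroid(x, y, z):
    return (np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x))

n = 32
grid = np.linspace(0.0, 2.0 * np.pi, n)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")

for t in np.linspace(0.0, 1.0, 5):
    F = (1.0 - t) * schwarz_p(X, Y, Z) + t * gyroid(X, Y, Z)
    # Fraction of the unit cell on the F <= 0 side of the level set: a crude
    # descriptor of how the structure evolves along the transition path.
    solid_fraction = np.mean(F <= 0.0)
    print(f"t = {t:.2f}: solid volume fraction ~ {solid_fraction:.3f}")
```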

2009;():455-466. doi:10.1115/DETC2009-87217.

In this paper, we introduce a design exploration method for adaptive design systems (DEM-ADS), which is proposed to manage uncertainty in the system design process. The proposed method includes a local regression model and an inverse design procedure derived from the existing Inductive Design Exploration Method (IDEM). To demonstrate the proposed method, the design of a photonic crystal coupler and waveguide containing two subsystem analyses is presented. The results indicate that the proposed method attains solutions that are robust to uncertainty in the system more efficiently than IDEM.

Topics: Design
2009;():467-478. doi:10.1115/DETC2009-87276.

In this paper, we introduce the construct of microstructure-mediated design of material and product, in which the microstructure of the material is controlled within feasible bounds to achieve the performance targets of the product. We illustrate the efficacy of this construct via the integrated robust design of a submersible and Al-based matrix composites. The integrated design is carried out using the Inductive Design Exploration Method (IDEM), which facilitates robust design in the presence of model structural uncertainty (MSU). MSU, originating from assumptions and idealizations in modeling processes, is a form of uncertainty that is often virtually impossible to quantify. We achieve robustness by trading off the degree of system performance against the degree of reliability based on the structural uncertainty associated with the system models (i.e., models for performances and constraints). IDEM is demonstrated in the design of the shell of a robotic submersible. The material considered is an in-situ Al metal matrix composite (MMC), due to the advantages that in-situ MMCs have over conventional MMCs. This design task is a representative example of integrated materials and product design problems.

2009;():479-488. doi:10.1115/DETC2009-87465.

A multiscale design methodology is proposed in this paper to facilitate the design of hierarchical material and product systems with consideration of random field uncertainty that propagates across multiple length scales. Based on the generalized hierarchical multiscale decomposition pattern in multiscale modeling, a set of computational techniques is developed to manage the complexity of multiscale design under uncertainty. Novel design of experiments and metamodeling strategies are proposed to manage the complexity of propagating random field uncertainty through three generalized levels of transformation: the material microstructure random field, the material property random field, and the probabilistic product performance. Multilevel optimization techniques are employed to find optimal design solutions at individual scales. A hierarchical multiscale design problem that involves a two-scale (submicro- and micro-scale) material design and a macro-scale product (bracket) design is used to demonstrate the applicability and benefits of the proposed methodology.

2009;():489-496. doi:10.1115/DETC2009-86129.

This paper presents a GPU-based parallel Population Based Incremental Learning (PBIL) algorithm with a local search for bound-constrained optimization problems. In PBIL, which was derived from Genetic Algorithms, the genotype of an entire population is evolved. The Graphics Processing Unit (GPU) is an emerging technology for desktop parallel computing. In this research, the classical PBIL is adapted to the data-parallel GPU computing platform, and its global search is enhanced by a local Pattern Search method. The hybrid PBIL method is implemented in the GPU environment and compared to a similar implementation in a conventional computing environment with a Central Processing Unit (CPU). Computational results indicate that the GPU-accelerated PBIL method is effective and faster than the corresponding CPU implementation.
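
For readers unfamiliar with PBIL, the classical CPU-side loop is shown below on the OneMax bit-string benchmark; the probability-vector update is the part the paper parallelizes on the GPU and augments with a pattern-search local refinement. The benchmark, population size, and learning rate are arbitrary choices for this sketch.

```python
import numpy as np

# Classical PBIL on the OneMax bit-string benchmark: a probability vector is
# sampled to create a population and is then nudged toward the best sample.
rng = np.random.default_rng(0)

n_bits, pop_size, lr, n_gens = 40, 50, 0.1, 200
prob = np.full(n_bits, 0.5)            # probability of each bit being 1

def fitness(pop):
    return pop.sum(axis=1)             # OneMax: count the ones

for gen in range(n_gens):
    population = (rng.random((pop_size, n_bits)) < prob).astype(int)
    best = population[np.argmax(fitness(population))]
    prob = (1.0 - lr) * prob + lr * best      # shift toward the best genotype
    prob = np.clip(prob, 0.02, 0.98)          # keep a little exploration

best_sample = (rng.random(n_bits) < prob).astype(int)
print("final probability vector (rounded):", np.round(prob, 2))
print("fitness of a sample from it:", int(best_sample.sum()), "/", n_bits)
```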

Topics: Optimization
2009;():497-505. doi:10.1115/DETC2009-86973.

A distributed variant of multi-objective particle swarm optimization (MOPSO) called multi-objective parallel asynchronous particle swarm optimization (MOPAPSO) is presented, and the effects of distributing objective function calculations to slave processors on the results and performance are investigated. Two benchmark examples were used to verify the capability of this implementation of MOPAPSO to match previously published results of MOPSO. The computationally intensive task of multi-objective Optimization Based Mechanism Synthesis (OBMS) was used to verify that significant performance improvements were realized through parallelization. The results show that MOPAPSO is able to match the results of MOPSO in significantly less time. The fact that MOPAPSO is distributed makes it an effective optimization tool for complex multi-objective design problems.

2009;():507-516. doi:10.1115/DETC2009-87237.

Particle swarm methodologies are presented for the solution of constrained mechanical and structural system optimization problems involving single or multiple objective functions with continuous or mixed design variables. The particle swarm optimization presented is a modified approach, with better computational efficiency and solution accuracy, based on the use of a dynamic maximum velocity function and a bounce method. The constraints of the optimization problem are handled using a dynamic penalty function approach. To handle the discrete design variables, the closest-discrete approach is used. Multiple objective functions are handled using a modified cooperative game theory approach. The applicability and computational efficiency of the proposed particle swarm optimization approach are demonstrated through illustrative examples involving single and multiple objectives as well as continuous and mixed design variables. The present methodology is expected to be useful for the solution of a variety of practical engineering design optimization problems.
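
A plain particle swarm with a simple static penalty is sketched below on a small constrained test problem, to show where the paper's modifications (dynamic maximum velocity, bounce method, dynamic penalty, closest-discrete handling, cooperative game theory for multiple objectives) would plug in. The test problem, penalty weight, and swarm parameters are assumptions of this sketch.

```python
import numpy as np

# Plain particle swarm optimization with a static penalty for a constrained
# test problem: minimize (x-2)^2 + (y-1)^2 subject to x + y <= 2 on [-5, 5]^2.
# The constrained optimum lies near (1.5, 0.5).
rng = np.random.default_rng(2)

def objective(p):
    return (p[0] - 2.0) ** 2 + (p[1] - 1.0) ** 2

def penalized(p):
    violation = max(0.0, p[0] + p[1] - 2.0)
    return objective(p) + 1e3 * violation ** 2

n_particles, n_iters, dim = 30, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5
lo, hi = -5.0, 5.0

x = rng.uniform(lo, hi, (n_particles, dim))
v = np.zeros((n_particles, dim))
pbest = x.copy()
pbest_val = np.array([penalized(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)                 # simple bound clipping
    vals = np.array([penalized(p) for p in x])
    improved = vals < pbest_val
    pbest[improved] = x[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best point:", np.round(gbest, 3), "objective:", round(objective(gbest), 4))
```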

Commentary by Dr. Valentin Fuster
2009;():517-527. doi:10.1115/DETC2009-87278.

Economic and physical considerations often lead to equilibrium problems in multidisciplinary design optimization (MDO), which can be captured by MDO problems with complementarity constraints (MDO-CC) — a newly emerging class of problem. Due to the ill-posedness associated with the complementarity constraints, many existing MDO methods may have numerical difficulties solving the MDO-CC. In this paper, we propose a new decomposition algorithm for MDO-CC based on the regularization technique and inexact penalty decomposition. The algorithm is presented such that existing proofs can be extended, under certain assumptions, to show that it converges to stationary points of the original problem and that it converges locally at a superlinear rate. Numerical computation with an engineering design example and several analytical example problems shows promising results with convergence to the all-in-one (AIO) solution.

Commentary by Dr. Valentin Fuster
2009;():529-542. doi:10.1115/DETC2009-87410.

This paper proposes a hierarchical optimization-based approach for two-dimensional rectangular layout design problems. While decomposition-based optimization has been a key approach for complicated design problems under the trend of multidisciplinary design optimization, it has focused on continuous problems. While various approaches for layout design have been developed, they typically rely on evolutionary algorithms to handle the combinatorial nature of layout. This paper aims to bring a new paradigm by combining decomposition-based optimization and evolutionary algorithms toward solving complicated layout design problems. In the approach, the Pareto optimality of subsystem-level layouts against the optimality of the system-level layout is extracted through a two-level hierarchical formulation. A computational design algorithm is then developed: it represents the layout topology with a sequence-pair and the shape of each subsystem or component with an aspect ratio, and optimizes them with genetic algorithms. The Pareto optimality of the sub-levels is handled with multi-objective genetic algorithms, in which a set of Pareto-optimal solutions is generated simultaneously. Top-level and sub-level layout problems are coordinated through the exchange of preferable ranges of shapes and layouts. The implemented approach is applied to an example problem to demonstrate its performance and capability.

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2009;():543-557. doi:10.1115/DETC2009-86268.

Solid freeform fabrication (SFF) processes based on mask image projection have the potential to be fast and inexpensive. More and more research and commercial systems have been developed based on these processes. For the SFF processes, the mask image planning is an important process planning step. In this paper, we present an optimization based method for mask image planning. It is based on a light intensity blending technique called pixel blending. By intelligently controlling pixels’ gray scale values, the SFF processes can achieve a much higher XY resolution and accordingly better part quality. We mathematically define the pixel blending problem and discuss its properties. Based on the formulation, we present several optimization models for solving the problem including a mixed integer programming model, a linear programming model, and a two-stage optimization model. Both simulated and physical experiments for various CAD models are presented to demonstrate the effectiveness and efficiency of our method.

Topics: Manufacturing , Masks
Commentary by Dr. Valentin Fuster
2009;():559-570. doi:10.1115/DETC2009-86752.

Companies applying the mass customization paradigm regard the design process as a configuration task in which the solution is achieved by extracting a new instance from a modular product structure. In this context, product configuration management tools are ever more important. Although such tools have already been proposed, they often fail in real industrial contexts, mainly because of the high effort required for system implementation and the lack of flexibility in product updating. This research aims to develop an approach that overcomes these drawbacks and simplifies the implementation and use of product configuration systems, including in redesign activities. The paper initially reviews existing systems in terms of design knowledge representation methods and product structure formalization techniques. Then, an approach based on Configuration Virtual Prototypes, which store and manage different levels of knowledge, is presented. In particular, a framework is outlined to represent design data and its formalization in configuration tools. Three different domains are managed and connected via Configuration Virtual Prototypes: Product Specifications, Geometrical Data and Product Knowledge. Geometrical data aspects are analyzed in detail, providing approaches for eliciting the knowledge introduced by parametric template CAD models. The approach is exemplified through a real application example in which an original tool has been developed on the basis of the described method. Benefits of the system are shown and briefly discussed, particularly in terms of the achievable flexibility of solutions.

Commentary by Dr. Valentin Fuster
2009;():571-578. doi:10.1115/DETC2009-86904.

Configurators have been generally accepted as important tools for interacting with customers and eliciting their requirements in the form of tangible product specifications. These interactions, commonly called the product configuring process, aim to find the best match between customers' requirements and the company's offerings. Therefore, an efficient configurator should take both product structure and customers' preferences into consideration. In this paper, we present a novel iterative method of attribute selection for the product configuring procedure. The algorithm is based on the Shapley value, a concept used in game theory to estimate the usefulness of certain entities. It iteratively selects the most relevant attribute from the pool of remaining attributes and proposes it for customers to configure. It thus obtains customers' specifications in an adaptive manner, in the sense that different customers may follow different query sequences. Information content is used as the measure of usefulness. As a result, the greatest amount of uncertainty can be eliminated, and the product development team gains a better understanding of what customers want within a fixed time horizon. A maximum a posteriori criterion is also exploited to give product recommendations based on the partially configured product. The customized one-to-one configuring procedure is thus presented, and the recommendation can converge to a customer's target with fewer interactions between customers and designers. A PC configurator case is used to exemplify and test the viability of the presented method.
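
As an illustration of the game-theoretic ingredient, a brute-force Shapley value computation over a small attribute pool is sketched below; the value function (here a made-up table standing in for the paper's information-content measure) and the attribute names are hypothetical.

    from itertools import permutations

    def shapley_values(attributes, value):
        """Exact Shapley value by averaging marginal contributions over all orderings.
        `value(subset)` returns the usefulness (e.g. information gain) of an attribute subset."""
        phi = {a: 0.0 for a in attributes}
        orders = list(permutations(attributes))
        for order in orders:
            chosen = frozenset()
            for a in order:
                phi[a] += value(chosen | {a}) - value(chosen)
                chosen = chosen | {a}
        return {a: phi[a] / len(orders) for a in attributes}

    # illustrative usage with a made-up value table over three configurable attributes
    attrs = ["CPU", "RAM", "screen"]
    v = {frozenset(): 0.0, frozenset({"CPU"}): 0.9, frozenset({"RAM"}): 0.6,
         frozenset({"screen"}): 0.3, frozenset({"CPU", "RAM"}): 1.2,
         frozenset({"CPU", "screen"}): 1.0, frozenset({"RAM", "screen"}): 0.8,
         frozenset({"CPU", "RAM", "screen"}): 1.3}
    print(shapley_values(attrs, lambda s: v[frozenset(s)]))
    # the attribute with the largest Shapley value would be proposed to the customer next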

Commentary by Dr. Valentin Fuster
2009;():579-586. doi:10.1115/DETC2009-87335.

Overlap or intersection in the toolpath is a problem in metal deposition because material is continuously added to the previous layer. Various control schemes can achieve better quality; however, such schemes fail to deliver satisfactory results when different intersection angles appear in the deposition toolpath. To overcome the problem caused by the intersection angle, its effect on the deposition must be well understood. This paper studies the effect of the toolpath intersection angle using the design of experiments method, and the impact of various parameters in the deposition process is examined. The approach and method can be integrated with path planning for a metal deposition process.

Topics: Metals , Intersections
Commentary by Dr. Valentin Fuster
2009;():587-596. doi:10.1115/DETC2009-87370.

In regular 3-axis layered manufacturing processes, the build direction is fixed throughout the process. In a multi-axis laser deposition process (more than 3-axis motion), the orientation of the part affects the support-free buildability of the multi-axis hybrid manufacturing process. However, the orientation that satisfies buildability and other constraints may not be unique; in this case, the final optimal orientation is determined based on build time. A build time computation algorithm for the multi-axis hybrid system is presented in this paper. To speed up the exhaustive search for the optimal orientation, a multi-stage algorithm is developed to reduce the search space.

Topics: Manufacturing
Commentary by Dr. Valentin Fuster
2009;():597-604. doi:10.1115/DETC2009-86283.

The geometric variations in a tolerance-zone can be modeled with hypothetical point-spaces called Tolerance-Maps (T-Maps) for purposes of automating the assignment of tolerances during design. The objective of this paper is to extend this model to represent tolerances on circular runout, which limit geometric manufacturing variations to a specified tolerance-zone. Such a zone is an annular area at one transverse cross-section for spherical, conical, or cylindrical objects (features), but it is a short cylinder when the feature is a round or annular segment of a plane. Depending on the kind of feature and the tolerances that are specified for it, the model may be used to represent variations within tolerance-zones for circular runout, size, position, orientation, and form. In this paper, the Tolerance-Map (T-Map) is a hypothetical volume of points that captures all the circular variations that can arise from these tolerances. The model is compatible with the ASME/ANSI/ISO Standards for geometric tolerances. T-Maps have been generated for other classes of geometric tolerances in which the variations of the feature are represented with a plane or line, and these have been incorporated into testbed software for aiding designers in assigning tolerances for assemblies. In this paper, the T-Map for circular runout is created and, for the first time, circles are used to represent the geometric variations of a feature in tolerance-zones.

Commentary by Dr. Valentin Fuster
2009;():605-614. doi:10.1115/DETC2009-86322.

Integration of finite element analysis (FEA) into design is important for complex product development. However, how to automatically and robustly generate the analysis model from the design model, and how to organize the information needed by CAD and FEA for efficient integration, remain significant problems. In this study, an analysis feature model (AFM) is proposed and developed as a central model for design and analysis. The AFM contains five kinds of information: analysis-related information, geometry-related information, element-related information, boundary conditions, and coupling information between features. A systematic approach is elaborated to automatically generate the AFM from the design model; it mainly consists of four steps: automated recognition of analysis features, decomposition of the design model, reduction, and combination into the AFM. The analysis model can then be easily obtained from the AFM. Finally, the proposed method is implemented and several examples are given.

Commentary by Dr. Valentin Fuster
2009;():615-628. doi:10.1115/DETC2009-86739.

Current CAD systems provide utilities to position and orient parts with respect to each other in assemblies through associative relations that include geometric and parametric constraints. However, assembly features are not explicitly defined, nor available in CAD databases for exploitation in downstream applications. This paper examines the attributes of assembly features and proposes a template for defining them in a uniform way. The template can be used in conjunction with an EXPRESS-like language (N-Rep) to define assembly features that are implementation independent. The template includes slots for part features, mating relations, and geometric, parametric, kinematic and structural relations. The current application being explored for assembly features is the reverse engineering of legacy parts. Assembly features serve as "knowledge containers" in that they allow one to encode form, fit and function in a uniform way so that replacement parts can be redesigned to meet key requirements.

Commentary by Dr. Valentin Fuster
2009;():629-635. doi:10.1115/DETC2009-86807.

A virtual plant, built in a computer using computer graphics (CG) and virtual reality (VR), can model the complete and precise structure of an integrated manufacturing system and simulate its physical and logical behavior in operation. This paper aims to reveal the advanced modeling and VR realization methods used in developing a virtual forging plant for automatic, programmed open-die forging processes. Two sub-models, a component model and a process model, compose the overall modeling architecture of a virtual integrated open-die forging plant. The coordinated motion simulation of the integrated system is realized through a kinematic modeling method. A compound stiffness modeling method is then developed to simulate the mechanical behavior in operation. The process simulation of the virtual plant is conducted on the basis of these two modeling methods. A practical application example of a virtual plant for an integrated open-die forging process is presented toward the end.

Commentary by Dr. Valentin Fuster
2009;():637-649. doi:10.1115/DETC2009-87607.

The task of planning a path between two spatial configurations of an artifact moving among obstacles is an important problem in many geometrically-intensive applications. Despite the ubiquity of the problem, existing approaches make specific limiting assumptions about the geometry and mobility of the obstacles, or those of the environment in which the motion of the artifact takes place. In this paper we propose a powerful approach for 2D path planning in a dynamic environment that can undergo drastic topological changes. Our algorithm is based on a potent paradigm for medial axis computation that relies on constructive representations of shapes with R-functions that operate on real-valued half-spaces as logic operations. Our approach can handle problems in which the environment is not fully known a priori, intrinsically supports local and parallel skeleton computations for domains with rigid or evolving boundaries, and appears to extend naturally to 3D domains. Furthermore, our path planning algorithm can be implemented in any commercial geometric kernel and has attractive computational properties. The capabilities of the proposed technique are explored through several examples designed to resemble highly dynamic environments.
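
A small sketch of the R-function machinery the algorithm builds on, using the commonly cited R0-system conjunction and disjunction; the particular R-functions, domains and the medial axis computation of the paper are not reproduced.

    import numpy as np

    # R0-system R-functions: combine implicit half-space functions (f >= 0 inside)
    # so that the sign of the result follows Boolean intersection / union.
    def r_and(f1, f2):   # intersection
        return f1 + f2 - np.sqrt(f1**2 + f2**2)

    def r_or(f1, f2):    # union
        return f1 + f2 + np.sqrt(f1**2 + f2**2)

    # example: an L-shaped 2D domain as (x-slab AND y-slab) minus one corner quadrant
    x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
    slab_x = 0.8**2 - x**2                 # |x| <= 0.8
    slab_y = 0.8**2 - y**2                 # |y| <= 0.8
    corner = r_and(x, y)                   # quadrant x >= 0, y >= 0
    domain = r_and(r_and(slab_x, slab_y), -corner)   # subtraction via the complement (negation)
    inside = domain > 0                    # boolean mask of points inside the environment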

Topics: Path planning
Commentary by Dr. Valentin Fuster
2009;():651-660. doi:10.1115/DETC2009-86827.

Planning for Computerized Numerical Control (CNC) fabrication requires generation of process plans for the fabrication of parts that can be executed on CNC-enabled machine tools. To create such plans, a large amount of domain-specific knowledge is required to map the desired geometry of a part to a manufacturing process, thus decomposing design information into a set of feasible machining operations. Approaches to automating this planning process still rely heavily on human capabilities, such as planning and reasoning about geometry in relation to machining capabilities. In this paper, the authors present a new, shape grammar-based approach for automatically creating fabrication plans for CNC machining from a given part geometry. To avoid the use of static feature sets and their pre-defined mappings to machining operations, the method encodes knowledge of fundamental machine capabilities. A method for generating a vocabulary of removal volume shapes based on the available tool set and machine tool motions is defined, in combination with a basic rule set for shape removal covering tool motion, removal volume calculation and CNC code generation. The use of shape grammars as a formalism enables the systematic formulation of hard and soft constraints on spatial relations between the volume to be removed and the removal volume shape for a machining operation. The method is validated using an example of machining a simple part on a milling machine. Overall, the approach and method presented are an enabler for the creation of autonomous fabrication systems and CNC machine tools that are able to reason about part geometry in relation to available capabilities and carry out on-line planning for CNC fabrication.

Commentary by Dr. Valentin Fuster
2009;():661-670. doi:10.1115/DETC2009-87368.

The design language allows the construction of a variety of airplane designs. The syntax of the design language relies on the standardized Unified Modeling Language (UML) and consists of an object-oriented vocabulary (i.e., points, lines, profiles, wings, etc.) comparable to building blocks, and design rules (i.e., building laws) which represent the building knowledge used. In the terminology of graph-based design languages, the building blocks are the information objects which represent the static aspects of the design because they represent indivisible design entities. They are represented as UML classes and instances, and their interrelation forms an object-oriented class hierarchy. The design rules represent the dynamic aspects of the design and express the building knowledge as stepwise activities. Finally, a production system (i.e., a specific rule set) is able to create an airplane geometry and generate design variants through manual modifications of the production system.

Topics: Design , Modeling , Aircraft
Commentary by Dr. Valentin Fuster
2009;():671-681. doi:10.1115/DETC2009-87402.

Hand-drawn sketches are powerful cognitive devices for the efficient exploration, visualization and communication of emerging ideas in engineering design. It is desirable that CAD/CAE tools be able to recognize back-of-the-envelope sketches and extract the intended engineering models, yet this is a nontrivial task for freehand sketches. Here we present a novel, neural network-based approach designed for the recognition of network-like sketches. Our approach leverages a trainable detector/recognizer and an autonomous procedure for the generation of training samples. Prior to deployment, a Convolutional Neural Network is trained on a few labeled prototypical sketches and learns the definitions of the visual objects. When deployed, the trained network scans the input sketch at different resolutions with a fixed-size sliding window, detects instances of the defined symbols and outputs an engineering model. We demonstrate the effectiveness of the proposed approach in different engineering domains with different types of sketching inputs.
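
A generic multi-resolution sliding-window scan is sketched below, assuming a hypothetical recognize_window function standing in for the trained Convolutional Neural Network; the window size, scales, stride and confidence threshold are illustrative.

    import numpy as np

    def multiscale_scan(image, recognize_window, win=32, scales=(1.0, 0.75, 0.5),
                        stride=8, thresh=0.9):
        """Slide a fixed-size window over the sketch at several resolutions and collect
        symbol detections; recognize_window(patch) is assumed to return (label, confidence)
        for a win x win grayscale patch."""
        detections = []
        for s in scales:
            h, w = int(image.shape[0] * s), int(image.shape[1] * s)
            # nearest-neighbour resampling keeps the sketch strokes crisp
            rows = (np.arange(h) / s).astype(int)
            cols = (np.arange(w) / s).astype(int)
            scaled = image[rows][:, cols]
            for i in range(0, h - win + 1, stride):
                for j in range(0, w - win + 1, stride):
                    label, conf = recognize_window(scaled[i:i + win, j:j + win])
                    if conf >= thresh:
                        # map the window corner back to the original sketch coordinates
                        detections.append((label, int(i / s), int(j / s), conf))
        return detections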

Commentary by Dr. Valentin Fuster
2009;():683-691. doi:10.1115/DETC2009-87477.

In current product design, significant effort is put into creating aesthetically pleasing product forms. Often times, the final shape evolves in time based on designers’ ideas externalized through early design activities primarily involving conceptual sketches. While designers negotiate and convey a multitude of different ideas through such informal activities, current computational tools are not well suited to work from such forms of information to leverage downstream design processes. As a result, many promising ideas either remain under-explored, or require restrictive added effort to be transformed into digital media. As one step toward alleviating this difficulty, we propose a new computational method for capturing and reusing knowledge regarding the shape of a developing design from designers’ hand-drawn conceptual sketches. At the heart of our approach is a geometric learning method that involves constructing a continuous space of meaningful shapes via a deformation analysis of the constituent exemplars. The computed design space serves as a medium for encoding designers’ shape preferences expressed through their sketches. With the proposed approach, designers can record desirable shape ideas in the form of raw sketches, while utilizing the accumulated information to create and explore novel shapes in the future. A key advantage of the proposed system is that it enables prescribed engineering and ergonomic criteria to be concurrently considered with form design, thus allowing such information to suitably guide conceptual design processes in a timely manner.

Topics: Design , Optimization , Shapes
Commentary by Dr. Valentin Fuster
2009;():693-698. doi:10.1115/DETC2009-87829.

This paper describes a method for using text description as a natural interface to construct models of mechanical systems. The goal is to convert the natural text description from the user to a system of equations that can be used to model and simulate the system. The algorithm of this process consists of three main stages: (1) component extraction, (2) interaction detection, and (3) attribute detection. The description is first parsed to identify and instantiate components. The text is then scanned again to analyze the action verbs to identify component attributes and to detect connections between the components. Finally, numerical values and initial conditions are associated with the attributes. System equations are generated by detecting loops in the device description. The knowledge is gradually built up through progressive scanning and analysis of text. The paper describes the algorithms, presents a detailed example and identifies current assumptions and restrictions.

Commentary by Dr. Valentin Fuster
2009;():699-709. doi:10.1115/DETC2009-86270.

Although many metamodeling methods have been developed in the past decades to model the relationships between input and output parameters, selecting an appropriate or optimal metamodel for a given engineering problem is not a trivial task, because the performance measures of different metamodels are strongly influenced by the characteristics of the sample data. This research therefore studies the relationships between sample data characteristics and metamodel performance measures for different types of metamodeling methods. Sample quality merits are introduced to quantitatively model the characteristics of the sample data. Four types of metamodeling methods (multivariate polynomial, radial basis function, kriging, and Bayesian neural network models), three sample quality merits (sample size, uniformity, and noise), and four performance evaluation measures (accuracy, confidence, robustness, and efficiency) are considered in studying these relationships.

Commentary by Dr. Valentin Fuster
2009;():711-726. doi:10.1115/DETC2009-86407.

This paper presents a Response Surface Modeling (RSM) approach for solving the engine mount optimization problem for a motorcycle application. A theoretical model that captures the structural dynamics of a motorcycle engine mount system is first used to build the response surface model. The response surface model is then used to solve the engine mount optimization problem for enhanced vibration isolation. Design of Experiments (DOE) techniques, in both full factorial and fractional factorial formulations, are used to construct the governing experiments. Normal probability plots are used to determine the statistical significance of the variables, and the significant variables are then used to build the response surface. The design variables for the engine mount optimization problem include mount stiffness, position vectors and orientation vectors. It is seen that RSM leads to a substantial reduction in computational effort and yields a simplified input-output relationship between the variables of interest. However, as the number of design variables increases and as the response becomes irregular, conventional use of RSM is not viable. Two algorithms are proposed in this paper to overcome the issues associated with the size of the governing experiments and problems associated with modeling of the orientation variables. The proposed algorithms divide the design space into sub-regions in order to manage the size of the governing experiments without significant confounding of variables. An iterative procedure is used to overcome high response irregularity in the design space, particularly due to the orientation variables.
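
As a generic illustration of the DOE-plus-RSM workflow (not the paper's engine-mount model), a full factorial design in coded variables and a least-squares quadratic response surface can be sketched as follows; the toy simulation function is a placeholder.

    import numpy as np
    from itertools import product

    def full_factorial(levels_per_var):
        """Full factorial design: every combination of the listed levels, in coded units."""
        return np.array(list(product(*levels_per_var)), dtype=float)

    def fit_quadratic_rsm(X, y):
        """Least-squares fit of y ~ 1 + x_i + x_i*x_j + x_i^2 in coded variables."""
        n, d = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(d)]
        cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
        cols += [X[:, i] ** 2 for i in range(d)]
        A = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return beta

    # illustrative usage with a made-up "simulation" of transmitted force
    sim = lambda x: 3.0 + 2.0 * x[0] - 1.5 * x[1] + 0.8 * x[0] * x[1] + 0.5 * x[1] ** 2
    X = full_factorial([(-1, 0, 1)] * 2)      # 3-level full factorial in 2 coded variables
    y = np.array([sim(x) for x in X])
    beta = fit_quadratic_rsm(X, y)            # surrogate coefficients used in place of the simulation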

Commentary by Dr. Valentin Fuster
2009;():727-740. doi:10.1115/DETC2009-86531.

Modeling or approximating high dimensional, computationally-expensive, black-box problems faces an exponentially increasing difficulty, the “curse-of-dimensionality”. This paper proposes a new form of high-dimensional model representation (HDMR) by integrating the radial basis function (RBF). The developed model, called RBF-HDMR, naturally explores and exploits the linearity/nonlinearity and correlation relationships among variables of the underlying function that is unknown or computationally expensive. This work also derives a lemma that supports the divide-and-conquer and adaptive modeling strategy of RBF-HDMR. RBF-HDMR circumvents or alleviates the “curse-of-dimensionality” by means of its explicit hierarchical structure, adaptive modeling strategy tailored to inherent variable relation, sample reuse, and a divide-and-conquer space-filling sampling algorithm. Multiple mathematical examples of a wide scope of dimensionalities are given to illustrate the modeling principle, procedure, efficiency, and accuracy of RBF-HDMR.
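
A minimal sketch of a first-order cut-HDMR with RBF component functions, assuming a fixed cut point and a Gaussian kernel; the adaptive higher-order terms, sample reuse and divide-and-conquer sampling that characterize RBF-HDMR are omitted.

    import numpy as np

    def rbf_1d(xs, ys, eps=1.0):
        """Gaussian RBF interpolant through 1-D samples (xs, ys)."""
        K = np.exp(-eps * (xs[:, None] - xs[None, :]) ** 2)
        w = np.linalg.solve(K + 1e-10 * np.eye(len(xs)), ys)
        return lambda x: np.exp(-eps * (x - xs) ** 2) @ w

    def first_order_hdmr(f, cut, lb, ub, n_samples=7):
        """Build f(x) ~ f0 + sum_i fi(xi): each fi is sampled along the cut point and RBF-fit."""
        d = len(cut)
        f0 = f(np.asarray(cut, dtype=float))
        comps = []
        for i in range(d):
            xs = np.linspace(lb[i], ub[i], n_samples)
            ys = []
            for xi in xs:
                x = np.array(cut, dtype=float)
                x[i] = xi
                ys.append(f(x) - f0)             # univariate component sampled on the cut line
            comps.append(rbf_1d(xs, np.array(ys)))
        return lambda x: f0 + sum(comps[i](x[i]) for i in range(d))

    # toy usage on an additive function (exactly representable at first order)
    f = lambda x: np.sin(x[0]) + x[1] ** 2
    approx = first_order_hdmr(f, cut=[0.0, 0.0], lb=[-2, -2], ub=[2, 2])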

Commentary by Dr. Valentin Fuster
2009;():741-750. doi:10.1115/DETC2009-87053.

The use of surrogates for facilitating optimization and statistical analysis of computationally expensive simulations has become commonplace. Usually, surrogate models are fit to be unbiased (i.e., the error expectation is zero). However, in certain applications it is desirable to estimate the response safely (e.g., in structural analysis, the maximum stress must not be underestimated in order to avoid failure). In this work we use safety margins to conservatively compensate for fitting errors associated with surrogates. We propose the use of cross-validation for estimating the required safety margin for a given desired level of conservativeness (percentage of safe predictions). We also check how well we can minimize the loss in accuracy associated with the conservative predictor by selecting among alternative surrogates. The approach was tested on two algebraic examples for ten basic surrogates, including different instances of kriging, polynomial response surface, radial basis neural network and support vector regression surrogates. For these examples we found that cross-validation (i) is effective for selecting the safety margin; and (ii) allows us to select a surrogate with the best compromise between conservativeness and loss of accuracy. We then applied the approach to the probabilistic design optimization of a cryogenic tank. This design under uncertainty example showed that the approach can be successfully used in real-world applications.
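
One plausible reading of the cross-validation-based margin selection is sketched below: the margin is taken as the percentile of cross-validation errors matching the desired conservativeness. The k-fold splitting and the simple polynomial surrogate used here are illustrative assumptions, not the surrogates studied in the paper.

    import numpy as np

    def cv_safety_margin(X, y, fit, predict, target_conservativeness=0.95, k=10, seed=0):
        """Estimate the additive safety margin s such that roughly the desired fraction of
        cross-validation predictions satisfies  yhat + s >= y  (no underestimation)."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        folds = np.array_split(idx, k)
        errors = []                              # e = y_true - y_pred; conservative if s >= e
        for fold in folds:
            train = np.setdiff1d(idx, fold)
            model = fit(X[train], y[train])
            errors.extend(y[fold] - predict(model, X[fold]))
        return np.percentile(errors, 100 * target_conservativeness)

    # toy usage with a 1-D cubic polynomial surrogate
    X = np.linspace(0, 1, 40)[:, None]
    y = np.sin(3 * X[:, 0]) + 0.05 * np.random.default_rng(1).standard_normal(40)
    fit = lambda Xt, yt: np.polyfit(Xt[:, 0], yt, 3)
    predict = lambda c, Xq: np.polyval(c, Xq[:, 0])
    margin = cv_safety_margin(X, y, fit, predict)   # added to predictions before use in design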

Topics: Safety , Design
Commentary by Dr. Valentin Fuster
2009;():751-765. doi:10.1115/DETC2009-87121.

Metamodeling techniques are increasingly used in solving computation intensive design optimization problems today. In this work, the issue of automatic identification of appropriate metamodeling techniques in global optimization is addressed. A generic, new hybrid metamodel based global optimization method, particularly suitable for design problems involving computation intensive, black-box analyses and simulations, is introduced. The method employs three representative metamodels concurrently in the search process and selects sample data points adaptively according to the values calculated using the three metamodels to improve the accuracy of modeling. The global optimum is identified when the metamodels become reasonably accurate. The new method is tested using various benchmark global optimization problems and applied to a real industrial design optimization problem involving vehicle crash simulation, to demonstrate the superior performance of the new algorithm over existing search methods. Present limitations of the proposed method are also discussed.

Topics: Optimization
Commentary by Dr. Valentin Fuster
2009;():767-775. doi:10.1115/DETC2009-87279.

Many meta-models have been developed to approximate true responses, and they are often used for optimization in place of computer simulations that have high computational cost. However, designers do not know in advance which meta-model is best, because the accuracy of each meta-model varies from problem to problem. To address this difficulty, research on ensembles of meta-models that combine stand-alone meta-models has recently been pursued with the expectation of improving prediction accuracy. In this study, we propose a method for selecting the weight factors of an ensemble of meta-models based on the v-nearest neighbors' cross-validation error (CV). The four stand-alone meta-models employed in this study are polynomial regression, kriging, radial basis function, and support vector regression. Each method is applied to five 1-D and ten 2-D mathematical examples. The prediction accuracy of each stand-alone meta-model and of an existing ensemble of meta-models is compared. The proposed ensemble shows higher accuracy than the worst of the four stand-alone meta-models in all 30 test examples, and the highest accuracy of all models in 5 of the test cases. Although it can have lower accuracy than the best stand-alone meta-model, it has almost the same RMSE values (ratio less than 1.1) as the best stand-alone model in 16 out of 30 test cases. From these results, we conclude that the proposed method is effective and robust.
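
To illustrate the ensemble idea (though not the paper's specific v-nearest-neighbor CV weighting), a generic inverse-CV-error weighting can be sketched as follows; the surrogate models and RMSE values are placeholders.

    import numpy as np

    def ensemble_weights(cv_rmse, power=2.0):
        """Weight each stand-alone meta-model inversely to its cross-validation RMSE."""
        inv = 1.0 / np.asarray(cv_rmse, dtype=float) ** power
        return inv / inv.sum()

    def ensemble_predict(models, weights, x):
        """Weighted-sum prediction of the ensemble at point x."""
        return sum(w * m(x) for w, m in zip(weights, models))

    # illustrative usage with three hypothetical fitted surrogates
    models = [lambda x: 1.0 + x, lambda x: 0.9 + 1.1 * x, lambda x: 1.2 + 0.8 * x]
    w = ensemble_weights([0.10, 0.25, 0.40])   # CV RMSEs of the stand-alone models
    print(w, ensemble_predict(models, w, 2.0))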

Topics: Errors
Commentary by Dr. Valentin Fuster
2009;():777-787. doi:10.1115/DETC2009-87488.

Similarity methods have been widely employed in engineering design and analysis to model and scale complex systems. The Empirical Similitude Method (ESM) is one such method based on the use of experimental data. Using a variant of the similitude process involving experimental data, we present in this paper the use of advanced numerical approximations, trigonometric functions in particular, to model and predict the performance of design artifacts. Specifically, an airfoil design is modeled, and the values of the drag coefficient are estimated based on the advanced ESM. Intermediate test specimens are used to correlate experimental data to produce the required prediction parameters. Mathematical development and error analysis are also elaborated by examining the continuity and adaptivity features of the numerical algorithms.

Topics: Functions
Commentary by Dr. Valentin Fuster
2009;():789-804. doi:10.1115/DETC2009-87616.

Metamodeling techniques are now being widely used by many industries to replace complex and expensive simulation models so that optimization and probabilistic design studies can be done in a more practical and affordable way. Due to the complexity of many engineering design systems and to the lack of a deep understanding of the metamodeling methods by engineers, many questions related to metamodeling accuracy, confidence, robustness, efficiency, etc. are now frequently asked. The need to establish comprehensive guidelines for engineers to correctly and efficiently apply metamodeling methods to their optimization and probabilistic design tasks is becoming more and more important. Based upon experiences and lessons learned at General Electric in recent years, this paper discusses important metamodeling mathematical details and addresses several common issues engineers are likely to encounter when applying metamodeling techniques to realistic engineering problems. The paper provides detailed guidelines on the best practices of metamodel creation and its application to the design process. Many results from benchmark examples and real applications are included to justify certain guidelines and rules of thumb.

Commentary by Dr. Valentin Fuster
2009;():805-813. doi:10.1115/DETC2009-87730.

A numerical study of the functional design of honeycomb meta-materials targeting flexible shear properties (about 6.5 MPa effective shear modulus and 15% maximum effective shear strain) is conducted with two material selections, polycarbonate (PC) and mild steel (MS), and five honeycomb configurations. Cell wall thicknesses that reach the target shear modulus are found for each material over the available cell heights. PC honeycomb structures can be tailored with 0.4 to 1.3 mm cell wall thicknesses to attain the 6.5 MPa shear modulus, whereas MS honeycombs must be built with wall thicknesses of 0.2 mm or less to reach the same target. The sensitivity of the effective properties to wall thickness may be a hurdle to overcome when designing metallic honeycombs, and this sensitivity appears to be more significant as the number of unit cells in the vertical direction increases. PC auxetic honeycombs having 0.4 to 1.9 mm cell wall thicknesses show 15% maximum effective shear strain without local cell damage. Auxetic honeycombs, having a negative Poisson's ratio, show lower effective shear moduli and higher maximum effective shear strains than their regular counterparts, implying that auxetic honeycombs are candidate geometries for a shear flexure design.

Commentary by Dr. Valentin Fuster
2009;():815-826. doi:10.1115/DETC2009-86259.

A new compliant parallel micromanipulator is proposed in this paper. The manipulator has three degrees of freedom (DOF) and can generate motions at a microscopic scale; it can be used in biomedical engineering and the fiber optics industry. In the paper, the detailed design of the structure is first introduced, followed by kinematic analysis and performance evaluation. Second, a finite-element analysis of the resultant stress, strain, and deformations is performed for different inputs of the three piezoelectric actuators. Finally, genetic algorithms and radial basis function networks are implemented to search for the optimal architecture and behavior parameters in terms of global stiffness, dexterity and manipulability.

Topics: Design
Commentary by Dr. Valentin Fuster
2009;():827-839. doi:10.1115/DETC2009-86282.

Sensitivity analysis has received significant attention in engineering design. While sensitivity analysis methods can be global, taking into account all variations, or local, taking into account small variations, they generally identify which uncertain parameters are most important and how large their effect on design performance might be. The extant methods do not, in general, address which ranges of parameter uncertainty are most important or how best to allocate investments to partial uncertainty reduction in parameters under a limited budget. More specifically, no previous approach has been reported that can handle single-disciplinary multi-output global sensitivity analysis for both a single design and multiple designs under interval uncertainty. Two new global uncertainty metrics, i.e., the radius of the output sensitivity region and multi-output entropy performance, are presented. With these metrics, a multi-objective optimization model is developed and solved to obtain fractional levels of parameter uncertainty reduction that provide the greatest payoff in system performance for the least amount of "investment". Two case studies of varying difficulty are presented to demonstrate the applicability of the proposed approach.

Commentary by Dr. Valentin Fuster
2009;():841-852. doi:10.1115/DETC2009-87127.

Uncertainty in the input parameters to an engineering system may not only degrade the system’s performance, but may also cause failure or infeasibility. This paper presents a new sensitivity analysis based approach called Design Improvement by Sensitivity Analysis (DISA). DISA analyzes the interval parameter uncertainty of a system and, using multi-objective optimization, determines an optimal combination of design improvements required to enhance performance and ensure feasibility. This is accomplished by providing a designer with options for both uncertainty reduction and, more importantly, slight design adjustments. The approach can provide improvements to a design of interest that will ensure a minimal amount of variation in the objective functions of the system while also ensuring the engineering feasibility of the system. A two stage sequential framework is used in order to effectively employ metamodeling techniques to approximate the analysis function of an engineering system and greatly increase the computational efficiency of the approach. This new approach has been applied to two engineering examples of varying difficulty to demonstrate its applicability and effectiveness.

Commentary by Dr. Valentin Fuster
2009;():853-862. doi:10.1115/DETC2009-87736.

Over the last few years, research activity in approximation (e.g., metamodels) and optimization (e.g., genetic algorithms) methods has improved upon current practices in the engineering design and optimization of complex systems with respect to multiple performance metrics, by reducing the number of evaluations of the system's model needed to obtain the set of non-dominated solutions to a given multi-objective optimal design problem. To this end, several authors have proposed enhancing Multi-Objective Genetic Algorithms (MOGAs) with metamodel-based pre-screening criteria (PSC), so that only those solutions that have the most potential to improve the current approximation of the Pareto front are evaluated with the (costly) system model. The main goals of this work are to compare the performance of several PSC on an array of test functions taken from the literature, and to study the potential effect on their effectiveness and efficiency of using multi-response metamodels instead of building independent, individual metamodels for each objective function, as has been done in previous work. Our preliminary results show that no single PSC is superior overall, though the Minimum of Minimum Distances and Expected Improvement criteria outperformed the other PSC in most cases. Results also show that the use of multi-response metamodels improved both the effectiveness and efficiency of the PSC and the quality of the solution at the end of the optimization in 50% to 60% of the test cases.
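
For reference, the single-objective expected-improvement quantity underlying one of the compared PSC can be sketched as below; the surrogate interface returning a mean and standard deviation is an assumption, and the multi-objective and multi-response extensions studied in the paper are not shown.

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        """Expected improvement for a minimization objective at points with surrogate
        mean `mu` and standard deviation `sigma`, given the best evaluated value `f_best`."""
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        with np.errstate(divide="ignore", invalid="ignore"):
            z = (f_best - mu) / sigma
            ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        return np.where(sigma > 0, ei, 0.0)

    # pre-screening sketch: evaluate with the costly model only the candidates with the largest EI
    def prescreen(candidates, surrogate, f_best, n_keep=2):
        mu, sigma = surrogate(candidates)        # hypothetical surrogate returning (mean, std)
        ei = expected_improvement(mu, sigma, f_best)
        return candidates[np.argsort(ei)[::-1][:n_keep]]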

Commentary by Dr. Valentin Fuster
2009;():863-869. doi:10.1115/DETC2009-86511.

Many studies that examine the impact of renewable energy installations on avoided carbon-dioxide utilize national, regional or state averages to determine the predicted carbon-dioxide offset. The approach of this computational study was to implement a dispatching strategy in order to determine precisely which electrical facilities would be avoided due to the installation of renewable energy technologies. This study focused on a single geographic location for renewable technology installation, San Antonio, Texas. The results indicate an important difference between calculating avoided carbon-dioxide when using simple average rates of carbon-dioxide emissions and a dispatching strategy that accounts for the specific electrical plants used to meet electrical demands. The avoided carbon-dioxide due to renewable energy technologies is overestimated when using national, regional and state averages. This occurs because these averages include the carbon-dioxide emission factors of electrical generating assets that are not likely to be displaced by the renewable technology installation. The study also provides a comparison of two specific renewable energy technologies: photovoltaics (PV) and wind turbines. The results suggest that investment in PV is more cost effective for the San Antonio location. While the results are only applicable to this location, the methodology is useful for evaluating renewable technologies at any location.
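
A minimal sketch of a merit-order dispatching calculation of avoided emissions, under the assumption that renewables displace the most expensive dispatched units; the plant fleet and emission factors shown are made up and are not the study's data.

    def dispatch(load_mw, plants):
        """Dispatch plants in merit (cost) order and return the MW served by each plant.
        `plants` is a list of (name, capacity_mw, cost_per_mwh, co2_ton_per_mwh)."""
        served = {}
        remaining = load_mw
        for name, cap, cost, co2 in sorted(plants, key=lambda p: p[2]):
            take = min(cap, max(remaining, 0.0))
            served[name] = take
            remaining -= take
        return served

    def avoided_co2(load_mw, renewable_mw, plants):
        """CO2 avoided = emissions of the generation actually displaced by the renewable output."""
        co2 = {name: c for name, _, _, c in plants}
        base = dispatch(load_mw, plants)
        with_re = dispatch(load_mw - renewable_mw, plants)
        return sum((base[n] - with_re[n]) * co2[n] for n in base)

    # made-up plant fleet: renewables displace the most expensive (here gas-fired) units first
    fleet = [("nuclear", 800, 10, 0.0), ("coal", 600, 25, 1.0), ("gas_peaker", 400, 60, 0.5)]
    print(avoided_co2(load_mw=1500, renewable_mw=200, plants=fleet))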

Commentary by Dr. Valentin Fuster
2009;():871-880. doi:10.1115/DETC2009-86594.

This paper proposes a structural optimization-based method for the design of compliant mechanism scissors in which the proposed design criteria are based on universal design principles. The first design criterion is the distance from the hand-grip to the center of gravity of the scissors, which should be minimized to reduce the physical effort required of the people using the device. The second design criterion is that of failure tolerance, where the effects of traction applied in undesirable directions upon the performance of the compliant mechanism should be minimized. Based on the proposed design criteria, a multiobjective optimization problem for the universal design of a compliant mechanism scissors is formulated. Furthermore, to obtain an optimal configuration, a new type of topology optimization technique using the level set function to represent structural boundaries is employed. This optimization technique enables rapid verification of resulting design configurations since the boundary shapes of the obtained design solution candidates can be easily converted to finite element models which are then used in large deformation analyses. Finally, the proposed design method is applied to design examples. The optimal configurations obtained by the proposed method provide good universal design performance, indicating the effectiveness and usefulness of the proposed method.

Commentary by Dr. Valentin Fuster
2009;():881-889. doi:10.1115/DETC2009-86810.

This paper proposes a new integrated optimization of a functional structure and a component layout to support conceptual design. Conceptual design is the second phase of product development, in which designers build up the functional structure and the component layout of the target design object as the design concept. However, functional design and layout design are very different tasks; there is considerable flexibility in the decisions made during each, and the solution space is vast. It is therefore extremely difficult for designers to build up an optimal design concept by considering the various design requirements themselves. To overcome this limitation, this paper develops a new design method based on optimization techniques. The method consists of two optimizations, a functional optimization and a layout optimization, and obtains optimal solutions by executing the two cooperatively. Specifically, a functional optimization based on a genetic algorithm (GA) is the main part of the proposed method and is executed just once, whereas a layout optimization is executed at each iteration of the functional optimization to calculate the layout with minimum area (or volume) for each design solution, and the result is used as one of its evaluation characteristics. Using the proposed method, designers can simultaneously obtain both the functional structure and the component layout of the target design object that satisfy performance, cost and area requirements at a high level. To demonstrate the flow of the proposed method and confirm its effectiveness, the paper describes a case study in which the internal devices of a personal computer are designed using the proposed method.

Commentary by Dr. Valentin Fuster
2009;():891-898. doi:10.1115/DETC2009-86882.

Mechanical components are cleaned after manufacturing processes like casting and machining using high pressure waterjets that help in both dislodging and removal of contaminants. In order to understand the dynamic relationships that exist in an actual cleaning process, it is essential to visualize the interaction of the water-jet with the part geometry. To aid in this, we have developed a simplified model of the water-mill that simulates the cleaning process on parts represented using standard CAD geometries. Our model of the cleaning process approximates the water-jets in a water-mill as a set of rays originating from the nozzles and evaluates the pressure the water-jet exerts when it hits the surface of the part. This model can be used to understand the effect of kinematic parameters like nozzle diameter and standoff distance on cleaning. Furthermore, this model can be used to optimize these parameters. Any standard optimization technique can be used for this optimization. As proof of concept, we used a Genetic Algorithm (GA) to optimize the process parameters for the simplified cleaning process on a flat plate and a curved surface. Analysis of the results indicates that the obtained solution is theoretically an optimum.

Commentary by Dr. Valentin Fuster
2009;():899-908. doi:10.1115/DETC2009-87558.

Plug-in hybrid electric vehicle (PHEV) technology has the potential to address economic, environmental, and national security concerns in the United States by reducing operating cost, greenhouse gas (GHG) emissions and petroleum consumption. However, the net implications of PHEVs depend critically on the distances they are driven between charges: urban drivers with short commutes who can charge frequently may benefit economically from PHEVs while also reducing fuel consumption and GHG emissions, but drivers who cannot charge frequently are unlikely to make up the cost of large PHEV battery packs with future fuel cost savings. We construct an optimization model to determine the optimal PHEV design and the optimal allocation of PHEVs, hybrid-electric vehicles (HEVs) and conventional vehicles (CVs) to drivers in order to minimize net cost, fuel consumption, and GHG emissions. We use data from the 2001 National Household Transportation Survey to estimate the distribution of distance driven per day across vehicles. We find that (1) minimum fuel consumption is achieved by assigning large-capacity PHEVs to all drivers; (2) minimum cost is achieved by assigning small-capacity PHEVs to all drivers; and (3) minimum greenhouse gas emissions are achieved by assigning medium-capacity PHEVs to drivers who can charge frequently and large-capacity PHEVs to drivers who charge less frequently.
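
A toy version of the allocation logic is sketched below, assigning each driver the vehicle type with the lowest annualized cost for their daily driving distance; all capital costs, ranges and per-kilometre costs are placeholders, not the paper's vehicle model or survey data.

    # Assign each driver the vehicle minimizing annualized cost for their daily distance.
    # All numbers below are illustrative placeholders, not values from the study.
    VEHICLES = {
        # name: (annualized capital $/yr, electric range km, $/km electric, $/km gasoline)
        "CV":     (3000, 0.0, 0.000, 0.090),
        "HEV":    (3600, 0.0, 0.000, 0.065),
        "PHEV20": (4200, 32.0, 0.025, 0.065),
        "PHEV60": (5400, 96.0, 0.025, 0.065),
    }

    def annual_cost(vehicle, daily_km, days=300):
        cap, e_range, c_elec, c_gas = VEHICLES[vehicle]
        elec_km = min(daily_km, e_range)          # charge-depleting kilometres per day
        gas_km = max(daily_km - e_range, 0.0)
        return cap + days * (elec_km * c_elec + gas_km * c_gas)

    def assign(drivers_daily_km):
        return {d: min(VEHICLES, key=lambda v: annual_cost(v, d)) for d in drivers_daily_km}

    print(assign([10, 40, 90, 160]))              # short commutes favour smaller battery packs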

Commentary by Dr. Valentin Fuster
2009;():909-918. doi:10.1115/DETC2009-86263.

One-of-a-kind production (OKP) is a new manufacturing paradigm for producing customized products based on the requirements of individual customers while maintaining the quality and efficiency of mass production. In this research, a customer-centric product modeling scheme is introduced to model OKP product families by incorporating customer information. To develop this modeling scheme, data mining techniques, including a fuzzy pattern clustering method and a hybrid attribute reduction method, are employed to extract knowledge from historical data. Based on this knowledge, the different patterns of OKP products are modeled by different sub-AND-OR trees trimmed from the original AND-OR tree. Since only partial product descriptions in a product family are used to identify the optimal custom product based on customer requirements, the efficiency of the custom product identification process can be improved considerably. A case study identifying the optimal configuration and parameters of window products in an industrial company is used to demonstrate the effectiveness of the introduced approach.

Topics: Modeling
Commentary by Dr. Valentin Fuster
2009;():919-926. doi:10.1115/DETC2009-86422.

Manufacturing firms use product families to provide variety while maintaining economies of scale to improve manufacturing productivity. Designing a successful product family requires consideration of both customer preferences and the competition. This paper presents a design for market systems approach to product family design and solves the problem of designing a product family when the competition is simultaneously designing its product family. In particular, the problem is formulated as a two-player zero-sum game. Our analysis of this problem shows that it can be separated into multiple subproblems whose solution provides an optimal solution to the original problem. The paper presents an example to illustrate the approach.
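
For reference, a finite two-player zero-sum game can be solved for the row player's optimal mixed strategy with a standard linear program, as sketched below; the payoff matrix is illustrative, and the decomposition into subproblems described in the paper is not reproduced.

    import numpy as np
    from scipy.optimize import linprog

    def solve_zero_sum(A):
        """Optimal mixed strategy and game value for the row player of payoff matrix A
        (row player maximizes). Solves: max v  s.t.  A^T x >= v,  sum(x) = 1,  x >= 0."""
        m, n = A.shape
        # decision variables: [x_1, ..., x_m, v]; minimize -v
        c = np.concatenate([np.zeros(m), [-1.0]])
        A_ub = np.hstack([-A.T, np.ones((n, 1))])        # v - (A^T x)_j <= 0 for every column j
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[-1]

    # illustrative 3x3 market-share payoff matrix (rows: our family variants, cols: competitor's)
    A = np.array([[0.2, -0.1, 0.0],
                  [0.0,  0.1, -0.2],
                  [-0.1, 0.0,  0.3]])
    strategy, value = solve_zero_sum(A)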

Topics: Design
Commentary by Dr. Valentin Fuster
2009;():927-939. doi:10.1115/DETC2009-86613.

In this work, a methodology and an integrated tool framework have been developed for the automated design of an industrial robot family consisting of four robot members. For each robot, performance requirements concerning payload, reach, and time performance are specified. A 3D design tool, namely SolidWorks, has been integrated with robot kinematics and dynamics simulation tools for simultaneous kinematic and dynamic design. A motor library comprising both geometric and physical data has also been integrated into the tool framework. The automated design of the robot family is formulated as a multi-objective, mixed-variable design optimization problem: the arm modules are treated as continuous design variables, while the motors are treated as discrete variables. Due to the characteristics of this mixed-variable design optimization problem, a genetic algorithm (GA) has been used. This work successfully demonstrates the feasibility of achieving automatic design of an industrial robot family.

Topics: Robots , Design
Commentary by Dr. Valentin Fuster
2009;():941-950. doi:10.1115/DETC2009-86784.

Innovative companies that generate a variety of products and services to satisfy customers' specific needs are driving increased research on mass-customized products, but the majority of these efforts are still focused on general consumers without disabilities. This research is motivated by the need to provide a basis for universal design guidelines and methods, primarily because of a lack of knowledge on disabilities in product design as well as a lack of methods for designing and evaluating products for everyone. Product family design is a way to achieve cost-effective mass customization by allowing highly differentiated products to be developed from a common platform while targeting products to distinct market segments. By extending concepts from product family design and mass customization to universal design, we propose a method for developing a universal product family that generates economically feasible design concepts and evaluates design feasibility with respect to disabilities within dynamic market environments. Design strategies for a universal product family are modeled as a market economy in which functional module configurations are generated for market segments based on a product platform. A coalitional game is employed to model module sharing situations in dynamic market environments and to decide which functional modules provide more benefit when included in the platform, based on the marginal contribution of each module. To demonstrate implementation of the proposed method, we use a case study involving a family of mobile phones.

Commentary by Dr. Valentin Fuster
2009;():951-964. doi:10.1115/DETC2009-87118.

The design of a product determines the flexibility of that product for future evolutions, which may arise from a variety of change modes such as new market needs or technological change. The energy, material, and information exchanged between components of a product along with the spatial relationships and movement between those components all influence the ability of that product’s design to be evolved to meet the new requirements of a future generation. Previous work has produced a set of guidelines for product flexibility for future evolution that have been shown to improve the ability of a design to be adapted when new needs arise. Although these guidelines are conceptually easy to understand, it is difficult to assess the extent to which a product follows the guidelines. This paper presents a systematic method to analyze the flexibility for future evolution of products based on selected guidelines. The High-Definition Design Structure Matrix is presented as a product representation model which captures sufficient interaction information to highlight potential design improvements based on the aforementioned guidelines. An interaction basis is used to facilitate the consistency and comparison of HD-DSM models created by different examiners and/or for different systems. The selected guidelines are interpreted in terms of the HD-DSM by creating analysis processes that relate to the characteristics described by the guideline. Two similar power screwdrivers are compared for flexibility for future evolution based on a quantitative analysis of their respective HD-DSMs.

Topics: Plasticity , Design
Commentary by Dr. Valentin Fuster
2009;():965-972. doi:10.1115/DETC2009-87122.

In this paper, an innovative method is presented which uses the properties of atomic theory to solve design modularization problems for product design. With the developed method, products can be modularized based upon different given constraints, e.g., material compatibility, part recyclability, and part disassemblability. The developed method can help engineers effectively create modular designs in the initial design stage, based upon different design requirements. With design considerations incorporated into new modules, a new design can be created which improves upon an original design, with respect to design requirements.

Topics: Design
Commentary by Dr. Valentin Fuster
2009;():973-985. doi:10.1115/DETC2009-87304.

The current marketplace is highly competitive and frequently changing; to survive, companies need to respond quickly to customers' requirements. This challenging situation demands a robust platform design and development process to produce a variety of products in the shortest possible time. The components common to a set of similar products in a family can be grouped into a common platform. Development of a product platform requires measuring the similarity among a set of products. This paper presents an approach for measuring the similarity among a set of CAD models of products in order to develop a common product platform. The measured similarity of geometries allows designers to identify components that have the potential to be included in the common platform. The degree of similarity is determined by extracting information from a set of CAD models and developing a suitable commonality index. The commonality index values are then used to determine the common platform for a set of assembled products by developing and calculating an Average Assembly Platform index value. The overall approach is demonstrated with two case studies: cell phone casing models and vacuum cleaner models.

Commentary by Dr. Valentin Fuster
2009;():987-996. doi:10.1115/DETC2009-87486.

Product portfolios need to present the widest coverage of user requirements with minimal product diversity. User requirements may vary along multiple performance measures, comprising the objective space, whereas the design variables constitute the design space, which is usually far higher in dimensionality. Here we consider the set of possible performances of interest to the user and use multi-objective optimization to identify the non-dominated set, or Pareto front. The designs lying along this front are mapped to the design space; we show that these "good designs" are often restricted to a much lower-dimensional manifold, resulting in significant conceptual and computational efficiency. These non-dominated designs are then clustered in the design space in an unsupervised manner to obtain candidate product groupings, which the designer may inspect to arrive at portfolio decisions. With the help of dimensionality reduction techniques, we show how these clusters lie on low-dimensional manifolds embedded in the high-dimensional design space. We demonstrate this process on two different designs (springs and electric motors), involving both continuous and discrete design variables.
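
A minimal sketch of the pipeline, assuming a simple non-dominated filter and k-means clustering in place of the paper's specific multi-objective optimizer and manifold-learning steps; the sampled designs and toy objectives are placeholders.

    import numpy as np
    from sklearn.cluster import KMeans

    def non_dominated(F):
        """Boolean mask of non-dominated rows of objective matrix F (all objectives minimized)."""
        keep = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[i] = not dominates_i.any()
        return keep

    # X: sampled design variables, F: corresponding objective values (e.g. from simulation)
    rng = np.random.default_rng(0)
    X = rng.random((500, 6))                        # 6-D design space
    F = np.column_stack([X[:, 0] + X[:, 1] ** 2,    # toy objectives depending on few variables
                         (1 - X[:, 0]) ** 2 + X[:, 2]])
    mask = non_dominated(F)
    good_designs = X[mask]                          # "good designs" mapped back to the design space
    k = min(3, len(good_designs))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(good_designs)
    # each cluster is a candidate product grouping for the portfolio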

Topics: Space , Manifolds
Commentary by Dr. Valentin Fuster
2009;():997-1008. doi:10.1115/DETC2009-87517.

In distributed design, individual designers have local control over design variables and seek to minimize their own individual objectives. The amount of time required to reach equilibrium solutions in decentralized design can vary based on the design process architecture chosen. There are two primary design process architectures, sequential and parallel, and a number of possible combinations of the two. In this paper, a game theoretic approach is developed to determine the time required for a parallel and a sequential architecture to converge to a solution for a two-designer case. The equations derived give the time required to converge to a solution in closed form, without any objective function evaluations. This result is validated by analyzing a distributed design case study, in which the equations accurately predict the convergence time for the sequential and parallel architectures. A second validation is performed by analyzing a large number of randomly generated two-designer systems. The approach successfully predicts convergence within 3 iterations for nearly 98% of the systems analyzed. The remaining 2% highlight one of the approach's weaknesses: it is susceptible to numerically ill-conditioned problems. Understanding the rate at which distributed design problems converge is of key importance when determining design architectures. This work begins the investigation with a two-designer case and lays the groundwork for expansion to larger design systems with multiple design variables.
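
A generic illustration consistent with the idea (not the paper's closed-form derivation): for two designers with quadratic objectives the best responses are linear maps, so convergence and an iteration-count estimate follow from the spectral radius of the parallel (simultaneous) or sequential iteration matrix.

    import numpy as np

    def iterations_to_converge(B1, B2, tol=1e-3):
        """Two designers with linear best responses x1 = a1 + B1 x2 and x2 = a2 + B2 x1.
        Estimate iterations until the error shrinks by `tol` for both process architectures."""
        B1, B2 = np.atleast_2d(B1), np.atleast_2d(B2)
        n1, n2 = B1.shape[0], B2.shape[0]
        # parallel: both designers update simultaneously from the previous iterate
        M_par = np.block([[np.zeros((n1, n1)), B1],
                          [B2, np.zeros((n2, n2))]])
        # sequential: designer 2 responds to designer 1's fresh update
        M_seq = B2 @ B1
        out = {}
        for name, M in (("parallel", M_par), ("sequential", M_seq)):
            rho = max(abs(np.linalg.eigvals(M)))
            out[name] = np.inf if rho >= 1 else int(np.ceil(np.log(tol) / np.log(rho)))
        return out

    # scalar example: coupled but stable best responses
    print(iterations_to_converge(B1=[[0.6]], B2=[[0.5]], tol=1e-3))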

Topics: Design
Commentary by Dr. Valentin Fuster
2009;():1009-1018. doi:10.1115/DETC2009-86673.

One of the critical situations facing societies across the globe is the problem of elderly homecare services (EHS), driven by the aging of the population coupled with disease and limited social resources. This problem has typically been dealt with through manual assistance from caregivers and/or family members. The emerging Ambient Intelligence (AmI) technology shows great potential for EHS applications, owing to its strength in constructing a pervasive computing environment that is sensitive and responsive to the presence of human users. The key challenge of AmI implementation lies in context awareness, namely how to align with the specific decision-making scenarios of particular EHS applications. This paper proposes a context-aware information model in a smart home to tackle the EHS problem. Rough set theory is applied to construct user activity models for recognizing various activities of daily living (ADLs) based on a sensor platform constructed in a smart home environment. Subsequently, issues of case comprehension and homecare services are also discussed, and a case study in the smart home environment is presented. Initial findings from the case study suggest the importance of the research problem, as well as the feasibility and potential of the proposed framework.

Commentary by Dr. Valentin Fuster
2009;():1019-1027. doi:10.1115/DETC2009-87280.

Products are often paired with additional services to satisfy customers’ needs, differentiate product offerings, and remain competitive in today’s market. This research is motivated by the need to provide guidelines and methods to support the design of such services, addressing the lack of knowledge on customized service design as well as the lack of methods for designing and evaluating services for mass customization. We extend concepts from module-based product family design to create a method for designing families of services. In particular, we introduce a strategic platform design method for developing customized families of services using game theory to model situations involving dynamic market environments. A module-based service model is proposed to facilitate customized service design and represent the relationships between functions and processes that constitute a service offering. The module selection problem for platform design is treated as a strategic module sharing problem under collaboration, and we use a coalitional game to model module sharing and to decide which modules provide more benefit when included in the platform, based on the marginal contribution of each module. To demonstrate implementation of the proposed method, we use a case study involving a family of banking services.
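
A hedged sketch of scoring modules by marginal contribution in a coalitional game (here computed as Shapley values by averaging marginal contributions over orderings). The characteristic function `v()`, the module names, and the numeric values below are made-up placeholders; the paper's valuation of banking-service modules is not reproduced.

```python
# Shapley-value computation: each module's average marginal contribution to the
# coalition value, over all orderings in which modules could join the platform.
from itertools import permutations

modules = ["authentication", "account_query", "transfer", "notification"]

def v(coalition):
    """Hypothetical platform benefit of sharing a set of modules."""
    base = {"authentication": 4.0, "account_query": 3.0, "transfer": 2.0, "notification": 1.0}
    synergy = 1.5 if {"authentication", "transfer"} <= set(coalition) else 0.0
    return sum(base[m] for m in coalition) + synergy

def shapley_values(modules, v):
    phi = {m: 0.0 for m in modules}
    orders = list(permutations(modules))
    for order in orders:
        seen = []
        for m in order:
            phi[m] += v(seen + [m]) - v(seen)   # marginal contribution in this ordering
            seen.append(m)
    return {m: phi[m] / len(orders) for m in modules}

for m, val in sorted(shapley_values(modules, v).items(), key=lambda kv: -kv[1]):
    print(f"{m:15s} {val:.2f}")
```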

Commentary by Dr. Valentin Fuster
2009;():1029-1036. doi:10.1115/DETC2009-87444.

One goal of Designing for Human Variability (DfHV) is to optimize the interaction between user and device. Often, this interaction is dictated by the spatial dimensions or shape of the artifacts with which people interact. A novel approach that applies DfHV principles including virtual fitting trials to optimize the shape of an artifact is presented and applied to the design of a tool handle. By breaking the problem apart into discrete blocks, called the hand model and tool model, application of standard optimization techniques is facilitated. The benefits of the approach include the ability to consider handles with variable cross-sections and to systematically consider the effects of multiple sizes. The methodology presented here is configurable for any given population and may be applied to other DfHV design problems.

Topics: Optimization , Shapes
Commentary by Dr. Valentin Fuster
2009;():1037-1045. doi:10.1115/DETC2009-87698.

Service design has been widely discussed in the engineering field in recent years. Many manufacturers have been focusing more on services than on the products themselves. To ensure the feasibility of designed services, a service designer should consider not only customer values but also the requirements of the service provider. However, there are few standard methods to deal with a service provider’s requirements and to reflect them in the service design. In this research, the authors suggest a method to describe a service provider’s requirements for service design based on the Service Engineering methodology. In addition, the authors propose a design process to analyze service providers’ requirements and adjust the specifications of a designed service in order to fulfill the requirements of both service providers and service receivers simultaneously.

Commentary by Dr. Valentin Fuster
2009;():1047-1056. doi:10.1115/DETC2009-86183.

In this work we extend a filter-based sequential quadratic programming (SQP) algorithm to solve reliability-based design optimization (RBDO) problems with highly nonlinear constraints. This filter-based SQP uses the approach of average importance sampling (AAIS) to calculate the values and gradients of probabilistic constraints. AAIS allocates samples at the limit-state boundaries such that relatively few samples are required to calculate constraint probability values with high accuracy and low variance. The accuracy of the probabilistic constraint gradients obtained with AAIS is improved by a sample filter that eliminates sample outliers with low probability of occurrence and high gradient values. To ensure convergence, the algorithm replaces the penalty function with an iteration filter, avoiding the ill-conditioning problems of penalty parameters when accepting a design update. A sample-reuse mechanism is introduced to improve the efficiency of the algorithm by avoiding redundant samples. The ‘unsampled’ region, i.e., the region not covered by previous samples, is identified by the iteration step lengths, the trust region, and the constraint reliability levels. As a result, this filter-based sampling SQP can efficiently handle highly nonlinear probabilistic constraints with multiple most probable points or with functions that lack analytical forms. Several examples are demonstrated and compared with FORM/SORM and Monte Carlo simulation. Results show that by integrating the modified AAIS with the filter-based SQP, the overall computational cost of solving RBDO problems can be significantly reduced.
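
A simplified sketch of estimating a probabilistic constraint value by importance sampling: samples are drawn from a density shifted toward the limit state and reweighted by the likelihood ratio. The linear limit-state function, the sampling density centered near [1.5, 1.5], and the sample size are assumptions for illustration; AAIS as described above additionally adapts the sample locations to the limit-state boundary, which is not shown here.

```python
# Importance-sampling estimate of a failure probability, compared with crude
# Monte Carlo and the exact value for this simple linear limit state.
import numpy as np
from scipy import stats

g = lambda x: 3.0 - x[:, 0] - x[:, 1]            # failure when g(x) < 0, x ~ N(0, I)
rng = np.random.default_rng(1)
n = 20_000

# Crude Monte Carlo
x_mc = rng.standard_normal((n, 2))
pf_mc = np.mean(g(x_mc) < 0)

# Importance sampling centered near the most probable failure point (~[1.5, 1.5])
mu_is = np.array([1.5, 1.5])
x_is = rng.standard_normal((n, 2)) + mu_is
w = np.exp(stats.multivariate_normal(mean=[0, 0]).logpdf(x_is)
           - stats.multivariate_normal(mean=mu_is).logpdf(x_is))
pf_is = np.mean((g(x_is) < 0) * w)

exact = 1 - stats.norm.cdf(3 / np.sqrt(2))
print(f"crude MC: {pf_mc:.4e}   importance sampling: {pf_is:.4e}   exact: {exact:.4e}")
```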

Topics: Optimization , Filters
Commentary by Dr. Valentin Fuster
2009;():1057-1066. doi:10.1115/DETC2009-86190.

An important part of the efficient and robust design of turbine blades is capturing the details of any manufacturing uncertainty. However, the data available detailing the manufacturing uncertainty inevitably contains variability due to inherent errors in any measurement process. The presented work proposes a methodology that employs existing probabilistic data analysis techniques, namely Principal Component Analysis (PCA), Multivariate Analysis of Variance (MANOVA) and Fast Fourier Transform (FFT) analysis, to separate the measurement error from the measurement data and obtain the underlying manufacturing uncertainty. This manufacturing uncertainty is further segregated into the variation of the manufacturing uncertainty with time and the blade-to-blade manufacturing error. A dimensionality reduction method is employed which utilizes prior information on the variance of the measurement error at each measurement location. The application of the proposed methodology leads to the reconstruction of new datasets that may be used for generating 3D models of the manufactured blade shapes. These 3D models may then be used for Finite Element Analysis (FEA) in standard FEA tools.

Commentary by Dr. Valentin Fuster
2009;():1067-1079. doi:10.1115/DETC2009-86236.

Velocity and acceleration analysis is an important tool for predicting the motion of mechanisms. The results, however, may be inaccurate when applied to manufactured products, due to the process variations which occur in production. Small changes in dimensions can accumulate and propagate in an assembly, which may cause significant variation in critical kinematic performance parameters. A new statistical analysis tool is presented for predicting the effects of variation on mechanism kinematic performance. It is based on the Direct Linearization Method developed for static assemblies. The solution is closed form and may be applied to 2-D, open or closed, multi-loop mechanisms employing common kinematic joints. It is also shown how form, orientation, and position variations may be included to analyze the variations that occur in kinematic joints. Closed-form solutions eliminate the need to generate a large set of random assemblies and analyze them one by one to determine the expected range of critical variables. Only two assemblies are analyzed to characterize the entire population. The first determines the performance of the mean, or average, assembly, and the second estimates the range of variation about the mean. The system is computationally efficient and well suited for design iteration.
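
A minimal sketch of the closed-form idea: linearize a kinematic output about the mean dimensions and propagate the covariance as J Σ Jᵀ, then compare against a brute-force Monte Carlo over random "assemblies". The slider-crank position function, the nominal dimensions, and the tolerances below are illustrative stand-ins for the paper's general multi-loop formulation.

```python
# First-order (linearized) variance propagation for a kinematic output, checked
# against Monte Carlo sampling of the dimensional variations.
import numpy as np

def slider_position(r, L, theta=np.deg2rad(40.0)):
    return r * np.cos(theta) + np.sqrt(L**2 - (r * np.sin(theta))**2)

mean = np.array([50.0, 150.0])                 # crank r, coupler L (mm)
sigma = np.array([0.05, 0.08])                 # independent dimensional std devs (mm)
cov = np.diag(sigma**2)

# Jacobian by central finite differences at the mean (the single "average assembly")
h = 1e-5
J = np.array([(slider_position(*(mean + h * e)) - slider_position(*(mean - h * e))) / (2 * h)
              for e in np.eye(2)])

var_linear = J @ cov @ J                        # closed-form, DLM-style variance
samples = mean[:, None] + sigma[:, None] * np.random.default_rng(2).standard_normal((2, 100_000))
mc = slider_position(*samples)
print(f"linearized std: {np.sqrt(var_linear):.4f} mm   Monte Carlo std: {mc.std():.4f} mm")
```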

Topics: Mechanisms
Commentary by Dr. Valentin Fuster
2009;():1081-1091. doi:10.1115/DETC2009-86454.

In this work the robustness of residual stresses in finite element simulations with respect to deviations in the mechanical parameters of castings is evaluated. Young’s modulus, the thermal expansion coefficient and the hardening are the studied parameters. A 2D finite element model of a stress lattice is used. The robustness is evaluated by comparing purely finite element based Monte Carlo simulations with Monte Carlo simulations based on linear and quadratic response surfaces. Young’s modulus, the thermal expansion coefficient and the hardening are assumed to be normally distributed with a standard deviation that is 10% of their nominal value at different temperatures. An improved process window is also suggested to show the robustness graphically. Using this window, it is concluded that the least robustness is obtained for high hardening values in combination with deviations in Young’s modulus and the thermal expansion coefficient. It is also concluded that quadratic response surface based Monte Carlo simulations substitute for finite element based Monte Carlo simulations satisfactorily. Furthermore, the standard deviations of the responses are evaluated analytically using the Gauss formula and are compared to results from Monte Carlo simulations. The analytical solutions are accurate as long as the Gauss formula is not applied close to a stationary point.
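
A sketch of the comparison made above: a quadratic response surface stands in for the finite element model, the Gauss (first-order error propagation) formula gives an analytical standard deviation, and Monte Carlo on the same surface is the reference. The surface coefficients and input standard deviations are illustrative only; the example also reproduces the stated caveat that the Gauss formula degrades near a stationary point (where the gradient vanishes).

```python
# Gauss-formula standard deviation vs. Monte Carlo on a quadratic response surface.
import numpy as np

rng = np.random.default_rng(3)

def gauss_vs_mc(response, mean, std, n=50_000):
    h = 1e-6 * np.maximum(np.abs(mean), 1.0)
    grad = np.array([(response(mean + h[i] * e) - response(mean - h[i] * e)) / (2 * h[i])
                     for i, e in enumerate(np.eye(len(mean)))])
    std_gauss = np.sqrt(np.sum((grad * std) ** 2))          # Gauss formula, independent inputs
    samples = mean + std * rng.standard_normal((n, len(mean)))
    std_mc = response(samples.T).std()
    return std_gauss, std_mc

# Quadratic "response surface"; its stationary point is at (1, 1)
quad = lambda x: 100.0 + 3.0 * (x[0] - 1) ** 2 + 2.0 * (x[1] - 1) ** 2 + (x[0] - 1) * (x[1] - 1)

print("away from the stationary point:", gauss_vs_mc(quad, np.array([1.8, 0.4]), np.array([0.1, 0.1])))
print("at the stationary point:       ", gauss_vs_mc(quad, np.array([1.0, 1.0]), np.array([0.1, 0.1])))
```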

Commentary by Dr. Valentin Fuster
2009;():1093-1104. doi:10.1115/DETC2009-86473.

In the engineering design community, decision-making methodologies for selecting the “best” design from among feasible designs are among the most critical parts of the design process. As design models become increasingly realistic, the decision-making methodology becomes increasingly complex; with realistic design models, more and more decisions must be made in uncertain environments without resorting to unrealistic assumptions. A decision maker is usually forced to work with uncertainties for which some stochastic information is known (aleatory) or no information is known (epistemic). In this paper, we discuss both forms of uncertainty and their modeling methodologies. We also define risk as a random function of these uncertainties and propose a risk quantification technique. Existing methods to handle aleatory uncertainties are discussed and an alternative search-based decision-making methodology is proposed to handle epistemic uncertainties. We illustrate our decision-making methodology using the side-impact crashworthiness problem presented by Gu et al. [1]. In addition to the aleatory uncertainties considered by these researchers, we model a couple of non-design variables as epistemic uncertainties in our decision problem. The lack of information about these epistemic uncertainties increases the complexity of the side-impact crashworthiness problem significantly. However, the proposed methodology helps to identify a design that is robust with respect to epistemic uncertainty.

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2009;():1105-1119. doi:10.1115/DETC2009-86587.

Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. As time progresses, the product may fail due to time-dependent phenomena such as time-varying operating conditions, component degradation, etc. The degradation of reliability with time may increase the lifecycle cost due to potential warranty costs, repairs and loss of market share. In design for lifecycle cost, we must account for product quality and time-dependent reliability. Quality is a measure of our confidence that the product conforms to specifications as it leaves the factory. Reliability depends on 1) the probability that the system will perform its intended function successfully for a specified interval of time (no hard failure), and 2) the probability that the system response will not exceed a threshold deemed objectionable by the customer or operator for a certain time period (no soft failure). Quality is time-independent, and reliability is time-dependent. This article presents a design methodology to determine the optimal design of time-dependent, multi-response systems by minimizing the cost during the life of the product. The conformance of multiple responses is treated in a series-system fashion. The lifecycle cost includes a production cost, an inspection cost, and an expected variable cost. All costs depend on quality and/or reliability. The key to our approach is the calculation of the so-called system cumulative distribution function (the time-dependent probability of failure). For that we use an equivalent time-invariant “composite” limit state which is accurate for systems that are monotonic or non-monotonic in time. Examples highlight the calculation of the cumulative distribution function and the design methodology for lifecycle cost.
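
A hedged sketch of a time-dependent probability of failure computed through a composite (worst-over-time) limit state: a sample fails by time t if its response margin has dropped below zero at any instant up to t. The linear degradation model, the load distribution, and the time horizon below are illustrative assumptions, not the paper's system model or its equivalent time-invariant formulation.

```python
# Monte Carlo estimate of the system cumulative distribution function
# (time-dependent probability of failure) via a worst-over-time margin.
import numpy as np

rng = np.random.default_rng(4)
n, t = 100_000, np.linspace(0.0, 10.0, 101)           # samples, years of service

strength0 = rng.normal(100.0, 8.0, size=n)            # initial strength
degrade = rng.normal(2.0, 0.5, size=n)                # strength loss per year
load = rng.normal(70.0, 5.0, size=n)                  # (time-invariant) demand

# Response margin g(t) = strength(t) - load; soft failure when g(t) < 0
g = strength0[:, None] - degrade[:, None] * t[None, :] - load[:, None]

# Composite limit state: worst margin experienced up to each time t
g_composite = np.minimum.accumulate(g, axis=1)
cdf_failure = (g_composite < 0).mean(axis=0)           # time-dependent P_f

for year in (0, 2, 5, 10):
    print(f"P_f by year {year:2d}: {cdf_failure[t.searchsorted(year)]:.4f}")
```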

Topics: Reliability , Design
Commentary by Dr. Valentin Fuster
2009;():1121-1136. doi:10.1115/DETC2009-86701.

To obtain a correct reliability-based optimum design, the input model needs to be accurately estimated, both in identifying the marginal and joint distribution types and in quantifying their parameters. However, in most industrial applications, only limited data on the input variables is available due to expensive experimental testing costs. An input model generated from insufficient data might be inaccurate, which will lead to an incorrect optimum design. In this paper, reliability-based design optimization (RBDO) with a confidence level is proposed to offset the inaccurate estimation of the input model due to limited data by using the upper bound of the confidence interval of the standard deviation. Using this upper bound, the confidence level of the input model can be assessed to obtain the confidence level of the output performance, i.e., a desired probability of failure, through simulation-based design. For RBDO, the estimated input model with the associated confidence level is integrated with the most probable point (MPP)-based dimension reduction method (DRM), which improves accuracy over the first-order reliability method (FORM). A mathematical example and a fatigue problem are used to illustrate how the input model with a confidence level yields a reliable optimum design, by comparing it with the input model obtained using the estimated parameters.
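
A minimal sketch of the "upper bound of the confidence interval of the standard deviation" idea: with a small set of (assumed normal) samples, the chi-square distribution gives a one-sided upper bound on sigma, which can then be used in place of the point estimate so that the input model errs on the conservative side. The sample size, confidence level, and synthetic data are assumptions for illustration.

```python
# One-sided upper confidence bound on a standard deviation from limited data,
# using (n-1) s^2 / sigma^2 ~ chi-square with (n-1) degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.normal(loc=10.0, scale=2.0, size=15)        # only 15 tests available

n = len(data)
s = data.std(ddof=1)                                   # sample standard deviation
confidence = 0.95
sigma_upper = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - confidence, df=n - 1))

print(f"sample std: {s:.3f}   {confidence:.0%} upper bound on std: {sigma_upper:.3f}")
```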

Commentary by Dr. Valentin Fuster
2009;():1137-1148. doi:10.1115/DETC2009-86703.

Due to expensive experimental testing costs, in most industrial engineering applications only limited statistical information is available to describe the input uncertainty model. It would be unreliable to use an estimated input uncertainty model, such as distribution types and parameters including the standard deviations of the distributions, obtained from insufficient data for design optimization. Furthermore, when input variables are correlated, a non-optimum design would be obtained if the assumption of independence were used for the design optimization. In this paper, two methods for problems lacking input statistical information, possibility-based design optimization (PBDO) and reliability-based design optimization (RBDO) with a confidence level on the input model, are compared using a mathematical example and the roadarm of an M1A1 Abrams tank. The comparison study shows that PBDO can provide an unreliable optimum design when the number of samples is very small and that it provides an optimum design that is too conservative when the number of samples is relatively large. Furthermore, its optimum design does not converge to the optimum design obtained using the true input distribution as the number of samples increases. On the other hand, RBDO with a confidence level on the input model provides a conservative and reliable optimum design in a stable manner, and the optimum design converges to the optimum design obtained using the true input distribution as the number of samples increases.

Commentary by Dr. Valentin Fuster
2009;():1149-1159. doi:10.1115/DETC2009-86704.

A simulation-based, system reliability-based design optimization (RBDO) method is presented which can handle problems with multiple failure regions. The method uses a Probabilistic Re-Analysis (PRRA) approach in conjunction with a trust-region optimization approach. PRRA very efficiently calculates the system reliability of a design by performing a single Monte Carlo (MC) simulation. Although PRRA is based on MC simulation, it calculates “smooth” sensitivity derivatives, therefore allowing the use of a gradient-based optimizer. The PRRA method is based on importance sampling. It provides accurate results if the support (the set of all values for which a function is nonzero) of the sampling PDF contains the support of the joint PDF of the input random variables, and if the mass of the input joint PDF is not concentrated in a region where the sampling PDF is almost zero. A sequential, trust-region optimization approach satisfies these two requirements. The potential of the proposed method is demonstrated using the design of a vibration absorber and the system RBDO of an internal combustion engine.

Topics: Simulation
Commentary by Dr. Valentin Fuster
2009;():1161-1169. doi:10.1115/DETC2009-86748.

A radial-contour mode disk resonator has advantages, such as lower energy loss and less airflow damping, over existing counterparts such as surface acoustic wave (SAW) resonators and quartz crystal microbalance (QCM) sensors. Taking advantage of these properties of disk resonators, we design a biological mass sensor in this paper. One of the important challenges in the design of biological mass sensors is the inherent uncertainty of MEMS fabrication processes, which may strongly affect the disk resonator performance. Parameters with the main effects on the sensor performance (i.e., the mass sensitivity, S_m) are identified among many inputs using a response surface method screening process. The shape of the circular disk deviates from the desired perfect circle due to fabrication uncertainty, and the degree of deviation from perfect circularity significantly affects the disk frequency. In addition, because of the presence of electrodes on the sides, the disk rotation angle must be considered as a parameter that can affect the frequency. In this work, the disk resonator is designed to perform robustly with respect to the geometric parameter variations. A series of simulation models is developed to obtain the natural frequency and mass sensitivity, because analytical solutions cannot predict the resonant frequency variation originating from such geometric variations. A non-deterministic metamodeling technique is introduced to replace the time-consuming simulation models and is used for the efficient local sensitivity analysis, which is the main challenge of simulation-based robust design. The design problem is to find the mean disk diameter between 800 μm and 1400 μm that achieves a robust maximum S_m. A mathematical construct, the Error Margin Index (EMI), which combines the performance mean and deviation, is employed in the solution search algorithm to find a robust optimum design. Our design solution is a mean disk diameter of 1280 μm. The difference in mean mass sensitivity between the traditional optimum design and our robust design is about 0.7 μm²/ng. The standard deviation of mass sensitivity at the traditional optimal design is high (0.68 μm²/ng), while that of our design is low (0.39 μm²/ng).

Commentary by Dr. Valentin Fuster
2009;():1171-1181. doi:10.1115/DETC2009-87084.

Traditional RBDO requires sensitivities both for the most probable point (MPP) search in inverse reliability analysis and for design optimization. However, sensitivities are often unavailable or difficult to compute in complex multi-physics or multidisciplinary engineering applications. Hence, the response surface method (RSM) is often used to provide both function evaluations and sensitivities effectively. Researchers have been developing RSMs for decades and are still searching for an approach with an efficient sampling method that converges quickly while meeting accuracy criteria. This paper proposes a new adaptive sequential sampling method to be integrated with the Kriging method for RBDO. Using the bandwidth of the prediction interval from the Kriging method, a new sampling strategy and a new local accuracy criterion for the response surface are proposed. In this sequential sampling method, the response surface is initiated using very few samples. An additional sampling point is then determined by finding the point that has the largest absolute ratio between the bandwidth of the prediction interval and the predicted response within a neighborhood of the current point of interest. Additional samples are inserted until the accuracy criterion of the response surface in the neighborhood of the current point of interest is achieved. Case studies show that the proposed adaptive sequential sampling technique yields better results in terms of convergence speed than other sampling methods, such as Latin hypercube sampling and grid sampling, when the same sample size is used. Both a highly nonlinear mathematical example and a vehicle durability engineering example show that the proposed RSM yields accurate RBDO results that are comparable to sensitivity-based RBDO results, with significant savings in computational time for function evaluation and sensitivity computation.
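
A sketch of the adaptive sampling rule described above, using scikit-learn's `GaussianProcessRegressor` as a stand-in Kriging model on an illustrative 1-D function: each new sample is placed where the ratio of prediction-interval bandwidth to predicted response is largest in a neighborhood of the current point of interest, until a local accuracy threshold is met. The test function, the point of interest, the neighborhood size, and the 2% threshold are all assumptions for illustration.

```python
# Adaptive sequential sampling driven by the prediction-interval bandwidth of a
# Gaussian process surrogate (a stand-in for the Kriging model).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3.0 * x) + 0.5 * x               # "expensive" response
x_poi = 1.2                                            # current point of interest
X_train = np.array([[0.0], [1.0], [2.5]])              # start from very few samples
y_train = f(X_train).ravel()

for it in range(6):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X_train, y_train)
    X_cand = np.linspace(x_poi - 0.75, x_poi + 0.75, 200).reshape(-1, 1)   # neighborhood of the POI
    mean, std = gp.predict(X_cand, return_std=True)
    ratio = (2.0 * 1.96 * std) / np.maximum(np.abs(mean), 1e-6)            # CI bandwidth / prediction
    if ratio.max() < 0.02:                                                 # local accuracy criterion met
        break
    x_new = X_cand[np.argmax(ratio)]
    X_train = np.vstack([X_train, x_new])
    y_train = np.append(y_train, f(x_new[0]))

print(f"samples used: {len(X_train)}, worst local ratio: {ratio.max():.4f}")
```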

Commentary by Dr. Valentin Fuster
2009;():1183-1191. doi:10.1115/DETC2009-87268.

Reliability-Based Design Optimization (RBDO) is an effective method for handling optimization problems constrained by reliability performance. In spite of its great benefits, one of the most challenging issues in implementing RBDO is the very intensive computational demand of Reliability Analysis (RA), and an accurate and efficient RA method is indispensable for applying RBDO to practical engineering design problems. Among various RA methods, the enhanced Dimension Reduction (eDR) method is one of the most popular owing to its high computational efficiency. It is desirable to obtain an accurate and efficient RA result using the minimum number of sampling points, but this minimum is difficult to determine because it depends on the nonlinearity of the constraint being approximated and on the degree of uncertainty of each design factor. In this research, an eDR method with a variable number of sampling points is proposed to resolve these difficulties. The main idea of the suggested method is to employ a different number of axial sampling points for each random design factor, according to the nonlinearity of the constraint and the degree of uncertainty of that factor. For each random variable, the method begins with three points and then, based on the criteria proposed in this study, either stops or increases the number of axial sampling points; when points are added, they are added one at a time up to a maximum of five. As shown in the results, the efficiency of the eDR method with variable sampling points is superior to that with fixed sampling points, without sacrificing any accuracy. Through three representative RA problems, it is verified that the proposed RA method generates results 26.5% more efficiently on average than the conventional eDR method with fixed sampling points. Furthermore, the Performance Measure Approach (PMA) was used to evaluate the performance of RBDO using the new RA method. For comparison, three mathematical and one engineering RBDO problems were solved by both the eDR method with variable sampling points and the conventional one with fixed sampling points. The comparison results clearly demonstrate that RBDO using the suggested RA method is superior to the conventional one in terms of accuracy and efficiency.

Commentary by Dr. Valentin Fuster
2009;():1193-1203. doi:10.1115/DETC2009-87312.

Many real-world engineering design optimization problems are multi-objective and have uncertainty in their parameters. For such problems it is useful to obtain design solutions that are both multi-objectively optimum and robust. A robust design is one whose objective and constraint function variations under uncertainty are within an acceptable range. While the literature reports on many techniques in robust optimization for single objective optimization problems, very few papers report on methods in robust optimization for multi-objective optimization problems. The Multi-Objective Robust Optimization (MORO) technique with interval uncertainty proposed in this paper is a significant improvement, with respect to computational effort, over a previously reported MORO technique. In the proposed technique, a master problem solves a relaxed optimization problem whose feasible domain is iteratively confined by constraint cuts determined by the solutions from a sub-problem. The proposed approach and the synergy between the master problem and sub-problem are demonstrated by three examples. The results obtained show a general agreement between the solutions from the proposed MORO and the previous MORO technique. Moreover, the number of function calls for obtaining solutions from the proposed technique is an order of magnitude less than that from the previous MORO technique.

Commentary by Dr. Valentin Fuster
2009;():1205-1215. doi:10.1115/DETC2009-87430.

Although a variety of uncertainty propagation methods exist for estimating the statistical moments and the probability of failure in design under uncertainty, current methods are limited in providing accurate and efficient solutions to high-dimensional problems with interactions of random variables. A new sparse grid based uncertainty propagation method is proposed in this work to overcome this difficulty. The existing sparse grid technique, originally invented for numerical integration and interpolation, is extended to uncertainty propagation in the probabilistic domain. In particular, the concept of Sparse Grid Numerical Integration (SGNI) is extended for estimating the first two moments of performance in robust design, while Sparse Grid Interpolation (SGI) is employed to determine the failure probability by interpolating the limit-state function at the Most Probable Point (MPP) in reliability analysis. The proposed methods are demonstrated on several high-dimensional mathematical examples with notable variate interactions and on a complex multidisciplinary rocket design problem. Results show that the sparse grid methods perform better than popular counterparts. Furthermore, the automatic sampling, special interpolation process, and dimension-adaptivity feature make SGI more flexible and efficient than uniform-sample-based metamodeling techniques.
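
A hedged illustration of quadrature-based moment estimation: a full tensor-product Gauss-Hermite grid computes the first two moments of a response of two standard normal variables, checked against Monte Carlo. The response function is made up for illustration, and the tensor grid shown here is only the baseline; the paper's SGNI replaces it with a sparse (Smolyak-type) grid so the point count stays manageable in high dimensions, and that construction is not reproduced.

```python
# Mean and standard deviation of a response of two standard normals via a
# tensor-product Gauss-Hermite rule, compared with Monte Carlo.
import numpy as np

g = lambda x1, x2: np.exp(0.3 * x1) + x1 * x2 + 0.5 * x2**2     # response of interest

# Probabilists' Gauss-Hermite rule: integrates f(x) * exp(-x^2/2); weights sum to sqrt(2*pi)
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
weights = weights / np.sqrt(2.0 * np.pi)                        # normalize to the N(0,1) density

X1, X2 = np.meshgrid(nodes, nodes)                              # tensor-product grid (25 points)
W = np.outer(weights, weights)
G = g(X1, X2)
mean = np.sum(W * G)
var = np.sum(W * G**2) - mean**2

mc = g(*np.random.default_rng(6).standard_normal((2, 500_000)))
print(f"quadrature:  mean={mean:.4f}, std={np.sqrt(var):.4f}")
print(f"Monte Carlo: mean={mc.mean():.4f}, std={mc.std():.4f}")
```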

Topics: Uncertainty
Commentary by Dr. Valentin Fuster
2009;():1217-1227. doi:10.1115/DETC2009-87434.

Statistical sensitivity analysis (SSA) is an effective methodology to examine the impact of variations in model inputs on the variations in model outputs at either a prior or posterior design stage. A hierarchical statistical sensitivity analysis (HSSA) method has been proposed in the literature to incorporate SSA in designing complex engineering systems with a hierarchical structure. However, the original HSSA method only deals with hierarchical systems with independent subsystems. Due to the existence of shared variables at lower levels, responses from lower level submodels that act as inputs to a higher level subsystem are both functionally and statistically dependent. For designing engineering systems with dependent subsystem responses, an extended hierarchical statistical sensitivity analysis (EHSSA) method is developed in this work to provide a ranking order based on the impact of lower level model inputs on the top level system performance. A top-down strategy, the same as in the original HSSA method, is employed to direct SSA from the top level to lower levels. To overcome the limitation of the original HSSA method, the concept of a subset SSA is utilized to group a set of dependent responses from lower level submodels in the upper level SSA. For variance decomposition at a lower level, the covariance of dependent responses is decomposed into the contributions from individual shared variables. To estimate the global impact of lower level inputs on the top level output, an extended aggregation formulation is developed to integrate local submodel SSA results. The importance sampling technique is also introduced to re-use the existing data from submodel SSA during the aggregation process. The effectiveness of the proposed EHSSA method is illustrated via a mathematical example and a multiscale design problem.

Commentary by Dr. Valentin Fuster
2009;():1229-1238. doi:10.1115/DETC2009-87571.

As the role of predictive models has increased, the fidelity of computational results has been of great concern to engineering decision makers. Often our limited understanding of complex systems leads to building inappropriate predictive models. To address a growing concern about the fidelity of the predictive models, this paper proposes a hierarchical model validation procedure with two validation activities: (1) validation planning (top-down) and (2) validation execution (bottom-up). In the validation planning, engineers define either the physics-of-failure (PoF) mechanisms or the system performances of interest. Then, the engineering system is decomposed into subsystems or components of which computer models are partially valid in terms of PoF mechanisms or system performances of interest. Validation planning will identify vital tests and predictive models along with both known and unknown model parameter(s). The validation execution takes a bottom-up approach, improving the fidelity of the computer model at any hierarchical level using a statistical calibration technique. This technique compares the observed test results with the predicted results from the computer model. A likelihood function is used for the comparison metric. In the statistical calibration, an optimization technique is employed to maximize the likelihood function while determining the unknown model parameters. As the predictive model at a lower hierarchy level becomes valid, the valid model is fused into a model at a higher hierarchy level. The validation execution is then continued for the model at the higher hierarchy level. A cellular phone is used to demonstrate the hierarchical validation of predictive models presented in this paper.
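The abstract above describes statistical calibration as maximizing a likelihood that compares observed test results with model predictions while determining unknown model parameters. The sketch below is a minimal version of that step under simplifying assumptions: a Gaussian error model, a one-parameter exponential "computer model", and synthetic test data, all of which are illustrative rather than taken from the paper.

```python
# Statistical calibration by maximum likelihood: find the unknown model
# parameter (and the observation noise level) that best explains the test data.
import numpy as np
from scipy import optimize, stats

def computer_model(t, k):
    """Predictive model with an unknown parameter k (e.g. a degradation rate)."""
    return 100.0 * np.exp(-k * t)

t_test = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y_test = np.array([91.0, 83.5, 68.0, 47.2, 21.9])            # observed test results

def neg_log_likelihood(params):
    k, log_sigma = params
    resid = y_test - computer_model(t_test, k)
    return -np.sum(stats.norm.logpdf(resid, scale=np.exp(log_sigma)))

result = optimize.minimize(neg_log_likelihood, x0=[0.1, 0.0], method="Nelder-Mead")
k_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"calibrated k = {k_hat:.4f}, observation std = {sigma_hat:.3f}")
```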

Commentary by Dr. Valentin Fuster
2009;():1239-1249. doi:10.1115/DETC2009-87713.

This paper presents an adaptive-sparse polynomial chaos expansion (adaptive-sparse PCE) method for performing engineering reliability analysis and design. The proposed method leverages three ideas: (i) an adaptive scheme to build sparse PCE with the minimum number of bivariate basis functions, (ii) a new projection method using dimension reduction techniques to effectively compute the expansion coefficients of system responses, and (iii) an integration of copula to handle nonlinear correlation of input random variables. The proposed method thus has three distinct features for reliability analysis and design: (a) no need of response sensitivities, (b) no extra cost to evaluate probabilistic sensitivity for design, and (c) capability to handle a nonlinear correlation. Besides, an error decomposition scheme of the proposed method is presented to help analyze error sources in probability analysis. Several engineering problems are used to demonstrate the effectiveness of the adaptive-sparse PCE method.
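A hedged one-dimensional sketch of the projection step underlying a polynomial chaos expansion: coefficients of a Hermite PCE are computed by Gaussian quadrature against the standard normal density. The response function is made up, and the adaptive-sparse basis selection, dimension reduction projection, and copula handling described above are not reproduced here.

```python
# 1-D Hermite PCE by projection: c_n = <g, He_n> / n!, with the inner product
# taken against the standard normal density via Gauss-Hermite quadrature.
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

g = lambda xi: np.exp(0.5 * xi)                      # response of a standard normal input
order = 4

nodes, weights = He.hermegauss(12)                    # weight exp(-xi^2/2), weights sum to sqrt(2*pi)
weights = weights / np.sqrt(2.0 * np.pi)

coeffs = []
for n in range(order + 1):
    basis = He.hermeval(nodes, [0] * n + [1])         # probabilists' Hermite polynomial He_n
    c_n = np.sum(weights * g(nodes) * basis) / factorial(n)
    coeffs.append(c_n)

# Reconstruct the response at a test point and compare with the true value
xi = 0.7
pce = sum(c * He.hermeval(xi, [0] * n + [1]) for n, c in enumerate(coeffs))
print(f"PCE coefficients: {np.round(coeffs, 4)}")
print(f"g({xi}) = {g(xi):.5f}   PCE approximation = {pce:.5f}")
```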

Commentary by Dr. Valentin Fuster
2009;():1251-1259. doi:10.1115/DETC2009-87804.

RBDO problems have been intensively studied for many decades. Since Hasofer and Lind defined a measure of the second-moment reliability index, many RBDO methods utilizing the concept of reliability index have been introduced as the Reliability Index Approach (RIA). In the RIA, a reliability analysis problem is formulated to find the reliability index for each performance constraint and the solutions are used to evaluate the failure probability. However, the traditional RIA suffers from inefficiency and convergence problems. In this paper, we revisited the definition of the reliability index and revealed the convergence problem in the traditional RIA. Furthermore, a new definition of the reliability index is proposed to correct this problem and a modified Reliability Index Approach based on this definition is developed. Numerical examples using both the traditional RIA and the modified RIA are compared and discussed.
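A sketch of the Hasofer-Lind definition referenced above: the reliability index is the minimum distance from the origin to the limit-state surface in standard normal space, found here with a general-purpose constrained optimizer rather than a dedicated MPP search. The limit-state function is a simple illustrative example whose exact index is the square root of 5; the paper's modified definition is not reproduced.

```python
# Hasofer-Lind reliability index: beta = min ||u|| subject to g(u) = 0 in
# standard normal space, with the FORM failure probability Phi(-beta).
import numpy as np
from scipy import optimize, stats

g = lambda u: 5.0 - u[0] - 2.0 * u[1]                 # limit state in standard normal space

res = optimize.minimize(
    lambda u: u @ u,                                  # squared distance from the origin
    x0=np.array([1.0, 1.0]),
    constraints=[{"type": "eq", "fun": g}],           # stay on the limit-state surface
    method="SLSQP",
)
beta = np.linalg.norm(res.x)                          # reliability index
print(f"MPP = {np.round(res.x, 3)}, beta = {beta:.4f}, P_f (FORM) = {stats.norm.cdf(-beta):.4e}")
```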

Commentary by Dr. Valentin Fuster
2009;():1261-1265. doi:10.1115/DETC2009-86292.

In this paper a method for topology optimization of nonlinear elastic structures is suggested. The method is developed starting from a total Lagrangian formulation of the system. The internal force is defined by coupling the second Piola-Kirchhoff stress to the Green-Lagrange strain via the Kirchhoff-St. Venant law. The state of equilibrium is obtained by first deriving the consistent tangent stiffness matrix and then using Newton’s method to solve the nonlinear equations. The design parametrization of the internal force is obtained by adopting the SIMP approach. The minimization of compliance for a limited value of volume is considered. The optimization problem is solved by sequential linear programming (SLP). This is done using a nested approach where the equilibrium equation is linearized and the sensitivity of the cost function is calculated by the adjoint method. In order to avoid mesh dependency, the sensitivities are filtered by Sigmund’s filter. The final LP problem is solved by an interior point method available in MATLAB. The implementation is done for a general design domain in 2D using fully integrated isoparametric elements. The implementation appears to be very efficient and robust.
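
A small self-contained sketch of the sensitivity filtering step mentioned above (Sigmund's mesh-independency filter) on a regular 2-D grid of element densities and compliance sensitivities. The grid size, filter radius, and mock sensitivities are assumptions; the total Lagrangian finite element analysis, adjoint sensitivities, and SLP update of the full method are not reproduced.

```python
# Sigmund-style sensitivity filter: each element's sensitivity is replaced by a
# distance-weighted average of density-weighted sensitivities within radius rmin.
import numpy as np

def sensitivity_filter(x, dc, rmin):
    nely, nelx = x.shape
    dcn = np.zeros_like(dc)
    for i in range(nelx):
        for j in range(nely):
            total, acc = 0.0, 0.0
            for i2 in range(max(i - int(rmin), 0), min(i + int(rmin) + 1, nelx)):
                for j2 in range(max(j - int(rmin), 0), min(j + int(rmin) + 1, nely)):
                    weight = rmin - np.hypot(i - i2, j - j2)
                    if weight > 0.0:
                        total += weight
                        acc += weight * x[j2, i2] * dc[j2, i2]
            dcn[j, i] = acc / (max(x[j, i], 1e-3) * total)
    return dcn

rng = np.random.default_rng(7)
x = np.full((20, 40), 0.5)                             # uniform starting densities
dc = -rng.random((20, 40))                             # mock (negative) compliance sensitivities
dc_filtered = sensitivity_filter(x, dc, rmin=2.5)
print("raw sensitivity range:     ", dc.min().round(3), dc.max().round(3))
print("filtered sensitivity range:", dc_filtered.min().round(3), dc_filtered.max().round(3))
```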

Commentary by Dr. Valentin Fuster
2009;():1267-1278. doi:10.1115/DETC2009-86348.

Structural design for crashworthiness is a challenging area of research due to large plastic deformations and complex interactions among diverse components of the vehicle. Previous research in this field has primarily focused on energy absorbing structures that utilize a desired amount of material. These structures have been shown to absorb a large amount of the kinetic energy generated during a crash event; however, the large plastic strains experienced can lead to failure. This research introduces a new strain-based topology optimization algorithm for crashworthy structures undergoing large deformations. This technique makes use of the hybrid cellular automaton framework combining transient, non-linear finite-element analysis and local control rules acting on cells. The set of all cells defines the design domain. In the proposed algorithm, the design domain is dynamically divided into two sub-domains with different objectives, i.e., a high strain sub-domain (HSSD) and a low strain sub-domain (LSSD). The distribution of these sub-domains is determined by a plastic strain limit value. During the design process, the material is distributed within the LSSD following a fully-internal-energy-distribution principle. To accomplish that, each cell in the LSSD is driven to a prescribed target or set-point value by modifying its stiffness. In the HSSD, the material is distributed to satisfy a failure criterion given by a maximum strain value. Results show that the new formulation and algorithm are suitable for practical applications. The case studies demonstrate the potential significance of the new capability for a wide range of engineering design problems.

Commentary by Dr. Valentin Fuster
2009;():1279-1286. doi:10.1115/DETC2009-86618.

This paper proposes a new level set-based topology optimization method for thermal problems that deal with generic heat transfer boundaries including design-dependent boundary conditions, based on the level set method and the concept of the phase field theory. First, a topology optimization method using a level set model incorporating a fictitious interface energy derived from the concept of the phase field theory is briefly discussed. Next, a generic optimization problem for thermal problems is formulated based on the concept of total potential energy. An optimization algorithm that uses the Finite Element Method when solving the equilibrium equation and updating the level set function is then constructed. Finally, several three-dimensional numerical examples are provided to confirm the utility and validity of the proposed topology optimization method.

Commentary by Dr. Valentin Fuster
2009;():1287-1293. doi:10.1115/DETC2009-86940.

In the past decades, the stagnant growth of battery technology has become the bottleneck for a new generation of portable and wearable electronics that demand longer operating times and higher power consumption. Energy harvesting devices based on the direct piezoelectric effect, which convert ambient mechanical energy into usable electric energy, are a very attractive energy source for portable and wearable electronics. This paper discusses the design of a piezoelectric energy harvesting strap buckle that generates as much electric energy as possible from the differential forces applied to the buckle. The topology optimization method is employed to improve the efficiency of the piezoelectric energy harvesting strap buckle within a limited design space. A stiffness or displacement constraint is introduced in place of the material volume constraint in this problem formulation, to avoid meaningless optima with nearly zero material volume. The sensitivities of both the objective function and the design constraint are derived using the adjoint method. A design example of a piezoelectric energy harvesting strap buckle using the proposed topology optimization method is presented and the result is discussed.

Commentary by Dr. Valentin Fuster
2009;():1295-1305. doi:10.1115/DETC2009-87083.

A level-set-based method for robust shape and topology optimization (RSTO) is proposed in this work with consideration of uncertainties that can be represented by random variables or random fields. Uncertainty, such as that associated with loading and material, is introduced into shape and topology optimization as a new dimension in addition to space and time, and the optimal geometry is sought in this extended space. The level-set-based RSTO problem is mathematically formulated by expressing the statistical moments of a response as functionals of geometric shapes and loading/material uncertainties. Spectral methods are employed for reducing the dimensionality of the uncertainty representation, and Gauss-type quadrature formulae are used for uncertainty propagation. The latter strategy also helps transform the RSTO problem into a weighted summation of a series of deterministic topology optimization subproblems. The above-mentioned techniques are seamlessly integrated with level set methods for solving RSTO problems. The method proposed in this paper is generic: it is not limited to problems with random variable uncertainties, as usually reported in other existing work, but is applicable to general RSTO problems considering uncertainties with field variabilities. This characteristic uniquely distinguishes the proposed method from other existing approaches. Preliminary 2D and 3D results show that RSTO can lead to designs with different shapes and topologies and superior robustness compared to their deterministic counterparts.

Commentary by Dr. Valentin Fuster
2009;():1307-1314. doi:10.1115/DETC2009-87460.

This paper provides two separate methodologies for implementing the Voronoi Cell Finite Element Method (VCFEM) in topology optimization. The two approaches exploit different characteristics of VCFEM. The first approach utilizes the property that a hole or inclusion can be placed within an element: the design variables for the topology optimization are the sizes of the holes. In the second approach, we note that VCFEM may mesh the design domain with n-sided polygons. We restrict our attention to hexagonal meshes of the domain while applying the Solid Isotropic Material with Penalization (SIMP) material model. Researchers have shown that hexagonal meshes are not subject to the checkerboarding problem commonly associated with standard linear quad and triangle elements. We present several examples to illustrate the efficacy of the methods in compliance minimization and discuss the advantages and disadvantages of each method.

Commentary by Dr. Valentin Fuster
2009;():1315-1323. doi:10.1115/DETC2009-87540.

The current paper examines the static performance of 2D infinite lattice materials with hexagonal Bravais lattice symmetry. Two novel microscopic cell topologies are proposed. The first topology is a semi-regular lattice with the modified Schläfli symbol 3⁴.6, which describes the types of regular polygons surrounding the joints of the lattice; here, 3⁴.6 indicates four (4) regular triangles (3) successively surrounding a node, followed by a regular hexagon (6). The second topology is an irregular lattice referred to here as the Double Hexagonal Triangulation (DHT). The lattice material is considered as a pin-jointed micro-truss, where determinacy analysis of the material microstructure is used to distinguish between bending-dominated and stretching-dominated behaviours. The finite structural performance of unit cells of the proposed topologies is assessed by the matrix methods of linear algebra. The Dummy Node Hypothesis is developed to generalize the analysis to tackle any lattice topology. Collapse mechanisms and states of self-stress are deduced from the four fundamental subspaces of the kinematic and the equilibrium matrices of the finite unit cell structures, respectively. The generated finite structural matrices are employed to analyze the infinite structural performance of the lattice using Bloch’s theorem. To find the macroscopic strain fields generated by periodic mechanisms, the Cauchy-Born hypothesis is adopted. An explicit expression of the microscopic cell element deformations in terms of the macroscopic strain field is generated, which is employed to derive the strain energy density of the lattice material. Finally, the strain energy density is used to derive the macroscopic stiffness properties of the material. The results show that the proposed lattice topologies can support all macroscopic strain fields. Their stiffness properties are compared with those of lattice materials with hexagonal Bravais symmetry available in the literature. The comparison shows that the lattice material with the 3⁴.6 cell topology has superior isotropic stiffness properties. When compared with the Kagome lattice, the 3⁴.6 lattice generates isotropic stiffness properties with an additional 18.5% and 93.2% stiffness-to-mass ratio in the direct and coupled direct stiffness, respectively; however, its shear stiffness-to-mass ratio is reduced by 18.8%.

Commentary by Dr. Valentin Fuster
2009;():1325-1339. doi:10.1115/DETC2009-86228.

In recent years, the interest of small and medium-sized enterprises (SMEs) in Virtual Reality (VR) systems has increased strongly, thanks both to improvements in the effectiveness of VR tools and to reductions in the cost of implementing the technologies. Due to the growing number of installed systems, many SMEs require robust methods for evaluating technology performance. In this context, the present paper presents a metrics-based approach for analyzing VR system performance, specifically dedicated to the design review process during styling product design. The evaluation parameters are related to the effective communication and preservation of design intent. Metrics are classified into two main classes: the first is related to the product, the process and the characteristics of the VR technology; the second is related to the preservation of design intent meanings along the design process. Two experimental case studies are reported to test the approach in different operative fields.

Topics: Product design
Commentary by Dr. Valentin Fuster
2009;():1341-1353. doi:10.1115/DETC2009-87045.

Knowledge discovery in multi-dimensional data is a challenging problem in engineering design. For example, in trade space exploration of large design data sets, designers need to select a subset of data of interest and examine data from different data dimensions and within data clusters at different granularities. This exploration is a process that demands both humans, who can heuristically decide what data to explore and how best to explore it, and computers, which can quickly identify features that may be of interest in the data. Thus, to support this process of knowledge discovery, we need tools that go beyond traditional computer-oriented optimization approaches to support advanced designer-centered trade space exploration and data interaction. This paper is an effort to address this need. In particular, we propose the Interactive Multi-Scale Nested Clustering and Aggregation (iMSNCA) framework to support trade space exploration of multi-dimensional data common to design optimization. A system prototype of this framework is implemented to allow users to visually examine large design data sets through interactive data clustering, aggregation, and visualization. The paper also presents a case study involving morphing wing design using this prototype system. By using visual tools during trade space exploration, this research suggests a new approach to support knowledge discovery in engineering design by assisting diverse user tasks, by externalizing important characteristics of data sets, and by facilitating complex user interactions with data.

Commentary by Dr. Valentin Fuster
2009;():1355-1360. doi:10.1115/DETC2009-87155.

The objective of this research is to develop an immersive interface and a design algorithm to facilitate the synthesis of compliant mechanisms from a user-centered design perspective. Compliant mechanisms are mechanical devices which produce motion or force through deflection or flexibility of their parts. Using the constraint-based method of design, the design process relies on the designer to identify the appropriate constraint sets to match the desired motion. Currently this approach requires considerable prior knowledge of how non-linear flexible members produce motion. As a result, the design process is based primarily on the designer’s previous experience and intuition. A user-centered methodology is suggested where the interface guides the designer throughout the design process, thus reducing the reliance on intuitive knowledge. This methodology supports constraint-based design methods by linking mathematical models to support compliant mechanism design in an immersive virtual environment. A virtual reality (VR) immersive interface enables the designer to input the intended motion path by simply grabbing and moving the object and letting the system decide which constraint spaces apply. The user-centered paradigm supports an approach that focuses on the designer defining the motion and the system generating the constraint sets, instead of the current method which relies heavily on the designer’s intuition to identify appropriate constraints. The result is an intelligent design framework that will allow a broader group of engineers to design complex compliant mechanisms, giving them new options to draw upon when searching for design solutions to critical problems.

Commentary by Dr. Valentin Fuster
2009;():1361-1371. doi:10.1115/DETC2009-87294.

Thanks to recent advances in computing power and speed, designers can now generate a wealth of data on demand to support engineering design decision-making. Unfortunately, while the ability to generate and store new data continues to grow, methods and tools to support multi-dimensional data exploration have evolved at a much slower pace. Moreover, current methods and tools are often ill-equipped at accommodating evolving knowledge sources and expert-driven exploration that is being enabled by computational thinking. In this paper, we discuss ongoing research that seeks to transform decades-old decision-making paradigms rooted in operations research by considering how to effectively convert data into knowledge that enhances decision-making and leads to better designs. Specifically, we address decision-making within the area of trade space exploration by conducting human-computer interaction studies using multi-dimensional data visualization software that we have been developing. We first discuss a Pilot Study that was conducted to gain insight into expected differences between novice and expert decision-makers using a small test group. We then present the results of two Preliminary Experiments designed to gain insight into procedural differences in how novices and experts use multi-dimensional data visualization and exploration tools and to measure their ability to use these tools effectively when solving an engineering design problem. This work supports our goal of developing training protocols that support efficient and effective trade space exploration.

Topics: Decision making
Commentary by Dr. Valentin Fuster
2009;():1373-1382. doi:10.1115/DETC2009-87595.

In this paper, a haptic modeling and simulation system is developed to assist handheld product design. With haptic feedback, users can create, interact with, and evaluate the virtual product directly and intuitively without producing a physical prototype. This saves cost and reduces time-to-market, which is especially meaningful for rapidly changing handheld mobile devices. To provide comfortable and accurate operation, a virtual vibration actuator is devised and added to the touch screen. Unlike previous research that mainly focuses on the design of the product shape, the proposed system also models the interaction between the user (finger) or tool (pen) and the handheld device (button/screen). To obtain a realistic simulation and replace the physical prototype, the complex shape and deformation of the finger are considered when calculating the feedback force. A computationally efficient collision detection method for objects with complex shapes is proposed to tackle the challenge of the high update rate of more than 1 kHz required for real-time realistic haptic rendering. Moreover, the proposed system incorporates haptic modeling of vibration interaction and menu interface design into the product design simulation system. A case study of handheld device design is used to illustrate the proposed system.

Commentary by Dr. Valentin Fuster
