
31st Design Automation Conference

2005;():5-12. doi:10.1115/DETC2005-84955.

The automotive industry is highly competitive, and companies spend enormous resources on the development of new cars. The success of a new model is highly correlated with how well the designers and engineers have been able to blend features, functionality, quality and design to bring an attractive car to a certain segment of the market at the right time. Furthermore, as modern manufacturing techniques have enabled most manufacturers to offer standard features in their cars, the design has become a major selling point and one of the key factors for the ‘image’ associated with a company. However, the image, or form impression, of a car, stated in natural language, is subtle and difficult to relate directly to concrete design parameters. With few tools to address this issue, designers are left to rely on their experience and sensitivity to current trends in order to meet customer expectations for a new model. The purpose of the method reported in this paper is to provide a foundation for a design support system that can help designers visualize and validate the complex relationship between form impressions and design parameters. This was achieved by expressing form impressions in natural language as sets of 10 weighted attributes. Fourteen design parameters were established to describe the basic shape of a car, and data on the form impressions of 31 different shapes were collected via a survey designed by the Taguchi method. Factor analysis was performed to extract correlated factors and eliminate the overlap of meaning between attributes. A neural network, able to relate form impressions expressed in these factors to the basic proportions of a car, was created, trained and used to generalize design parameters corresponding to any form impression presented to it. Finally, a 3D model with the desired form impression was automatically created by the CAD system outlined in this paper. These results show that this method can be used to create a design support system that is sensitive to the form impressions that various shapes will give.
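
As a rough illustration of the pipeline the abstract describes (attribute ratings reduced by factor analysis, then a network mapping factors to basic proportions), the sketch below uses scikit-learn with synthetic survey data; the number of extracted factors, the network size, and the rating scale are assumptions, not values taken from the paper.

```python
# Hedged sketch: factor analysis + a small neural network relating form-impression
# factors to design parameters, using synthetic data in place of the survey results.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_shapes, n_attributes, n_design_params = 31, 10, 14

ratings = rng.uniform(1, 5, size=(n_shapes, n_attributes))            # survey: weighted attribute scores
design_params = rng.uniform(0, 1, size=(n_shapes, n_design_params))   # basic car proportions

# Step 1: compress the 10 attributes into a few uncorrelated factors (4 is an assumption).
fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(ratings)

# Step 2: train a network mapping factor scores -> design parameters.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(factors, design_params)

# Step 3: generalize - ask for proportions matching a new target impression.
target_impression = fa.transform(rng.uniform(1, 5, size=(1, n_attributes)))
suggested_proportions = net.predict(target_impression)
print(suggested_proportions.round(2))
```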

2005;():13-23. doi:10.1115/DETC2005-85125.

Conceptual design is a key early activity in product development. However, limited understanding of the conceptual design process and lack of quantitative information at this stage of design pose difficulties for effective design concept generation and evaluation. In this paper, we propose a hierarchical co-evolutionary approach to supporting design concept generation and evaluation. The approach adopts a zigzag process in which grammar rules guide function decomposition and functions and means co-evolve at each level of decomposition hierarchy. It provides an automatic computational solution to complex conceptual design systems. In this paper, the details of the approach are described, and an example of designing a mechanical personal transporter is presented to show the effectiveness of the proposed approach.

2005;():25-31. doi:10.1115/DETC2005-85181.

Road safety system development is a complex task that requires collaboration between designers and accidentologists. However, designers and accidentologists do not share the same viewpoints, the same models for analyzing an accident, or the same technical language. This makes their communication in a design process difficult. The accident scenario is recognized as a powerful communication tool between designers and accidentologists. Nevertheless, an accident scenario has to be presented in a way that both designers and accidentologists can understand and use. To address this issue, we use the systemic approach (a complex-system modeling approach) to develop a new methodology for constructing multi-view accident scenarios.

Topics: Road safety
2005;():33-42. doi:10.1115/DETC2005-85295.

We present a two-step technique for learning reusable design procedures from observations of a designer in action. This technique is intended for the domain of parametric design problems in which the designer iteratively adjusts the parameters of a design so as to satisfy the design requirements. In the first step of the two-step learning process, decision tree learning is used to infer rules that predict which design parameter the designer will change for any particular state of an evolving design. In the second step, decision tree learning is again used, but this time to learn explicit termination conditions for the rules learned in the first step. The termination conditions are used to predict how large a parameter change should be made when a rule is applied. The learned rules and termination conditions can be used to automatically solve new design problems with a minimum of human intervention. Initial experiments with this technique suggest that it is considerably more efficient than the previous technique, which was incapable of learning explicit rule termination conditions. In particular, the rule termination conditions allow a program to automatically solve new design problems with far fewer iterations than required with the previous approach.
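
The two-step idea can be sketched with two decision trees: one predicting which parameter the designer would change and one predicting when to stop changing it. The toy log of design states, the feature encoding, and the stopping criterion below are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of the two-step learning process with scikit-learn and synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Each row is a design state (e.g. current parameter values / requirement violations).
states = rng.uniform(size=(200, 5))
# Observed: which parameter the designer adjusted in that state (0..4).
chosen_param = states.argmax(axis=1)
# Observed: whether the designer stopped adjusting in that state.
stop_adjusting = (states.max(axis=1) < 0.6).astype(int)

# Step 1: learn a rule that predicts the parameter to change for a given state.
param_rule = DecisionTreeClassifier(max_depth=3).fit(states, chosen_param)
# Step 2: learn an explicit termination condition for the rule.
termination_rule = DecisionTreeClassifier(max_depth=3).fit(states, stop_adjusting)

# Apply the learned procedure to a new design state.
state = rng.uniform(size=(1, 5))
for _ in range(100):                       # safety cap on iterations
    if termination_rule.predict(state)[0]:
        break
    p = param_rule.predict(state)[0]
    state[0, p] -= 0.05                    # apply a small change to the selected parameter
print("final design state:", state.round(2))
```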

Topics: Design
2005;():43-50. doi:10.1115/DETC2005-85303.

In traditional optimal control and design problems, the control gains and design parameters are usually derived to minimize a cost function reflecting the system performance and control effort. One major challenge of such approaches is the selection of weighting matrices in the cost function, which are usually determined via trial and error and human intuition. While various techniques have been proposed to automate the weight selection process, they either cannot address complex design problems or suffer from slow convergence rates and high computational costs. We propose a layered approach based on Q-learning, a reinforcement learning technique, on top of genetic algorithms (GA) to determine the best weightings for optimal control and design problems. The layered approach allows for reuse of knowledge: knowledge obtained via Q-learning in a design problem can be used to speed up the convergence rate of a similar design problem. Moreover, the layered approach allows for solving optimizations that cannot be solved by GA alone. To test the proposed method, we perform numerical experiments on a sample active-passive hybrid vibration control problem, namely adaptive structures with active-passive hybrid piezoelectric networks (APPN). These numerical experiments show that the proposed Q-learning scheme is a promising approach for automating weight selection in optimal control and design problems.
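
A minimal sketch of the layered idea: a stateless (bandit-style) Q-learning loop chooses among a handful of candidate weightings, while a stub stands in for the GA-based inner design solve that would return the achieved performance as the reward. The candidate weights, reward shape, and learning constants are assumptions.

```python
# Hedged sketch: tabular Q-learning over discrete weighting choices, reward supplied
# by a stand-in for the GA-based design/control solve.
import numpy as np

rng = np.random.default_rng(2)
weight_candidates = [0.1, 0.5, 1.0, 2.0, 5.0]      # candidate weightings for the cost function

def inner_design_loop(weight):
    """Stand-in for the GA-based optimal control/design solve under a given weighting."""
    performance = -(np.log10(weight) - 0.3) ** 2    # toy response that peaks near weight ~ 2
    return performance + 0.05 * rng.standard_normal()

q = np.zeros(len(weight_candidates))
alpha, epsilon = 0.2, 0.2
for episode in range(200):
    a = rng.integers(len(q)) if rng.random() < epsilon else int(q.argmax())
    reward = inner_design_loop(weight_candidates[a])
    q[a] += alpha * (reward - q[a])                 # one-step (stateless) Q-update

print("best weighting:", weight_candidates[int(q.argmax())])
```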

2005;():51-58. doi:10.1115/DETC2005-84240.

Design collaboration is recognized as an effective approach to joint problem solving for achieving success in product development in distributed and heterogeneous environments. Design collaboration involves communication of design information, coordination of design activities, and negotiation of design conflicts between multi-disciplinary teams. To support these critical requirements in collaborative design, methodologies and software systems are needed. This paper shares our experience in the method and software development for a Web-enabled engineering object modeling environment. It presents our methods for interoperable and extensible design information modeling, for embedding intelligent object behaviors in CAD models, and for design information sharing across product lifecycle applications through a common vocabulary. The prototype implementation of the modeling environment provides standardized and localized engineering objects embedded with design semantics and intelligent behaviors for the information needs of multiple engineering software applications. The prototype also provides activity coordination and negotiation facilities through team setting, online visualization, live updating, conflict management, and messaging. Use scenarios are discussed in the paper.

Topics: Design , Modeling
2005;():59-69. doi:10.1115/DETC2005-84807.

Most complex systems, including engineering systems such as cars, airplanes, and satellites, are the results of the interactions of many distinct entities working on different parts of the design. Decentralized systems constitute a special class of design in distributed environments. They are characterized as large and complex systems divided into several smaller entities that have autonomy in local optimization and decision-making. A primary issue in decentralized design processes is to ensure that the designers involved in the process converge to a single design solution that is optimal and meets the design requirements, while being acceptable to all the participants. This is made difficult by the strong interdependencies between the designers, which are usually characteristic of such systems. This paper presents a critical review of standard techniques for modeling and solving decentralized design problems, and shows mathematically the challenges created by having multiobjective subsystems. A method based on set-based design is then proposed to alleviate some of these challenging issues. An illustration of its applicability is given in the form of the design of a space satellite.

Topics: Design
2005;():71-79. doi:10.1115/DETC2005-85124.

This paper reviews recent developments and persisting challenges in facilitating collaborative engineering design. The review has two foci: 1) design information and representations, and 2) engineering workstations. Recent developments regarding these foci are discussed for their contribution to facilitating collaborative design. The paper concludes with directing attention to current challenges and recommendations for research.

Topics: Design
2005;():81-90. doi:10.1115/DETC2005-85160.

Recently, almost all industrially manufactured consumer goods have a high level of engineering excellence, and product designers face an increasingly difficult task of creating products that will stand out in a competitive marketplace. At present, users tend to base their purchasing decisions on the product’s degree of fitness to their preferences, not the degree of functional fulfillment that the product offers. The development of products that are more attractive to users requires the consideration of human preferences and sensibilities, so-called “Kansei,” as well as the skillful application of these factors to the design sequence. The process of identifying and clarifying Kansei suggests that personal preferences concerning a given product are strongly influenced by both the person’s environment and the circumstances in which the product will be used. Analyzing both of these clarifies the influence that subconscious desires and human nature have on the expression of Kansei. This paper proposes a method for extracting the Kansei of potential customers and applying it to product designs that aim to maximize their human appeal, rather than their technical superiority.

Topics: Product design
2005;():91-99. doi:10.1115/DETC2005-85428.

In this paper, it is illustrated how computational design methods such as design optimization and probabilistic analysis are applied to system simulation models in a web-based framework. Special emphasis is given to models defined in the Modelica modeling language. An XML-based information system for the representation and management of design data, for use together with Modelica models as well as other types of models, is proposed. This approach introduces a separation between the model of the system and data related to the design of the product, which is important in order to facilitate the use of computational methods in a generic way. A web-based framework for the integration of simulation models and computational methods is further illustrated. The framework is based on open standards for distributed computing and enables a service-oriented architecture. Finally, an example is presented in which design optimization and probabilistic analysis are carried out on a Modelica model of an aircraft actuation system using the proposed and implemented tools and methods.

2005;():101-108. doi:10.1115/DETC2005-84414.

Synergies and integration in design set a mechatronic system apart from a traditional, multi-disciplinary system. This paper proposes a method for the modularization and evaluation of different mechatronic design concepts in the early stages of product development processes. In order to consider the specific aspects of complex systems, a design metric is presented, which assists the design engineer in finding the best solution concept. For the description and evaluation of a complex mechatronic system, it is essential to decompose the total system into a hierarchical structure of mechatronic sub-modules. The number of levels in the decomposition, as well as the number of mechatronic modules involved, is indicative of the complexity of the design task.

Topics: Design
2005;():109-122. doi:10.1115/DETC2005-84956.

This research introduces an evolutionary design database model to describe design requirements and design results developed at different design stages, from conceptual design to detailed design. In this model, the evolutionary design database is represented by a sequence of worlds corresponding to the design descriptions at different design stages. The design requirements and design results in each world are modeled using a database representation scheme that integrates both geometric and non-geometric descriptions. In each world, only the differences from its ancestor world are recorded. When the design descriptions in one world are changed, these changes are propagated to its descendant worlds automatically. Consistency of the design descriptions in descendant worlds is also checked when design descriptions in an ancestor world are changed. A case study is conducted to show the effectiveness of this evolutionary design database model.
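
A minimal data-structure sketch of the "worlds" idea follows: each world records only its differences from its ancestor, lookups walk the ancestry chain (so ancestor changes propagate automatically), and a consistency check is re-run over descendant worlds. The field names and the consistency rule are illustrative assumptions.

```python
# Hedged sketch of delta-based design worlds with automatic propagation.
class World:
    def __init__(self, name, ancestor=None):
        self.name = name
        self.ancestor = ancestor
        self.delta = {}            # only the descriptions that differ from the ancestor
        self.descendants = []
        if ancestor:
            ancestor.descendants.append(self)

    def set(self, key, value):
        self.delta[key] = value

    def get(self, key):
        # walk up the ancestry chain until a world records the description
        if key in self.delta:
            return self.delta[key]
        return self.ancestor.get(key) if self.ancestor else None

    def check_consistency(self, rule):
        # re-check this world and all its descendants after an ancestor changes
        return rule(self) and all(d.check_consistency(rule) for d in self.descendants)

concept = World("conceptual design")
concept.set("shaft_diameter_mm", 20)
detail = World("detailed design", ancestor=concept)
detail.set("shaft_material", "AISI 1045")

concept.set("shaft_diameter_mm", 25)                        # change in the ancestor world
print(detail.get("shaft_diameter_mm"))                      # -> 25, propagated automatically
print(detail.check_consistency(lambda w: (w.get("shaft_diameter_mm") or 0) < 40))
```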

Topics: Design , Modeling , Databases
2005;():123-129. doi:10.1115/DETC2005-85034.

This paper proposes a spatial three-degrees-of-freedom parallel kinematic machine enhanced by a passive leg, together with a web-based remote control system. First, the geometric model of the three-degrees-of-freedom parallel kinematic machine is addressed; in the mechanism, a fourth kinematic link — a passive link connecting the base center to the moving platform center — is introduced. This link constrains the motion of the tool (located on the moving platform) to only three degrees of freedom, and it also enhances the global stiffness of the structure and distributes the torque from machining. With the kinematic model, a web-based remote control approach is then applied. The concept of the web-based remote manipulation approach is introduced and the principles behind the method are explored in detail. Finally, remote manipulation of the proposed 3-DOF structure using the web-based remote control concept is demonstrated before conclusions are drawn.

Topics: Machinery
2005;():131-140. doi:10.1115/DETC2005-85403.

Design in general is about increasing the information of the product or system. Therefore, it is natural to investigate the design process from an information-theoretical point of view. There are basically two (although related) strands of information theory: the first is the information theory of communication, and the second is the algorithmic theory of information. In this paper the design process is described as an information transformation process, in which an initial set of requirements is transformed into a system specification. Performance and cost are both functions of complexity and refinement, which can be expressed in information-theoretical terms. The information-theoretical model is demonstrated on examples. The model has implications for the balance between the number of design parameters and the degree of convergence in design optimization. Furthermore, the relationship between concept refinement and design space expansion can be viewed in information-theoretical terms.

2005;():141-154. doi:10.1115/DETC2005-85546.

To meet the need for product variety, many companies are shifting from a mass production mode to mass customization, which demands quick response to the needs of individual customers with high quality and low costs. One of the key elements of mass customization is that the product family can share some modules. The multifunctional nature of mechanical components requires designers to redesign them each time a component's function changes; this is the main obstacle to practical mechanical product family modeling. In this paper, a graph-grammar-based mechanical product family modeling method is proposed. The other issue studied in this paper is tolerancing, which is a critical part of the product design process because it is intimately related to a product's quality and costs. A functional tolerance specification method, called the mirror method, is proposed to provide guidelines for the construction of a component's datum reference frame and for generic and uniform functional tolerance specifications.

2005;():155-164. doi:10.1115/DETC2005-84685.

Design optimization algorithms have traditionally focused on lowering weight and improving structural performance. Although cost is a vital factor in every emerging design, existing tools lack key features and capabilities for optimizing designs for minimum product cost at acceptable performance levels. This paper presents a novel methodology for developing a decision support tool for designers based on manufacturing cost. The approach focuses on exploiting the advantages offered by combining parametric CAD, finite element analysis, feature-based cost estimation and optimization techniques within a single automated system. This methodology is then applied to optimizing the geometry of an engine mounting link from a Rolls-Royce civil aircraft engine for minimum manufacturing cost.

Topics: Design , Optimization
2005;():165-175. doi:10.1115/DETC2005-84765.

The House of Quality is a popular tool that supports information processing and decision making in the engineering design process. While its application is an aid in conceptual aspects of the design process, its use as a quantitative decision support tool in engineering design is potentially flawed. This flaw is a result of assumptions behind the methodology of the House of Quality and is viewed as an important deficiency that can lead to potentially invalid and poor decisions. In this paper this deficiency and its implications are explored both experimentally and empirically. The resulting conclusions are important to future use and improvement of the House of Quality as an engineering design tool.

Topics: Design
2005;():177-187. doi:10.1115/DETC2005-84766.

Supporting the decision of a group in engineering design is a challenging and complicated problem when issues like consensus, consistency, conflict, and compromise must be taken into account. In this paper, we present two developments extending the Group Hypothetical Equivalents and Inequivalents Method (Group-HEIM) and making it applicable to new classes of group decision problems. The first extension focuses on handling forms of value functions other than the traditional L1-norm. The second extension focuses on updating the formulation to place unequal importance on the preferences of the group members. Typically, there are some group members whose experience, education, and/or knowledge makes their input more important. The formulation presented in this paper allows team leaders to emphasize the input from certain group members. Illustration and validation of the developments are presented using a vehicle selection problem. Data from twelve engineering design teams are used to demonstrate the application of the method.

Topics: Design , Teams
2005;():189-198. doi:10.1115/DETC2005-84812.

An important aspect of product development is design for manufacturability (DFM) analysis that aims to incorporate manufacturing requirements into early product decision-making. Existing methods in DFM seldom quantify explicitly the tradeoffs between revenues and costs generated by making design choices that may be desirable in the market but costly to manufacture. This paper builds upon previous work coordinating models for engineering design and marketing product line decision-making by incorporating quantitative models of manufacturing investment and production allocation. The result is a methodology that considers engineering design decisions quantitatively in the context of manufacturing and market consequences in order to resolve tradeoffs, not only among performance objectives, but also between market preferences and manufacturing cost.

2005;():199-211. doi:10.1115/DETC2005-85147.

Flexible systems maintain a high performance level under changing operating conditions or design requirements. Flexible systems acquire this powerful feature by allowing critical aspects of their design configuration to change during the operating life of the product or system. In the design of such systems, designers are often required to make critical decisions regarding the flexible and the non-flexible aspects of the design configuration. We propose an optimization-based methodology to design flexible systems that allows a designer to effectively make such critical decisions. The proposed methodology judiciously generates candidate optimal design versions of the flexible system. These design versions are evaluated using multiobjective techniques in terms of the level of flexibility and the associated penalty. A highly flexible system maintains optimal performance under changing operating conditions, but could result in increased cost and complexity of operation. The proposed methodology provides a systematic approach for incorporating designer preferences and selecting the most desirable design version — a feature absent in several recently proposed flexible system design frameworks. The developments of this paper are demonstrated with the help of a flexible three-bar-truss design example.

2005;():213-218. doi:10.1115/DETC2005-84541.

Collecting design error or failure information in a database (FKDB: Failure Knowledge Database) gives an organization an effective place for designers to study and learn from past events so that they will not repeat the same mistakes in their own designs. When a designer makes an error, however, he has not foreseen the mistake at all. Once made, the error may seem trivial and even predictable; at the time of design, however, the problem and the facts surrounding the error are completely concealed from the designer's mind. The designer, therefore, has no intention of looking at past failure information related to the error he is repeating at the time of his design. This often makes the FKDB, despite all the effort spent collecting the information it holds, a mere collection of past failure cases awaiting passive use; the designer may occasionally look it up for the purpose of general study. A group of people including one of the authors previously developed a conceptual design tool, the Creative Design Engine (CDE), that helps the designer by displaying mechanisms, machines, sub-assemblies, and related information that realize the functional requirements the designer wants to accomplish. The tool effectively brings to the designer's consciousness ideas that are new to him or that escaped his mind at the time of conceptual design. We analyzed this tool and laid out the modifications necessary so that it not only displays design solutions and alternative options to the designer but also warns the designer about design errors he is about to make during conceptual design. The application will constantly monitor the designer's intention and compare it to known failures in the FKDB.

2005;():219-224. doi:10.1115/DETC2005-84542.

On March 26, 2004, a six-year-old boy ran into an automatic revolving door as it was about to close. The door caught the boy's head and killed him. The accident immediately caught the attention of the mass media, police, government, and the public. Amid all the opinions and talk about how dangerous these automatic revolving doors are and how safety measures should be installed, we organized a group of volunteers to analyze the dynamics of the accident and to measure the forces, door velocity, acceleration and, where available, the driving current and voltage of the motors. The group not only studied the same door that caused the fatal accident but also ran the same series of tests on a smaller power-assisted revolving door, an automatic sliding door, an elevator door, a building shutter, a commuter train door, a bullet train door, an automatic sliding door on an automobile, and its power window. With the safety mechanisms disabled, we measured an impact force of 548 kgf on a dummy head of a 3-year-old when it was jammed between the revolving door edge and the door frame. The skull of a child crushes at only 100 kgf, and our results show that, in addition to this large automatic revolving door, the smaller power-assisted revolving door, the shutter, and manually closing automobile doors generate forces that exceed this limit. These doors inherently carry the danger of causing fatal accidents.

Topics: Doors , Accidents
2005;():225-230. doi:10.1115/DETC2005-84543.

Since around the year 2000, organizations have started to build databases of workers' accidents, troubles in the production processes, and customer complaints in order to make positive use of such failure information. To quantify such organizational applications and clarify their problems, we developed a new worksheet, the “Failure Knowledge Application Evaluation Sheet (FKAES)”, and conducted a survey by having members of the Association for the Study of Failure fill out the worksheet. Our research disclosed the following facts about organizations: they properly feed back failures that require action in the production and inspection processes, but do not identify as failures those that require action in the planning or development processes, because these have organizational rather than technical causes. Large corporations with 1,000 or more employees practice more applications than smaller ones, and some even publicize their failure applications to customers and stockholders.

2005;():231-241. doi:10.1115/DETC2005-84555.

In product design and manufacturing, robust design leads to a product with good quality. Robust design is reviewed in two categories: one is the process and the other is the robustness index. The process means efficient manipulation of the mean response and the variance. The robustness index is a measure of insensitivity with respect to variation. To improve existing methods, a three-step robust design (TRD) is proposed. The first step is to “reduce the variance,” the second is to “find multiple candidate designs,” and the third is to “select the optimum robust design by using the robustness index.” Furthermore, a new robustness index is introduced in order to accommodate the characteristics of the probability of success in axiomatic design and of Taguchi's loss function. The new robustness indices are compared with existing ones. The developed robust design process is verified with examples, and the results obtained using the robustness index are compared with those of other indices.

Topics: Design , Robustness
2005;():243-252. doi:10.1115/DETC2005-84259.

Process planning is a key product development activity that links design and manufacturing, and it is traditionally carried out based on the outcome of the design process. One of the consequences of conducting process planning after design is that the process planning (manufacturability) information needed in the execution of upstream activities is in most cases not formally available. Using informal manufacturability information in the early phases of the product development process can lead to, for example, untrustworthy feasibility studies or unnecessary design iterations. As an attempt to solve this problem, a modular procedure for the execution of process planning activities is proposed in this paper. It allows some of the process planning activities to commence as soon as the details of the order and the requirements for the product are known. The goal is to ensure that formal manufacturability information is available at various stages of the product development process, including to those activities that take place prior to process planning. The new modular process planning procedure has been applied, and it has been found that the design iterations caused by a lack of manufacturability information can be avoided. This paper first defines the problem and presents related work. It then introduces the modular process planning procedure and presents an application case study.

2005;():253-263. doi:10.1115/DETC2005-84425.

This research introduces a new approach to modeling the non-linear relations among different design evaluation measures and to achieving the optimal design considering these measures through multi-objective optimization. In this approach, different design evaluation measures are mapped to comparable design evaluation indices. The non-linear relation between a design evaluation measure and its design evaluation index is identified using the least-squares curve-fitting method. The weighting factors for the different design evaluation indices, representing the importance of these indices in the multi-objective design optimization, are obtained using the pair-wise comparison method. A case study example of automobile caliper disc brake design considering four different design evaluation measures is given to illustrate the effectiveness of the introduced approach.
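
The two ingredients the abstract names can be sketched directly: a least-squares polynomial fit maps a raw evaluation measure to a comparable index, and a pairwise-comparison matrix (principal-eigenvector weighting) supplies the importance factors. The sample points and comparison values below are illustrative, not the brake-design data.

```python
# Hedged sketch: measure-to-index curve fit plus pairwise-comparison weighting.
import numpy as np

# Designer-supplied samples: raw measure value -> preference index in [0, 1]
brake_mass_kg = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
mass_index = np.array([1.0, 0.9, 0.6, 0.3, 0.0])
coeffs = np.polyfit(brake_mass_kg, mass_index, deg=2)        # non-linear measure->index relation

def index_of_mass(m):
    return float(np.clip(np.polyval(coeffs, m), 0.0, 1.0))

# Pairwise comparisons among, say, four evaluation indices (Saaty-style 1-9 scale).
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   3,   1/2],
              [1/5, 1/3, 1,   1/4],
              [1/2, 2,   4,   1]], dtype=float)
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, eigvals.real.argmax()])
weights = w / w.sum()                                         # importance of each index

print("index for a 5.5 kg caliper:", round(index_of_mass(5.5), 3))
print("weights:", weights.round(3))
```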

2005;():265-275. doi:10.1115/DETC2005-84790.

Design of modern engineering products requires complexity management. Several methodologies for complex system optimization have been developed in response. Single-level strategies centralize decision-making authority, while multi-level strategies distribute the decision-making process. This article studies the impact of coupling strength on single-level Multidisciplinary Design Optimization formulations, particularly the Multidisciplinary Feasible (MDF) and Individual Disciplinary Feasible (IDF) formulations. The Fixed Point Iteration solution strategy is used to motivate the analysis. A new example problem with variable coupling strength is introduced, involving the design of a turbine blade and a fully analytic mathematical model. The example facilitates a clear illustration of MDF and IDF and provides an insightful comparison between these two formulations. Specifically, it is shown that MDF is sensitive to variations in coupling strength, while IDF is not.
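
A toy version of the fixed point iteration used to motivate the analysis: two linear "disciplines" exchange coupling variables, and a coupling-strength parameter k controls how quickly (or whether) the iteration converges, mirroring the reported sensitivity of MDF to coupling strength. The coupling relations below are invented for illustration.

```python
# Hedged sketch: fixed point iteration on a synthetic two-discipline coupled analysis.
def fixed_point_iteration(k, x=1.0, tol=1e-8, max_iter=200):
    y1, y2 = 0.0, 0.0
    for it in range(max_iter):
        y1_new = x + k * y2            # discipline 1 analysis
        y2_new = x - k * y1_new        # discipline 2 analysis
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            return y1_new, y2_new, it
        y1, y2 = y1_new, y2_new
    return y1, y2, max_iter

for k in (0.1, 0.5, 0.9, 1.1):
    y1, y2, iters = fixed_point_iteration(k)
    print(f"coupling k={k}: converged in {iters} iterations" if iters < 200
          else f"coupling k={k}: did not converge")
```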

2005;():277-287. doi:10.1115/DETC2005-84853.

In this paper, we present the development and application of a Technical Feasibility Model (TFM) used in preliminary design to determine whether or not a set of desired product specifications is technically feasible, and the optimality of those specifications with respect to the Pareto frontier. The TFM is developed by integrating the capabilities of a multidisciplinary design framework, a multi-objective design optimization tool, a Pareto set gap analyzer, metamodeling methods, and mathematical methods for feasibility assessment. This tool is then applied to a three objective example problem and to a five objective passenger vehicle design problem by analyzing benchmarking data from 78 late model sedans.

Topics: Design
2005;():289-294. doi:10.1115/DETC2005-84942.

This paper deals with the development of Genetic Range Genetic Algorithms (GRGAs). In GRGAs, one of the keys is setting a new search range: it needs to follow the current search situation, to focus on fine local search, and to be scattered as widely as possible for global search. However, the first two strategies carry a risk of early-stage convergence, and random scattering causes wasted function calls by producing ranges that have no chance to prosper for a number of generations. In this paper, we propose a new method of setting the range by using Particle Swarm Optimization (PSO) to overcome the dilemma of the conventional method.

2005;():295-303. doi:10.1115/DETC2005-85202.

One area in design optimization is component-based design, where the designer has to choose between many different discrete alternatives. These problems have a discrete character, and in order to admit optimization, an interpolation between the alternatives is often performed. In this paper, however, a modified version of the non-gradient Complex method is developed in which no interpolation between alternatives is needed. Furthermore, the optimization algorithm itself is optimized using a performance metric that measures the effectiveness of the algorithm. In this way the optimal performance of the proposed discrete Complex method has been identified. Another important area in design optimization is optimization based on simulations. For such problems no gradient information is available, and non-gradient methods are therefore a natural choice. The application in this paper is the design of an industrial robot, where the system performance is evaluated using comprehensive simulation models. The objective is to maximize performance with constraints on lifetime and cost, and the design variables are discrete choices of gearboxes for the different axes.

Topics: Design , Optimization
2005;():305-319. doi:10.1115/DETC2005-85245.

This paper proposes a new design optimization framework that integrates evolutionary search and cumulative function approximation. While evolutionary algorithms are robust even for multi-peaked, rugged problems, their computational cost is higher than that of ordinary schemes such as gradient-based methods. While response surface techniques such as quadratic approximation can save computational cost for complicated design problems, the fidelity of the solution is affected by the density of samples. The new framework simultaneously performs evolutionary search and constructs response surfaces. That is, in its early phase the search is performed over roughly but globally approximated surfaces built from a relatively small number of samples, and in its later phase the search is performed intensively around promising regions, revealed in the preceding phases, over response surfaces enhanced with additional samples. This framework is expected to robustly find the optimal solution with less sampling. An optimization algorithm is implemented by combining a real-coded genetic algorithm and a Voronoi-diagram-based cumulative approximation, and it is applied to several numerical examples to discuss its potential and promise.
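
The interplay of evolutionary search and a cumulative approximation can be sketched as follows: every exact evaluation is archived, a surrogate is refit on the growing archive each generation, and offspring are pre-screened on the surrogate so exact evaluations are spent only on promising candidates. A SciPy RBF interpolant stands in for the paper's Voronoi-diagram-based approximation, and the operators and constants are illustrative.

```python
# Hedged sketch: real-coded evolutionary search coupled with a cumulative surrogate.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

def expensive(x):                          # true objective (pretend it is costly)
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.1 * np.sin(10 * x).sum(axis=-1)

dim, pop_size = 2, 12
pop = rng.uniform(0, 1, (pop_size, dim))
archive_x = pop.copy()
archive_f = expensive(pop)

for gen in range(20):
    surrogate = RBFInterpolator(archive_x, archive_f,
                                kernel="thin_plate_spline", smoothing=1e-6)
    # generate offspring (blend crossover + Gaussian mutation), screen on the surrogate
    parents = pop[rng.integers(pop_size, size=(pop_size, 2))]
    offspring = np.clip(parents.mean(axis=1) + 0.05 * rng.standard_normal((pop_size, dim)), 0, 1)
    promising = offspring[np.argsort(surrogate(offspring))[: pop_size // 2]]
    # spend exact evaluations only on the promising half; grow the archive
    archive_x = np.vstack([archive_x, promising])
    archive_f = np.concatenate([archive_f, expensive(promising)])
    pop = archive_x[np.argsort(archive_f)[:pop_size]]       # survivors for the next generation

print("best point found:", archive_x[archive_f.argmin()].round(3))
```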

2005;():321-330. doi:10.1115/DETC2005-85305.

In this paper we show that applying the twinkling technique to a naive random search algorithm is frequently more powerful than any algorithm using specific search techniques, unless they use information provided by the gradient or the Hessian. To illustrate this result, we have chosen to study a mechanical system characterized by the non-linear nature of the optimization space. This system is basically an open kinematic chain representing a robot that has to go through various trajectories defined by sets of temporally equidistant points. In fact, we show that the genetic algorithm, the simulated annealing algorithm, the particle swarm algorithm, and the random search algorithm each need, comparatively, a huge number of function evaluations in order to achieve the same result quality.

2005;():331-336. doi:10.1115/DETC2005-85342.

In this paper the Variable Topography Distance Transform method (VTDT) is used to find optimal paths across physical landscapes for pipelines carrying a two-phase geothermal fluid. The method incorporates constraints such as obstacles, land costs, building costs, variable gradients, height, and environmental issues in the route selection process. The method is an expanded form of Distance Transform algorithms that are used in image processing. It offers a way to look at land surfaces as a slope-adjusted 2-D model rather than as a more complex and computationally intensive 3-D model. The VTDT method works with a digital representation, called a Digital Elevation Model (DEM), of the landscape in question. The method is then tested on the route design for pipelines carrying a two-phase geothermal fluid at the Hellisheidi Power Plant in Iceland, which is currently (early 2005) in the design and construction phase.
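
The core computation can be sketched as a least-cost search over a DEM in which each step cost is the slope-adjusted surface distance (land cost, obstacles, or other penalties could be folded into the same step cost). A Dijkstra grid search is used here as a stand-in for the distance-transform formulation, and the DEM and cell size are synthetic.

```python
# Hedged sketch: slope-adjusted least-cost pipeline route over a synthetic DEM.
import heapq
import numpy as np

rng = np.random.default_rng(4)
dem = rng.uniform(0, 50, (40, 40))           # synthetic elevations (m)
cell = 10.0                                   # grid spacing (m)

def step_cost(a, b):
    planar = cell * np.hypot(b[0] - a[0], b[1] - a[1])
    rise = dem[b] - dem[a]
    return np.hypot(planar, rise)             # slope-adjusted 2-D distance

def least_cost(start, goal):
    dist = {start: 0.0}
    frontier = [(0.0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:
            return d
        if d > dist.get(node, np.inf):
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)):
            nb = (node[0] + di, node[1] + dj)
            if 0 <= nb[0] < dem.shape[0] and 0 <= nb[1] < dem.shape[1]:
                nd = d + step_cost(node, nb)
                if nd < dist.get(nb, np.inf):
                    dist[nb] = nd
                    heapq.heappush(frontier, (nd, nb))
    return np.inf

print("pipeline route cost:", round(least_cost((0, 0), (39, 39)), 1), "m")
```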

Topics: Design , Pipes
2005;():337-344. doi:10.1115/DETC2005-85348.

The purpose of this work is to develop a novel optimization process for the design of space frames. The main objective is to minimize the space frame volume while satisfying stress constraints. A finite element program is devised to synthesize 3D space frames and to aid in their topology optimization. The program is verified on different elementary problems with known analytical solutions as well as against commercial packages. A Midi-Bus frame is modeled with about 300 members and analyzed for a severe road model condition. The optimization effectively uses the devised Heuristic Gradient Projection (HGP) technique to synthesize the optimum Midi-Bus frame. Results indicate a marked improvement over available designs and remarkably faster convergence than other optimization techniques. This technique can thus be effectively applied to the synthesis and optimization of other large 3D space frames.

2005;():345-354. doi:10.1115/DETC2005-85353.

In this paper, a new methodology for obtaining an optimal structure size considering geometric nonlinearity is presented. This method makes use of the Heuristic Gradient Projection method in addition to fuzzy logic. The Heuristic Gradient Projection (HGP) method, previously developed for 3D-frame design and optimization, utilizes mainly bending stress relations in order to simplify the iteration process. HGP is based on comparing the resulting equivalent stress with the allowable stress value. The proposed Fuzzy Heuristic Gradient Projection (FHGP) approach incorporates both bending stress and axial stress when comparing against the allowable stress value. The weighting factors for the axial and bending stresses are found using a fuzzy logic controller. Fuzzy logic is incorporated to reach an optimal solution with fewer function evaluations. A simple cantilever example, subjected to an axial force and a bending moment, is presented to illustrate this approach, in addition to a 10-member planar frame that is used to prove the efficacy of the new method. The FHGP approach generally results in faster convergence.

2005;():355-363. doi:10.1115/DETC2005-85449.

Design space exploration during conceptual design is an active research field. Most approaches generate a number of feasible design points (complying with the constraints) and apply graphical post-processing to visualize correlations between variables, the Pareto frontier, or a preference structure among the design solutions. The generation of feasible design points is often a statistical (Monte Carlo) generation of potential candidates sampled within initial variable domains, followed by a verification of constraint satisfaction, which may become inefficient if the design problem is highly constrained, since a majority of the generated candidates do not belong to the (small) feasible solution space. In this paper, we propose to perform a preliminary analysis with Constraint Programming techniques based on interval arithmetic to dramatically prune the solution space before using statistical (Monte Carlo) methods to generate candidates in the design space. This method requires that the constraints be expressed in analytical form. A case study involving truss design under uncertainty is presented to demonstrate that the computation time for generating a given number of feasible design points is greatly improved using the proposed method. The integration of both techniques provides a flexible mechanism for taking successive design refinements into account within a dynamic process of design under uncertainty.
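
A minimal sketch of the two-stage idea: one pass of interval-arithmetic pruning (hull consistency) shrinks the variable domains against an analytic constraint, and Monte Carlo sampling then draws candidates inside the pruned box. The constraint and domains are invented for illustration, not the truss case study.

```python
# Hedged sketch: interval pruning of variable domains followed by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(5)

# constraint: x + 2*y <= 10, with initial domains x in [0, 20], y in [0, 20]
x_lo, x_hi = 0.0, 20.0
y_lo, y_hi = 0.0, 20.0

# prune each domain against the constraint using the other variable's interval bounds
x_hi = min(x_hi, 10.0 - 2 * y_lo)       # x <= 10 - 2*y for the most permissive y
y_hi = min(y_hi, (10.0 - x_lo) / 2)     # y <= (10 - x)/2 for the most permissive x

def sample_feasible(n):
    pts = np.column_stack([rng.uniform(x_lo, x_hi, n), rng.uniform(y_lo, y_hi, n)])
    return pts[pts[:, 0] + 2 * pts[:, 1] <= 10.0]

hits = sample_feasible(10_000)
print(f"pruned box: x in [{x_lo}, {x_hi}], y in [{y_lo}, {y_hi}]")
print(f"feasible yield after pruning: {len(hits) / 10_000:.0%}")    # vs roughly 6% in the raw box
```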

Topics: Design
2005;():365-370. doi:10.1115/DETC2005-84330.

A methodology for reaching the best CHP plant design is proposed. The standard method for choosing the best fit in the process design of a CHP plant is improved by considering off-design simulation of the pre-selected schemes. The off-design simulation deals specifically with economic dispatch optimization applied to each pre-selected plant to calculate the operational performance under well-known heat and power loads. The economic dispatch includes fuel costs, uniform-series payments related to the investment, and operation and maintenance costs, and it evaluates the variable behavior of power and heat over time with a scenario that takes into account transactions with the utility grid as well as auxiliary or back-up boilers. The different results reached with each approach and the new point of view obtained with the proposed methodology are shown.

2005;():371-380. doi:10.1115/DETC2005-84633.

This paper presents some approaches to the optimal design of stacked-ply composite flywheels. The laminations of the disk are constructed such that the principal fiber direction is either tangential or radial. Here, optimization problems are formulated to maximize the energy density of the flywheel. This is accomplished by allowing arbitrary, continuous variation of the orientation of the fibers in the radial plies. The paper compares designs based on minimizing cost functions related to (i) the maximum stress, (ii) the maximum strain and (iii) the Tsai-Wu failure criteria. It is shown that the optimized designs provide an improvement in the flywheel energy density when compared to a standard stacked-ply design. The results also show that, for a given disk design, the estimate of the energy density can vary greatly depending on the failure criterion employed.

Topics: Flywheels , Failure
2005;():381-390. doi:10.1115/DETC2005-85136.

This paper discusses the roles that a graduate student coach experienced while working with an undergraduate design team in the development of a low-cost, low-volume plastic injection molding machine. Identified roles include design tool teacher, design reviewer, project manager, and customer. A critique of the roles, including the time spent in each role, is provided. This experience created generally higher satisfaction among the students and among the customers than had previously been seen in similar projects. Based upon this experience, it is justified to consider incorporating graduate design students as design coaches in senior design project teams.

2005;():391-399. doi:10.1115/DETC2005-85231.

During the last decade, digital prototyping has become a natural part of any industrial project dealing with product development. The reasons for this differ, but the two most obvious are the time savings and the cost effectiveness achieved when replacing the physical prototype with the cheaper digital one. Time and cost are equally, or even more, critical in academic projects. This paper describes the use of a low-cost demonstrator as a means to reduce both time and cost during a product development project course as well as to guarantee educational quality. The paper also discusses the reasons for using demonstrators in an industrial environment. When large product development project courses are given in educational engineering programs, they often strive to imitate a real industrial situation, trying to include all the phases and aspects of product realization. Time is of course critical in both environments, industrial and academic, but for slightly different reasons. A typical industrial project may run over several years, while a large educational project's duration is counted in months. Thus, if the course tutor wants to simulate the whole product development process within the same project course, there is a need for means that can speed up the project without spoiling the educational message or the industrial authenticity.

2005;():401-412. doi:10.1115/DETC2005-85111.

An engineer presented with a design challenge often creates a symmetric solution. For instance, consider a table (front-back and left-right symmetry), a car (left-right symmetry), a bridge (front-back and left-right symmetry), or the space shuttle (left-right symmetry). These examples may not be 100% symmetric, but their overriding features are remarkably similar. The reasons for the design of symmetric structures are not always clear. In some cases, like the table, symmetry may be a tradition. Similarly, the symmetry may be for aesthetic reasons. However, in automated design algorithms, especially stochastic techniques, the output is often largely asymmetric. One reason for this is that fitness functions do not reward symmetry. A possible resolution is to add a reward function for symmetry. Unfortunately, this approach is computationally intractable as well as arbitrary. In this paper a Genetic Algorithm based method is presented that rewards re-use of parts. The method is applied to a simple, idealized situation as well as to a real design case. The results show that in some situations symmetry naturally emerges from the synthesis, but that it does not provide clear performance advantages over asymmetric configurations.

2005;():413-426. doi:10.1115/DETC2005-85322.

Multi-functional design problems are characterized by strong coupling between design variables that are controlled by stakeholders from different disciplines. This coupling necessitates efficient modeling of interactions between multiple designers who want to achieve conflicting objectives but share control over design variables. Various game-theoretic protocols such as cooperative, non-cooperative, and leader/follower have been used to model interactions between designers. Non-cooperative game theory protocols are of particular interest for modeling cooperation in multi-functional design problems. These are the focus of this paper because they more closely reflect the level of information exchange possible in a distributed environment. Two strategies for solving such non-cooperative game theory problems are: a) passing Rational Reaction Sets (RRS) among designers and combining these to find points of intersection and b) exchanging single points in the design space iteratively until the solution converges to a single point. While the first strategy is computationally expensive because it requires each designer to consider all possible outcomes of decisions made by other designers, the second strategy may result in divergence of the solution. In order to overcome these problems, we present an interval-based focalization method for executing decentralized decision-making problems that are common in multi-functional design scenarios. The method involves propagating ranges of design variables and systematically eliminating infeasible portions of the shared design space. This stands in marked contrast to the successive consideration of single points, as emphasized in current multifunctional design methods. The key advantages of the proposed method are: a) targeted reduction of design freedom and b) non-divergence of solutions. The method is illustrated using two sample scenarios — solution of a decision problem with quadratic objectives and the design of multi-functional Linear Cellular Alloys (LCAs). Implications include use of the method to guide design space partitioning and control assignment.

2005;():427-436. doi:10.1115/DETC2005-85414.

This research investigates the use of quantitative measures of performance to aid the grammatical synthesis of mechanical systems. Such performance measures enable search algorithms to be used to find designs that meet requirements and optimize performance by using automatically generated performance feedback, including behavioral simulation, as a guide. The work builds on a new type of production system, a parallel grammar for mechanical systems based on a Function-Behavior-Structure representation, to generate an extensive variety of designs. Geometric and topological constraints are used to bound the design space, termed the language of the grammar, to ensure the validity of designs generated. The winding mechanism of an electromechanical camera is examined as a case study using the behavioral modeling language Modelica. Behavioral simulations are run for parametric models generated by the parallel grammar and this data is used, in addition to geometric performance metrics, for performance evaluation of generated alternative designs. Multi-objective stochastic search, in the form of a hybrid pattern search developed as part of this research, is used to generate Pareto sets of optimally directed designs of winding mechanisms, showing the design of the camera chosen for the case study to be optimally directed with respect to the design objectives considered. The Pareto sets generated illustrate the range of simulation-driven solutions that can be generated and simulated automatically as well as their performance tradeoffs.

2005;():437-446. doi:10.1115/DETC2005-85486.

This paper documents a meta-analysis of 113 data sets from published factorial experiments. The study quantifies regularities observed among factor effects and multi-factor interactions. Such regularities are known to be critical to efficient planning and analysis of experiments and to robust design of engineering systems. Three previously observed properties are analyzed — effect sparsity, hierarchy, and heredity. A new regularity is introduced and shown to be statistically significant. It is shown that a preponderance of active two-factor interaction effects are synergistic, meaning that when main effects are used to increase the system response, the interaction provides an additional increase. The potential implications for robust design are discussed.

2005;():447-455. doi:10.1115/DETC2005-84214.

In this paper, we discuss a way to extend a geometric surface feature framework known as Direct Surface Manipulation (DSM) into a volumetric mesh modeling paradigm that can be directly adopted by large-scale CAE applications involving models made of volumetric elements, multiple layers of surface elements, or both. By introducing a polynomial-based depth-blending function, we extend the classic DSM mathematics into a volumetric form. The depth-blending function possesses user-friendly features similar to those of the DSM basis functions, permitting ease of control of the continuity and magnitude of deformation along the depth of deformation. Practical issues concerning the implementation of this technique are discussed in detail, and implementation results are shown demonstrating the versatility of this volumetric paradigm for direct modeling of complex CAE mesh models. In addition, the notion of a model-independent, volumetric-geometric feature is introduced. Motivated by modeling clay with sweeps and templates, a model-independent, catalog-able volumetric feature can be created. Deformation created by such a feature can be relocated, reoriented, duplicated, mirrored, pasted, and stored independently of the model to which it was originally applied. It can serve as a design template, thereby saving the time and effort of recreating it for repeated use on different models (frequently seen in CAE-based Design of Experiments studies).
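
The depth-blending idea can be sketched by fading a surface offset through the thickness of a mesh with a smooth polynomial blend so that nodes at full depth are untouched. The cubic blend, the bump-shaped surface feature, and the node grid below are illustrative assumptions, not the paper's exact basis functions.

```python
# Hedged sketch: a depth-blended, DSM-like deformation applied to a toy volumetric node grid.
import numpy as np

def depth_blend(t):
    """Cubic polynomial blend: 1 at the surface (t=0), 0 with zero slope at full depth (t=1)."""
    t = np.clip(t, 0.0, 1.0)
    return 1.0 - 3.0 * t**2 + 2.0 * t**3

# toy volumetric node grid: x, y over the surface patch, z = depth below the surface
x, y, z = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21), np.linspace(0, 1, 5),
                      indexing="ij")
nodes = np.column_stack([x.ravel(), y.ravel(), z.ravel()])

# DSM-like surface feature: a smooth bump centered at the origin, magnitude 0.2
r = np.hypot(nodes[:, 0], nodes[:, 1])
surface_offset = 0.2 * np.where(r < 1.0, 0.5 * (1 + np.cos(np.pi * r)), 0.0)

# fade the offset through the depth so that nodes at full depth stay fixed
deformed = nodes.copy()
deformed[:, 2] -= surface_offset * depth_blend(nodes[:, 2])

print("max displacement at surface:", round(surface_offset.max(), 3))
print("max displacement at full depth:", round(
    np.abs((deformed - nodes)[nodes[:, 2] == 1.0, 2]).max(), 3))
```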

2005;():457-466. doi:10.1115/DETC2005-84287.

The work described in this paper seeks to minimize the time spent on manually reducing thin-walled CAD geometry into surface idealizations. The purpose of the geometric idealizations is the creation of shell element meshes for FE calculations. This is motivated by time and thereby cost savings, and by making the results of the calculations available earlier in the product development process so that they can guide the designs to a larger extent. Systems for automated geometry idealization and creation of FE models already exist, but this paper describes a novel approach whose working principle is to analyze how the CAD-specific features of the CAD-file history tree are constituted. This information is used to automatically create the best-practice geometric idealization in the same CAD model. An evaluation of the performance of the system on an industrial example is also presented.

2005;():467-478. doi:10.1115/DETC2005-84343.

In this research we describe a computer-aided approach to geometric tolerance analysis for assemblies and mechanisms. This new tolerance analysis method is based on the “generate-and-test” approach. A series of as-manufactured component models are generated within a NURBS-based solid modeling environment. These models reflect errors in component geometry that are characteristic of the manufacturing processes used to produce the components. The effects of different manufacturing process errors on product function are tested by simulating the assembly of these imperfect-form component models and measuring geometric attributes of the assembly that correspond to product functionality. A tolerance analysis model is constructed by generating and testing a sequence of component variants that represent a range of manufacturing process capabilities. The generate-and-test approach to tolerance analysis is demonstrated using a case study based on a high-speed stapling mechanism. As-manufactured models that correspond to two different levels of manufacturing precision are generated, and assembly between groups of components with different precision levels is simulated. Misalignment angles that correspond to functionality of the stapling mechanism are measured at the end of each simulation. The results of these simulations are used to build a tolerance analysis model and to select a set of geometric form and orientation tolerances for the mechanism components. It is found that this generate-and-test approach yields insight into the interactions between individual surface tolerances that would not be gained using more traditional tolerance analysis methods.
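
A stripped-down generate-and-test loop might look like the following: as-manufactured variants are sampled from assumed process capabilities, assembled numerically, and a functional attribute is measured to build the distribution compared against a limit. The pin-in-hole geometry, process sigmas, and functional limit are illustrative, not the stapler mechanism of the case study.

```python
# Hedged sketch: Monte Carlo generate-and-test tolerance analysis on a toy pin-in-hole fit.
import numpy as np

rng = np.random.default_rng(6)
n_variants = 5000

# assumed process capabilities for two precision levels (standard deviations in mm)
for label, sigma in (("standard precision", 0.020), ("high precision", 0.008)):
    pin_d = rng.normal(9.95, sigma, n_variants)        # as-manufactured pin diameters
    hole_d = rng.normal(10.05, sigma, n_variants)      # as-manufactured hole diameters
    engagement = 8.0                                    # pin engagement length (mm)

    clearance = np.clip(hole_d - pin_d, 0.0, None)
    tilt_deg = np.degrees(np.arctan2(clearance, engagement))    # misalignment measure

    fail = np.mean(tilt_deg > 0.9)                      # assumed functional limit: 0.9 degrees
    print(f"{label}: mean tilt {tilt_deg.mean():.2f} deg, "
          f"fraction exceeding limit {fail:.1%}")
```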

2005;():479-486. doi:10.1115/DETC2005-84738.

Rapid and representative reconstruction of geometric shape models from surface measurements has applications in diverse arenas ranging from industrial product design to biomedical organ/tissue modeling. However, despite the large body of work, most shape models have had limited success in bridging the gap between reconstruction, recognition, and analysis due to conflicting requirements. On one hand, large numbers of shape parameters are necessary to obtain meaningful information from noisy sensor data. On the other hand, search and recognition techniques require shape parameterizations/abstractions employing few robust shape descriptors. The extension of such shape models to encompass various analysis modalities (in the form of kinematics, dynamics and FEA) now necessitates the inclusion of the appropriate physics (preferably in parametric form) to support the simulation-based refinement process. Thus, in this paper we discuss the development of a class of parametric shape abstraction models termed extended superquadrics. The underlying geometric and computational data structure intimately ties together implicit, explicit, and parametric surface representations with a volumetric solid representation, which makes these models well suited for shape representation. Furthermore, such models are well suited for transitioning to analysis, for example in model-based non-rigid structure and motion recovery or for mesh generation and simplified volumetric FEA applications. However, the development of the concomitant methods and benchmarking is necessary prior to widespread acceptance. We explore some of these aspects further in this paper, supported by case studies of shape abstraction from image data in the biomedical/life-sciences arena, whose diversity and irregularities pose difficulties for more traditional models.
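
For reference, the sketch below evaluates the standard superquadric inside-outside function, the primitive that extended superquadrics generalize (for example by letting the exponents vary along an axis). The size parameters and shape exponents are example values, not taken from the paper.

```python
# Hedged sketch: implicit inside-outside function of a basic superquadric primitive.
import numpy as np

def superquadric_f(p, a=(1.0, 1.0, 1.0), eps1=0.5, eps2=1.0):
    """Inside-outside function: f < 1 inside, f = 1 on the surface, f > 1 outside."""
    x = np.abs(p[..., 0] / a[0])
    y = np.abs(p[..., 1] / a[1])
    z = np.abs(p[..., 2] / a[2])
    return ((x ** (2 / eps2) + y ** (2 / eps2)) ** (eps2 / eps1) + z ** (2 / eps1)) ** (eps1 / 2)

pts = np.array([[0.0, 0.0, 0.0],       # center
                [1.0, 0.0, 0.0],       # on the surface along x
                [1.0, 1.0, 1.0]])      # outside
print(superquadric_f(pts).round(3))    # -> [0.0, 1.0, >1]
```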

Commentary by Dr. Valentin Fuster
2005;():487-496. doi:10.1115/DETC2005-85045.

Planes, cylinders, and spheres are basic primitive surfaces not only in mechanical engineering but also in aesthetic design, the world of free-form surfaces, where they are essentially used to satisfy functional constraints, such as assembly and manufacturing requirements, or to achieve specific light effects. The early design steps are characterised by uncertainty in the definition of the precise geometry, and most of the time product constraints are only partially available. Unfortunately, until now, the insertion of primitive surfaces has required precise curve and surface specifications together with trimming operations, thus imposing that the free-form geometry be recreated each time a modification occurs. In this paper we present a method for the insertion of planar surfaces suitable for handling the uncertainty in the first draft of a product. The approach does not provide precise primitive surfaces, but it is able to introduce regions resembling such behaviour in a free-form surface without requiring trimming operations, thus allowing more efficient evaluation of shape alternatives.

Commentary by Dr. Valentin Fuster
2005;():497-507. doi:10.1115/DETC2005-85115.

In this paper, groups of individual features, i.e. a point, a line, and a plane, are called clusters and are used to constrain sufficiently the relative location of adjacent parts. A new mathematical model for representing geometric tolerances is applied to a point-line cluster of features that is used to align adjacent parts in two-dimensional space. First, tolerance-zones are described for the point-line cluster. Then, a Tolerance-Map®, a hypothetical volume of points, is established which is the range of a mapping from all possible locations for the features in the cluster. A picture frame assembly of four parts is used to illustrate the accumulations of manufacturing variations, and the T-Maps provide stackup relations that can be used to allocate size and orientational tolerances. This model is one part of a bi-level model that we are developing for geometric tolerances. At the local level the model deals with the permitted variations in a tolerance zone, while at the global level it interrelates all the frames of reference on a part or assembly.

Commentary by Dr. Valentin Fuster
2005;():509-520. doi:10.1115/DETC2005-85122.

A new math model for geometric tolerances is used to build the frequency distribution for clearance in an assembly of parts, each of which is manufactured to a given set of size and orientation tolerances. The central element of the new math model is the Tolerance-Map® (T-Map®); it is the range of points resulting from a one-to-one mapping from all the variational possibilities of a feature, within its tolerance-zone, to a specially designed Euclidean point-space. A functional T-Map represents both the acceptable range of 1-D clearance and the acceptable limits to the 3-D variational possibilities of the target face consistent with it. An accumulation T-Map represents all the accumulated 3-D variational possibilities of the target which arise from allowable manufacturing variations on the individual parts in the assembly. The geometric shapes of the accumulation and functional maps are used to compute a measure of all variational possibilities of manufacture of the parts which will give each value of clearance. The measures are then arranged as a probability density function over the acceptable range of clearance, and a beta distribution is fitted to it. The method is applied to two examples.
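
As a rough numerical analogue of the final step, one can fit a beta distribution to a set of clearance values restricted to their acceptable range. The snippet below uses synthetic clearance samples and SciPy's generic fitting routine; it does not reproduce the T-Map-based measure construction, only the distribution fit at the end, and the sample values and limits are invented.

```python
import numpy as np
from scipy import stats

# Synthetic clearance samples (mm), assumed to lie in the acceptable range [c_min, c_max]
c_min, c_max = 0.0, 0.5
rng = np.random.default_rng(0)
clearances = rng.triangular(c_min, 0.2, c_max, size=2000)  # stand-in for simulated data

# Fit a beta distribution over the fixed acceptable range
a, b, loc, scale = stats.beta.fit(clearances, floc=c_min, fscale=c_max - c_min)
print(f"beta shape parameters: a = {a:.3f}, b = {b:.3f}")

# Probability that clearance falls below some functional limit, e.g. 0.1 mm
print("P(clearance < 0.1) =", stats.beta.cdf(0.1, a, b, loc=loc, scale=scale))
```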

Commentary by Dr. Valentin Fuster
2005;():521-531. doi:10.1115/DETC2005-85260.

This paper presents a computational method for designing assemblies with a built-in disassembly pathway that maximizes the profit of disassembly while satisfying regulatory requirements for component retrieval. Given component revenues and components to be retrieved, the method simultaneously determines the spatial configurations of components and locator features on the components, such that the product can be disassembled in the most profitable sequence, via a domino-like “self-disassembly” process triggered by the removal of one or a few fasteners. The problem is posed as an optimization problem, and a multi-objective genetic algorithm is utilized to search for Pareto-optimal designs in terms of three objectives: 1) the satisfaction of distance specifications among components, 2) the efficient use of locator features on components, and 3) the profit of the overall disassembly process under the regulatory requirements. A case study with different costs for removing fasteners demonstrates the effectiveness of the method in generating design alternatives under various disassembly scenarios.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():533-542. doi:10.1115/DETC2005-85408.

Rounds and fillets are important design features. We introduce a new point-based method for constant-radius rounding and filleting. Based on the mathematical definitions of offsetting operations, discrete offsetting operations are introduced. The steps of our approach are discussed and analyzed. The methodology has been implemented and tested. We present experimental results on accuracy, memory, and running time for various input geometries and radii. Based on the test results, the method is very robust for all kinds of geometries.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():543-553. doi:10.1115/DETC2005-85431.

This paper focuses on efficient automatic recognition algorithms for turning features. As with other domains, recognition of interacting features is a difficult issue, because feature interaction removes faces and alters the topology of the existing turned features. This paper presents a method for efficiently recognizing both isolated (without interaction with other features) and interacting rotational features from the geometric CAD model of mill/turn parts. Additionally, the method recognizes Transient Turned Features (TTFs), which are defined as maximal axisymmetric material volumes from a non-turning feature that can be removed by turning. A TTF may not share any faces with the finished part. First, the rotational faces on a solid model are explored to extract isolated rotational features and some of the interacting ones. Then, portions of the 3D model in which no rotational faces can be used to recognize turning features are cut out and processed by a novel algorithm that finds their transient turning features.

Commentary by Dr. Valentin Fuster
2005;():555-564. doi:10.1115/DETC2005-85479.

Solving packing problems corresponds to finding the optimal placement of a series of objects in an enclosed space while satisfying functional requirements. The research presented in this paper proposes a “packing GA” designed especially for packing problems. It differs from the traditional GA in its encoding method and GA operators, which are tailored for the configuration design problem. In this paper, the detailed implementation and the design principles of the packing GA are presented. To evaluate the effectiveness of the proposed algorithm, a strict definition of an acceptable layout is given, so that the performance of the GA can be judged by a more meaningful criterion. The packing GA is tested against two other GAs on an 8-box packing problem. The results show the packing GA has a much better chance of finding the global optimum.

Commentary by Dr. Valentin Fuster
2005;():565-575. doi:10.1115/DETC2005-85513.

In this paper we present a simple new algorithm to offset multiple, non-overlapping polygons with arbitrary holes that makes use of winding numbers. Our algorithm constructs an intermediate “raw offset curve” as input to the tessellator routines in the OpenGL Utility library (GLU), which calculates the winding number for each connected region. By construction, the invalid loops of our raw offset curve bound areas with non-positive winding numbers and thus can be removed by using the positive winding rule implemented in the GLU tessellator. The proposed algorithm takes O((n + k) log n) time and O(n + k) space, where n is the number of vertices in the input polygon and k is the number of self-intersections in the raw offset curve. The implementation is extremely simple and reliably produces correct and logically consistent results.
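
The filtering idea, keeping only regions of the raw offset curve whose winding number is positive, can be illustrated with a standalone winding-number test. The paper delegates this step to the GLU tessellator; the crossing-based routine below is just a minimal way to evaluate the winding number of a point with respect to a closed polyline, with toy loop data.

```python
def is_left(p0, p1, p2):
    """> 0 if p2 is left of the directed line p0 -> p1."""
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def winding_number(point, loop):
    """Winding number of a closed polyline 'loop' (list of (x, y)) around 'point'."""
    wn = 0
    n = len(loop)
    for i in range(n):
        p0, p1 = loop[i], loop[(i + 1) % n]
        if p0[1] <= point[1]:
            if p1[1] > point[1] and is_left(p0, p1, point) > 0:
                wn += 1          # upward crossing, point strictly left of edge
        else:
            if p1[1] <= point[1] and is_left(p0, p1, point) < 0:
                wn -= 1          # downward crossing, point strictly right of edge
    return wn

square = [(0, 0), (2, 0), (2, 2), (0, 2)]               # counter-clockwise loop
print(winding_number((1, 1), square))                    # 1: kept by the positive rule
print(winding_number((1, 1), list(reversed(square))))    # -1: would be discarded
```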

Commentary by Dr. Valentin Fuster
2005;():577-585. doi:10.1115/DETC2005-85528.

Understanding the shape and size of different features of the human body from scanned data is necessary for automated design and evaluation of product ergonomics. In this paper, a computational framework is presented for automatic detection and recognition of several facial feature-regions from scanned head and shoulder polyhedral models. A noise-tolerant methodology is proposed using discrete curvature computations and morphological tools for isolation of the primary feature regions of the face, namely the eyes, nose and mouth. The spatial disposition of the critical points of these isolated feature-regions is analyzed for recognition of these critical points as the standard landmarks associated with the primary facial features. A number of clinically identified landmarks lie on the facial midline. An efficient algorithm for detection and processing of the midline using a point sampling technique is also presented. The results match well with human perception and with measurements made manually on the subjects.
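
One common way to obtain discrete curvature on a polyhedral mesh, in the spirit of the curvature computations mentioned here, is the angle-deficit estimate of Gaussian curvature at a vertex. The function below assumes the vertex's incident triangles are given as coordinate pairs of their other two vertices; it is a generic textbook formulation, not the authors' specific noise-tolerant pipeline.

```python
import math

def angle_at(v, a, b):
    """Interior angle at vertex v in triangle (v, a, b)."""
    def sub(p, q): return tuple(pi - qi for pi, qi in zip(p, q))
    def dot(p, q): return sum(pi * qi for pi, qi in zip(p, q))
    def norm(p): return math.sqrt(dot(p, p))
    u, w = sub(a, v), sub(b, v)
    return math.acos(max(-1.0, min(1.0, dot(u, w) / (norm(u) * norm(w)))))

def gaussian_curvature(vertex, incident_triangles):
    """Angle-deficit estimate: K ~ (2*pi - sum of incident angles) / (A/3),
    where A is the total area of the incident triangles."""
    angle_sum, area_sum = 0.0, 0.0
    for a, b in incident_triangles:  # each triangle given by its two other vertices
        angle_sum += angle_at(vertex, a, b)
        u = tuple(ai - vi for ai, vi in zip(a, vertex))
        w = tuple(bi - vi for bi, vi in zip(b, vertex))
        cx = u[1] * w[2] - u[2] * w[1]
        cy = u[2] * w[0] - u[0] * w[2]
        cz = u[0] * w[1] - u[1] * w[0]
        area_sum += 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)  # triangle area
    return (2.0 * math.pi - angle_sum) / (area_sum / 3.0)

# Flat fan of triangles around the origin -> curvature close to zero
flat_fan = [((1, 0, 0), (0, 1, 0)), ((0, 1, 0), (-1, 0, 0)),
            ((-1, 0, 0), (0, -1, 0)), ((0, -1, 0), (1, 0, 0))]
print(round(gaussian_curvature((0, 0, 0), flat_fan), 6))  # ~0.0
```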

Commentary by Dr. Valentin Fuster
2005;():587-597. doi:10.1115/DETC2005-85541.

Development of tolerance analysis methods that are consistent with the ASME and ISO GD&T (geometric dimensioning and tolerancing) standards is a challenging task. Such methods are the basis for creating computer-aided tools for 3D tolerance analysis and assemblability analysis. These tools, along with others, make it possible to realize virtual manufacturing, in order to shorten lead time and reduce cost in the product development process. Current simulation tools for 3D tolerance analysis and assemblability analysis are far from satisfactory because the underlying variation algorithms are not fully consistent with the GD&T standards. Better algorithms are still to be developed. Towards that goal, this paper proposes a complete algorithm for 3D slot features and tab features (frequently used in mechanical products) for 3D simulation-based tolerance analysis. The algorithms developed account for bonus/shift tolerances (i.e., effects from material condition specifications) and for tolerance zone interaction when multiple tolerances are specified on the same feature. A case study is conducted to demonstrate the algorithm developed. The result from this work is compared with that from the 1D tolerance chart method. The comparison study shows quantitatively why the 1D tolerance chart method, which is popular in industry, is not sufficient for tolerance analysis, which is 3D in nature.
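
The bonus-tolerance part of the computation is the most mechanical piece and can be made concrete directly. The helper below computes the extra positional tolerance gained by a hole or tab toleranced at MMC (maximum material condition) as it departs from MMC size; this is a textbook GD&T rule shown only to illustrate the abstract's reference to "bonus/shift tolerances", not the paper's full 3D slot/tab algorithm, and the sizes are invented.

```python
def bonus_tolerance(actual_size, mmc_size, internal_feature=True):
    """Extra (bonus) tolerance for a feature toleranced at MMC.

    For an internal feature (hole, slot) MMC is the smallest size, so the
    bonus grows as the feature gets larger; for an external feature (pin,
    tab) MMC is the largest size, so the bonus grows as it gets smaller.
    """
    departure = actual_size - mmc_size if internal_feature else mmc_size - actual_size
    if departure < 0:
        raise ValueError("actual size violates the MMC size limit")
    return departure

stated_position_tol = 0.10            # mm, specified at MMC (illustrative)
hole_mmc, hole_actual = 10.00, 10.06  # mm
total = stated_position_tol + bonus_tolerance(hole_actual, hole_mmc)
print(f"available positional tolerance: {total:.2f} mm")  # 0.16 mm
```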

Commentary by Dr. Valentin Fuster
2005;():599-602. doi:10.1115/DETC2005-84124.

A generative automotive engine CAPP (Computer-Aided Process Planning) system based on an Oracle database was developed. A practical solution for communicating tolerance information between the CAD and CAPP systems was proposed and realized in an independent file schema. A parametric cylinder head model based on manufacturing features was created to meet the feature recognition requirements of the CAPP system. The data structure of the tolerance information in the Pro/ENGINEER database is presented. A Pro/TOOLKIT program was developed to extract the model’s tolerance information automatically. The extracted tolerance information was then organized with the EXPRESS language and imported into the CAPP system’s Oracle database.

Commentary by Dr. Valentin Fuster
2005;():603-608. doi:10.1115/DETC2005-84274.

A method for cutting non-circular holes on a bent thick plate is proposed. Generally, in order to cut holes on large plates, a special-purpose 5-axis machine is needed. However, such a machine is unavailable in most machine shops. This paper provides a description of a method that utilizes a general-purpose 5-axis water-jet machine in place of the special-purpose machine: First, the bent piece is transformed into a flat plate, where the shape of the holes is reconstructed by considering deformation during bending. Then, after 5-axis NC data is generated, the holes on the flat plate are cut using the 5-axis water-jet machine. In the final step, the desired shape of the piece is obtained by bending the plate with its newly-cut holes. Some illustrations are provided in order to show the validity of the proposed method.

Commentary by Dr. Valentin Fuster
2005;():609-613. doi:10.1115/DETC2005-84367.

The fundamental principle by which an MR fluid fan clutch transmits torque is analyzed, and a shear model of the MR clutch is proposed. An MR fluid fan clutch with a simple, novel structure is designed and built. On the basis of experiments, the speed-regulating characteristics of the clutch are studied in detail. The experimental results indicate that, compared with the shear rate, changes in the magnetic field have a far greater influence on the speed-regulating characteristic of the fan clutch, and that the output torque of the fan clutch can satisfy the demands of an engine cooling fan.

Topics: Fluids , Design
Commentary by Dr. Valentin Fuster
2005;():615-626. doi:10.1115/DETC2005-84768.

Springback is a significant manufacturing defect in the stamping process. A serious impediment to the use of lighter-weight, higher-strength materials in manufacturing is the relative lack of understanding about how these materials respond to the complex forming process. The springback problem can be reduced by using appropriate designs of the die, punch, and blank holder shape, together with friction and blank holding force. That is, an optimum stamping process can be determined using gradient-based optimization to minimize springback. However, for an effective optimization of the stamping process, development of an efficient analytical design sensitivity analysis method is crucial. In this paper, a continuum-based shape and configuration design sensitivity analysis (DSA) method for the stamping process has been developed. The material derivative concept is used to develop the continuum-based design sensitivity. The design sensitivity equation is solved without iteration at each converged load step in the finite deformation elastoplastic nonlinear analysis with frictional contact, which makes the design sensitivity calculation very efficient. The accuracy and efficiency of the proposed method are illustrated by minimizing springback in an S-rail part, which is often used as an industrial benchmark to verify the numerical procedures employed for stamping processes.

Commentary by Dr. Valentin Fuster
2005;():627-636. doi:10.1115/DETC2005-84794.

The automobile seat must satisfy various safety regulations for the passenger’s safety. In many design practices, each component is independently designed by concentrating on a single regulation. However, since multiple regulations can be involved in a single seat component, there may be design conflicts among the various safety regulations. Therefore, a new design methodology is required to effectively design an automobile seat. The axiomatic approach is employed to consider multiple regulations. The Independence Axiom is used to define the overall flow of the seat design. Functional requirements (FRs) are defined by safety regulations, and components of the seat are classified into groups which yield design parameters (DPs). The classification is carried out to keep the independence in the FR-DP relationship. Components in the DP group are determined by using the orthogonal arrays of the design of experiments (DOE). Numerical analyses are utilized to evaluate the safety levels by using a commercial software system for nonlinear transient finite element analysis.

Commentary by Dr. Valentin Fuster
2005;():637-646. doi:10.1115/DETC2005-84968.

The paper describes the design optimization of different refractory components used in the continuous casting process. In the first case, an impact pad of a continuous caster tundish is optimized for its turbulence suppression capability, while the inclusion particle trapping of the design is monitored. The impact pad is used in isolation as the only tundish furniture component. In the second case, the Submerged Entry Nozzle (SEN) of the continuous caster mold is optimized for minimum meniscus turbulent kinetic energy (i.e., stable meniscus). In both cases, the design variables are geometrical in nature. The steady-state flow and thermal patterns in the tundish and mold are obtained using the commercial CFD solver FLUENT. In order to perform optimization, the geometries are parameterized and incorporated into a mathematical optimization problem. FLUENT and its pre-processor GAMBIT are linked to a commercial design optimization tool, LS-OPT, to automatically improve the designs using metamodel approximations. The optimization results show a reduction of 12.5% in the turbulence on the slag layer of the tundish, while for the SEN, the results for one design iteration only are shown, due to the high cost of the function evaluations. The final paper will contain additional results. The SEN base and improved designs are validated using water modeling.

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2005;():647-653. doi:10.1115/DETC2005-85097.

The methodology presented in this paper is implemented through a tool that integrates the functionality needed to perform accurate CHP market analysis. This tool includes the selection of target market segments and representative buildings, hourly building loads and characteristics, alternative CHP configurations, control rules and equipment management strategies, as well as detailed utility rates, components-based economics and reliability data. Results obtained by using the full capability of this tool are compared with less rigorous screening methods that use average building loads, constant equipment characteristics, and average utility rates. The comparison of results demonstrates that the utilization of the latter methods allows faster market screenings, but generates results that may lead to loss of capital investment, equipment operation and designs that are far from optimal, and erroneous energy policies.

Commentary by Dr. Valentin Fuster
2005;():655-663. doi:10.1115/DETC2005-85455.

Inspection is an important stage in the manufacturing process of machined parts. Coordinate measuring machines (CMM) have become more automatic, programmable, and capable of fulfilling the growing demands of inspection. However, fixturing (datum alignment) of parts is still done manually, consuming valuable inspection time. In this paper, we describe an automated datum alignment technique which integrates a vision system with the CMM to avoid part fixturing. The rough position of the part is estimated through image analysis. This initial reference frame drives the CMM through an automatic datum alignment procedure, thereby automatically establishing the reference frame without the use of fixtures. This technique has been demonstrated for two and a half dimensional (2.5D) machined parts with well-defined features that exhibit a stable position on a flat table.

Commentary by Dr. Valentin Fuster
2005;():665-670. doi:10.1115/DETC2005-85473.

In order to prevent a machine tool feed slide system from experiencing transient vibrations during operation, machine tool designers usually adopt typical design solutions: box-in-box type feed slides, optimizing the moving body for minimum weight and dynamic compliance, and so on. Despite all efforts to optimize the design, a feed drive system may experience severe transient vibrations during high-speed operation if its feed-rate control is not suitable. A rough feed-rate curve having a discontinuity in its acceleration profile causes serious vibrations in the feed slide system. This paper presents a feed-rate optimization of a ball screw driven machine tool feed slide system for minimum vibration. A ball screw feed drive system was mathematically modeled as a 6-degree-of-freedom lumped parameter model. Then, a feed-rate optimization of the system was carried out for minimum vibration. The main idea of the feed-rate optimization is to find the most appropriate smooth acceleration profile having jerk continuity. A genetic algorithm (GA) was used in this feed-rate optimization.
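
The notion of a "smooth acceleration profile having jerk continuity" can be made concrete with a simple closed form. The profile below uses a sine-squared acceleration pulse, whose derivative (the jerk) is continuous and vanishes at both ends of the acceleration phase; the paper instead searches for the profile with a genetic algorithm, so this is only an illustrative example of the property being optimized, with made-up numbers.

```python
import numpy as np

def sine_squared_accel(t, a_peak, T):
    """Acceleration pulse a(t) = a_peak * sin^2(pi * t / T) on [0, T].
    Its derivative (jerk) is a_peak * (pi/T) * sin(2*pi*t/T): continuous,
    and zero at t = 0 and t = T, so no jerk discontinuity at the ends."""
    return a_peak * np.sin(np.pi * t / T) ** 2

T, a_peak = 0.2, 5.0                 # s, m/s^2 (illustrative values)
t = np.linspace(0.0, T, 1001)
a = sine_squared_accel(t, a_peak, T)
v = np.cumsum(a) * (t[1] - t[0])     # numerically integrate to a feed-rate curve

jerk = np.gradient(a, t)
print("max jerk magnitude:", round(np.max(np.abs(jerk)), 3))
print("velocity reached:", round(v[-1], 4), "m/s")   # = a_peak * T / 2
```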

Commentary by Dr. Valentin Fuster
2005;():671-680. doi:10.1115/DETC2005-85489.

The effectiveness of using Computer Aided Engineering (CAE) tools to support design decisions is often hindered by the enormous computational demand of complex analysis models, especially when uncertainty is considered. Approximations of analysis models, also known as “metamodels”, are widely used to replace analysis models for optimization under uncertainty. However, due to the inherent nonlinearity in occupant responses during a crash event and the relatively large numbers of uncertain variables and responses, naive application of metamodeling techniques can yield misleading results with little or no warning from the algorithms which generate the metamodels. Furthermore, in order to improve the quality of metamodels, a relatively large design of experiments (DOE) and comparatively expensive metamodeling techniques, such as kriging or radial basis functions (RBF), are necessary. Thus, sampling-based methods, e.g. Monte Carlo simulations, for obtaining the statistical quantities of system responses during the optimization loop may still be inefficient even for these metamodels. In recent years, analytical uncertainty propagation via metamodels was proposed by Chen et al. (2004); it provides analytical formulations of mean and variance evaluations via a variety of metamodeling techniques to reduce the computational time and improve the convergence behavior of optimization under uncertainty. An occupant restraint system design problem is used as an example to test the applicability of this method.

Commentary by Dr. Valentin Fuster
2005;():681-689. doi:10.1115/DETC2005-85502.

In this paper, some new developments to the packing optimization method based on the rubber band analogy are presented. This method solves packing problems by simulating the physical movements of a set of objects wrapped by a rubber band in the case of two-dimensional problems, or by a rubber balloon in the case of three-dimensional problems. The objects are subjected to elastic forces applied by the rubber band to their vertices as well as reaction forces when contacts between objects occur. Based on these forces, objects translate or rotate until maximum compactness is reached. To improve the compactness further, the method is enhanced by adding two new operators: volume relaxation and temporary retraction. These two operators allow a temporary volume (elastic energy) increase to obtain potentially better packing results. The method is implemented and applied to three-dimensional objects of arbitrary shape.

Commentary by Dr. Valentin Fuster
2005;():691-698. doi:10.1115/DETC2005-85509.

Vibration attenuation techniques in cutting tools can save old machines and enhance design flexibility in new manufacturing systems. The finite element method is employed to investigate structural stiffness, damping, and switching methodology when smart material is used for tool error attenuation. This work discusses the limitations of using lumped mass modeling in toolpost dynamic control. A transient solution for tool tip displacement is obtained when pulse width modulation (PWM) is used for smart material activation during compensation of the radial disturbing cutting forces. Accordingly, a fuzzy algorithm is developed to control the actuator voltage level toward improved dynamic performance. The minimum number of PWM cycles required in each disturbing force period to diminish tool error is investigated. The time delay of the applied voltage during error attenuation is also evaluated. The toolpost static force-displacement diagram, required to predict voltage intensities for error reduction, is tested under different dynamic operating conditions.

Commentary by Dr. Valentin Fuster
2005;():699-708. doi:10.1115/DETC2005-84079.

Fibre Metal Laminates (FMLs) are a member of the hybrid materials family, consisting of alternating metal layers and layers of fibres embedded in a resin. Improved damage resistance and tolerance result in a significant weight and maintenance cost reduction compared to aluminium. FMLs also give the aircraft engineer additional design freedom, such as local tailoring of laminate properties. However, experience has shown that FMLs provide the aircraft manufacturer with many challenges as well. With increasing complexity of the structure, requirements from different disciplines within the engineering process will start to interfere, resulting in conflicts. This article discusses the current engineering process of FML fuselage panels as applied at Stork/Fokker Aerospace (FAESP). A case study is presented, clarifying the current design process and the way requirements start to interfere during the engineering process. A new approach based on Knowledge Engineering is discussed, implementing knowledge from engineers from all disciplines in an early stage of the design process. An automated design approach for FML fuselage panels is presented, using the same design parameters as the current approach. Because of the high complexity of the design, requirements start to conflict. Fulfilling all requirements with a traditional engineering approach results in an iterative and time-consuming process. Automation of the design process, integrating knowledge and requirements from all disciplines, results in a fast and transparent design approach.

Commentary by Dr. Valentin Fuster
2005;():709-718. doi:10.1115/DETC2005-84869.

A flexible information model for systematic development and deployment of product families during all phases of the product realization process is crucial for product-oriented organizations. In this paper we propose a unified information model to capture, share, and organize product design contents, concepts, and contexts across different phases of the product realization process using a web ontology language (OWL) representation. Representing product families by preconceived common ontologies shows promise in promoting component sharing while facilitating search and exploration of design information over various phases and spanning multiple products in a family. Three distinct types of design information, namely, (1) customer needs, (2) product functions, and (3) product components captured during different phases of the product realization process, are considered in this paper to demonstrate the proposed information model. Product vector and function component mapping matrices along with the common ontologies are utilized for designer-initiated information exploration and aggregation. As a demonstration, six products from a family of power tools are represented in OWL DL (Description Logic) format, capturing distinct information needed during the various phases of product realization.

Commentary by Dr. Valentin Fuster
2005;():719-727. doi:10.1115/DETC2005-85284.

The product development process is a series of asynchronous process steps in which the geometry, the materials, and the manufacturing processes are defined to meet the performance and cost requirements. During the product design, information about a product is initially sparse and becomes more detailed as the process matures. In this paper, we apply a systems complexity analysis methodology to track the evolution of information complexity for several design process workflows. We used a frame-slot based model to store parametric design information, defined the size and link complexity measures for the design information and tracked the evolution of the knowledge-base complexity throughout the design process. Product design through injection molding is taken as an example to illustrate the utility of our approach and the static and dynamic aspects of the complexity of design information are analyzed.

Commentary by Dr. Valentin Fuster
2005;():729-742. doi:10.1115/DETC2005-85686.

This work explains the development of an integrated modeler, which is applied in the design-to-manufacturing stages of manufacturing processes, namely machining, sheet metal processing and forging. Its system architecture is broadly divided into four modules, namely Feature Based Design (FBD), Virtual Factory Environment (VFE), Process Based Feature Mapping (PBFM) and Process Planning (PP). Feature based design is used for the design, modeling, synthesis, representation and validation of the components for manufacturing applications. A new set of features, namely integrated features, is pre-defined as feature templates and instanced to derive the information required for the design-to-manufacturing stages of the components. The VFE defines the factory, which provides the database for operations, machines, cutting tools, work pieces, etc. The knowledge base of the developed system maps validated features of the component into operation sets in the first phase of PBFM. Each operation in the operation sets can be executed using different machines and tools in a factory. All these possible choices are obtained in the second phase of PBFM. A genetic algorithm is used to find the optimal sequence of operations, machines and tools for different criteria in the process planning stage. This paper explains the developed system with case studies.

Topics: Manufacturing
Commentary by Dr. Valentin Fuster
2005;():743-757. doi:10.1115/DETC2005-85041.

The process of constructing computationally benign approximations of expensive computer simulation codes, or metamodeling, is a critical component of several large-scale Multidisciplinary Design Optimization (MDO) approaches. Such applications typically involve complex models, such as finite elements, computational fluid dynamics, or chemical processes. The decision regarding the most appropriate metamodeling approach usually depends on the type of application. However, several newly-proposed kernel-based metamodeling approaches can provide consistently accurate performance for a wide variety of applications. The authors recently proposed one such novel and effective metamodeling approach — the Extended Radial Basis Function (E-RBF) approach — and reported encouraging results. To further understand the advantages and limitations of this new approach, we compare its performance to that of the typical radial basis function approach, and another closely related method — kriging. Several test functions with varying problem dimensions and degrees of nonlinearity are used to compare the accuracies of the metamodels using these metamodeling approaches. We consider several performance criteria, such as metamodel accuracy, effect of sampling technique, effect of problem dimension, and computational complexity. The results suggest that the E-RBF approach is a potentially powerful metamodeling approach for MDO-based applications.
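
As background for the comparison, a plain multiquadric RBF interpolator, the baseline that E-RBF extends, can be written in a few lines. The sketch below fits exact-interpolation weights by solving the usual dense linear system; the extended (non-radial) basis functions and the kriging comparison from the paper are not included, and the test function and shape parameter are arbitrary.

```python
import numpy as np

def multiquadric(r, c=1.0):
    return np.sqrt(r ** 2 + c ** 2)

def fit_rbf(X, y, c=1.0):
    """Solve Phi w = y for the interpolation weights."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(multiquadric(r, c), y)

def predict_rbf(Xq, X, w, c=1.0):
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return multiquadric(r, c) @ w

# Toy 2-D test function sampled at random points
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2

w = fit_rbf(X, y)
Xq = rng.uniform(-2, 2, size=(5, 2))
print(np.round(predict_rbf(Xq, X, w), 3))
print(np.round(np.sin(Xq[:, 0]) + Xq[:, 1] ** 2, 3))  # true values for comparison
```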

Topics: Functions
Commentary by Dr. Valentin Fuster
2005;():759-771. doi:10.1115/DETC2005-85043.

Metamodels are becoming increasingly popular for representing unknown black box functions. Several metamodel classes exist, including response surfaces and spline-based models, kriging and radial basis function models, and neural networks. For an inexperienced user, selecting an appropriate metamodel is difficult due to a limited understanding of the advantages and disadvantages of each metamodel type. This paper reviews several major metamodeling techniques with respect to their advantages and disadvantages and compares several significant metamodel types for use as a black box metamodeling tool. The results make a strong case for using Non-Uniform Rational B-spline (NURBs) HyPerModels as a generic metamodeling tool.

Commentary by Dr. Valentin Fuster
2005;():773-783. doi:10.1115/DETC2005-85146.

This paper presents a new method to construct response surface functions and a new hybrid optimization method. For the response surface function, the radial basis function is used for a zeroth-order approximation, while a new basis is proposed for the moving least squares method for a first-order approximation. For the new hybrid optimization method, a gradient-based algorithm and a pattern search algorithm are integrated for a robust and efficient optimization process. These methods are based on: (1) multi-point approximations of the objective and constraint functions; (2) a multi-quadric radial basis function for the zeroth-order function representation, or a radial basis function plus polynomial-based moving least squares approximation for the first-order function approximation; and (3) a pattern search algorithm to impose a descent condition. Several numerical examples are presented to illustrate the accuracy and computational efficiency of the proposed method for both function approximation and design optimization. The examples for function approximation indicate that the multi-quadric radial basis function and the proposed radial basis function plus polynomial-based moving least squares method can yield accurate estimates of arbitrary multivariate functions. Results also show that the hybrid method developed provides efficient and convergent solutions to both mathematical and structural optimization problems.

Commentary by Dr. Valentin Fuster
2005;():785-798. doi:10.1115/DETC2005-85406.

Probabilistic design in complex design spaces is often a computationally expensive and difficult task because of the highly nonlinear and noisy nature of those spaces. Approximate probabilistic methods, such as the First-Order Second-Moment (FOSM) method and the Point Estimate Method (PEM), have been developed to alleviate the high computational cost. However, both methods have difficulty with non-monotonic spaces, and FOSM may have convergence problems if noise on the space makes it difficult to calculate accurate numerical partial derivatives. Use of Design and Analysis of Computer Experiments (DACE) methods to build polynomial meta-models is a common approach which both smoothes the design space and significantly improves computational efficiency. However, this type of model is inherently limited by the properties of the polynomial function and its transformations. Therefore, polynomial meta-models may not accurately represent the portion of the design space that is of interest to the engineer. The objective of this paper is to utilize Gaussian Process (GP) techniques to build an alternative meta-model that retains the properties of smoothness and fast execution but has a much higher level of accuracy. If available, this high-quality GP model can then be used for fast probabilistic analysis based on a function that much more closely represents the original design space. Achieving the GP goal of a highly accurate meta-model requires a level of mathematics that is much more complex than the mathematics required for regular linear and quadratic response surfaces. Many difficult mathematical issues encountered in the implementation of the Gaussian Process meta-model are addressed in this paper. Several selected examples demonstrate the accuracy of the GP models and the efficiency improvements related to probabilistic design.
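
A minimal Gaussian Process regression with a squared-exponential kernel shows the kind of smooth, fast-to-evaluate meta-model the paper relies on. This is the textbook noise-free posterior mean and variance; the implementation issues the paper addresses (hyperparameter estimation, conditioning on noisy spaces) are not handled here, and the kernel parameters and training data below are fixed by hand for illustration.

```python
import numpy as np

def sq_exp_kernel(A, B, length=0.5, signal=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return signal ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at the test points."""
    K = sq_exp_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = sq_exp_kernel(X_test, X_train)
    Kss = sq_exp_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

X = np.array([0.0, 0.3, 0.6, 1.0, 1.4, 2.0])
y = np.sin(2 * np.pi * X / 2.0)           # cheap stand-in for expensive simulations
Xq = np.linspace(0.0, 2.0, 5)
mean, var = gp_posterior(X, y, Xq)
print(np.round(mean, 3))
print(np.round(np.sqrt(var), 3))          # predictive standard deviation
```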

Commentary by Dr. Valentin Fuster
2005;():799-806. doi:10.1115/DETC2005-85469.

This study presents a compromise approach to the augmentation of response surface (RS) designs to achieve a desired level of accuracy. RS models are frequently used as surrogate models in multidisciplinary design optimization of complex mechanical systems. Augmentation is necessitated by the high computational expense typically associated with each function evaluation. As a result, previous results from lower-fidelity models are incorporated into the higher-fidelity RS designs. The compromise approach yields higher-quality parametric polynomial response surface approximations than traditional augmentation. Based on the D-optimality criterion as a measure of RS design quality, the method simultaneously considers several polynomial models during the RS design, resulting in good quality designs for all models under consideration, as opposed to good quality designs only for lower-order models as in the case of traditional augmentation. Several numerical examples and an engineering example are presented to illustrate the efficacy of the approach.
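
The D-optimality criterion that drives the augmentation can be evaluated with a one-line determinant. The sketch below scores candidate augmentation points for a quadratic model in one variable by the resulting log det(X'X); it only illustrates the criterion itself, not the paper's compromise strategy of balancing several candidate model forms at once, and the existing runs and candidate grid are invented.

```python
import numpy as np

def model_matrix(x):
    """Design matrix for a quadratic polynomial model in one variable."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones_like(x), x, x ** 2])

def log_det_information(x):
    """log det(X'X): larger is better under the D-optimality criterion."""
    X = model_matrix(x)
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

existing = [-1.0, 0.0, 1.0]                  # runs already available from earlier work
candidates = np.linspace(-1.0, 1.0, 21)      # possible augmentation points

scores = [log_det_information(existing + [c]) for c in candidates]
best = candidates[int(np.argmax(scores))]
print("best single augmentation point:", best)
```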

Topics: Design , Polynomials
Commentary by Dr. Valentin Fuster
2005;():807-821. doi:10.1115/DETC2005-85061.

A paradigm shift is underway in which the classical materials selection approach in engineering design is being replaced by the design of material structure and processing paths on a hierarchy of length scales for multifunctional performance requirements. In this paper, the focus is on designing mesoscopic material topology—the spatial arrangement of solid phases and voids on length scales larger than microstructures but smaller than the characteristic dimensions of an overall product. A robust topology design method is presented for designing materials on mesoscopic scales by topologically and parametrically tailoring them to achieve properties that are superior to those of standard or heuristic designs, customized for large-scale applications, and less sensitive to imperfections in the material. Imperfections are observed regularly in cellular material mesostructure and other classes of materials because of the stochastic nature of process-structure-property relationships. The robust topology design method allows us to consider imperfections explicitly in a materials design process. As part of the method, guidelines are established for modeling dimensional and topological imperfections, such as tolerances and cracked cell walls, as deviations from intended material structure. Also, as part of the method, robust topology design problems are formulated as compromise Decision Support Problems, and local Taylor-series approximations and strategic experimentation techniques are established for evaluating the impact of dimensional and topological imperfections, respectively, on material properties. Key aspects of the approach are demonstrated by designing ordered, prismatic cellular materials with customized elastic properties that are robust to dimensional tolerances and topological imperfections.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():823-834. doi:10.1115/DETC2005-85148.

Through the use of generalized spherical harmonic basis functions, a spectral representation is used to model the microstructure of cubic materials. This model is then linked to the macroscopic elastic properties of materials with Cubic Triclinic and Cubic Axial-symmetric symmetry. The influence that elastic anisotropy has on the fatigue response of the material is then quantified. This is accomplished by using the effective elastic stiffness tensor in the computation of the crack extension force, G. The resulting material model and macroscopic property calculations are the foundation for a software package which provides an interface to the microstructure. The Microstructure Sensitive Design interface (MDSi) enables interaction with the material design process and provides the tools needed to incorporate material parameters with traditional design, optimization, and analysis software. The microstructure of the material can then be optimized concurrently with other engineering models to increase the overall design space. The influence of microstructure on the performance of a spinning disc is explored. The additional design space afforded by inclusion of the material parameters shows that, for both the Cubic Triclinic and Cubic Axial-symmetric material symmetry conditions, G can be reduced by more than an order of magnitude. For the Cubic Axial-symmetric condition, a Cube <001> fiber texture and a <111> fiber texture are identified as the best performing orientation distributions.

Commentary by Dr. Valentin Fuster
2005;():835-843. doi:10.1115/DETC2005-85290.

Multi-material structures take advantage of the beneficial properties of different materials to achieve an increased level of functionality. In an effort to reduce the weight of vehicle components such as brake disk rotors, which are generally made of cast iron, light materials such as aluminum alloys may be used. These materials, however, may lead to unacceptable temperature levels. Alternatively, functionally graded structures may offer a significant decrease in weight without altering thermal performance. The design of such structures is not trivial and is the focus of this paper. The optimization combines a transient heat transfer finite element code with a genetic algorithm. This approach offers the possibility of finding a global optimum in a discrete design space, although this advantage is balanced by the high computational expense of the many finite element analyses. The goal is to design a brake disk rotor for minimum weight and optimal thermal behavior using two different materials. Knowing that computational time can quickly become prohibitively high, strategies such as finite element grouping to reduce the number of design variables and local mesh refinement must be employed to efficiently solve the design problem. This paper discusses the strengths and weaknesses of the proposed design method.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():845-857. doi:10.1115/DETC2005-85316.

Simulation Based Engineering Science (SBES) is an evolving interdisciplinary research area rooted in the methods for modeling multiscale, multi-physics events. The objective in SBES is to develop methodologies that are foundational to designing multiscale systems by accounting for phenomena at multiple scales of length and time. Some of the key challenges faced in SBES include the lack of methods for bridging various time and length scales, the management of models and of the uncertainty associated with them, the management of the huge amount and variety of information, and methods for efficient decision making based on the available models. Although efforts have been made to address some of these challenges for individual application domains, a domain-independent framework for addressing these challenges associated with multiscale problems is not currently available in the literature. In this paper, we make a clear distinction between multiscale modeling and multiscale design. Multiscale modeling deals with efficient integration of information from multiscale models to gain a holistic understanding of the system, whereas multiscale design deals with efficient utilization of information to satisfy design objectives. Our focus in this paper is on multiscale design. In order to address the challenges associated with multiscale design, we propose a domain-independent strategy that is based on understanding the generic interaction patterns between models at multiple scales. The design strategy outlined in this paper has as its foundation a systems-based approach for designing design processes (meta-design) and robust design. The concepts are illustrated with a multiscale design problem from the materials domain.

Commentary by Dr. Valentin Fuster
2005;():859-870. doi:10.1115/DETC2005-85335.

In this paper, we propose an Inductive Design Exploration Method (IDEM) which can be used to design materials and products concurrently and systematically. IDEM facilitates hierarchical materials and product design synthesis, which includes multi-scale material structure and product analysis chains, and uncertainty in models and its propagation through the chains. In this method, we sequentially identify a ranged set of feasible specifications, instead of an optimal point solution in each segment of a hierarchical design process. The feasible spaces are searched from top-level design requirements to product and materials specifications taking into account propagated uncertainty. Strategies for parallelizing computations and achieving a robust solution for uncertainty in models are also addressed. The method is demonstrated with a simple example of designing a clay-filled polyethylene cantilever beam.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():871-879. doi:10.1115/DETC2005-84751.

This paper describes a generalized Cahn-Hilliard model for the topology optimization of multi-material structures. Unlike the traditional Cahn-Hilliard model applied to spinodal separation, which has only bulk energy and interface energy, the generalized model couples the elastic energy into the total free energy. As a result, the morphology of the small phase domains during the phase separation and grain coarsening process is not random islands and zigzag web-like objects but a regular truss structure. Although disturbed by the elastic energy, the Cahn-Hilliard system still keeps its two most important properties: energy dissipation and mass conservation. Therefore, it is unnecessary to compute Lagrange multipliers for the volume constraints or to make a great effort to minimize the elastic energy in the optimization of the structural topology. Furthermore, this model also makes the simple interpolation of stiffness tensors reasonable for multi-material structures in real simulations. To solve these fourth-order nonlinear parabolic Cahn-Hilliard equations coupled with elastic energy, we developed a powerful multigrid algorithm. Finally, we demonstrate that this new method is effective in optimizing the topology of multi-material structures through several 2-D examples.
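
For reference, the structure of the governing equations of an elastically coupled Cahn-Hilliard system is, in a generic form (the paper's specific bulk energy, interface coefficient, mobility, and stiffness interpolation are not reproduced here):

```latex
% Generalized Cahn-Hilliard free energy with an elastic contribution (generic form)
F(c,\mathbf{u}) = \int_{\Omega} \Big[ f(c) + \tfrac{\varepsilon^{2}}{2}\,|\nabla c|^{2}
                  + W_{\mathrm{el}}\big(c,\boldsymbol{\varepsilon}(\mathbf{u})\big) \Big]\, d\Omega

% Conserved (mass-preserving) gradient-flow evolution of the phase field c
\frac{\partial c}{\partial t} = \nabla \cdot \big( M \,\nabla \mu \big),
\qquad
\mu = \frac{\partial f}{\partial c} - \varepsilon^{2}\,\nabla^{2} c
      + \frac{\partial W_{\mathrm{el}}}{\partial c}
```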

Commentary by Dr. Valentin Fuster
2005;():881-887. doi:10.1115/DETC2005-84761.

Towards the goal of developing a new methodology for control of vibration in flexible structures, this paper introduces the concept of modal disparity and addresses the topology optimization problem for maximizing the disparity. The modal disparity in a structure is generated by the application of forces that vary the stiffness of the structure and a topology optimization problem determines the best locations for application of these forces. When the forces are switched on and off and, as a result, the structure is switched between two stiffness states, modal disparity results in vibration energy being transferred from a set of uncontrolled modes to a set of controlled modes. This allows the vibration of the structure to be completely attenuated by removing energy from the small set of controlled modes. Simulation results are presented to demonstrate control of vibration in two truss-like structures exploiting modal disparity.

Commentary by Dr. Valentin Fuster
2005;():889-898. doi:10.1115/DETC2005-84904.

Formulations for the automatic synthesis of two-dimensional bistable, compliant periodic structures are presented, based on standard methods for topology optimization. The design space is parameterized using non-linear beam elements and a ground structure approach. A performance criterion is suggested, based on characteristics of the load-deformation curve of the compliant structure. A genetic algorithm is used to find candidate solutions. A numerical implementation of this methodology is discussed and illustrated using a simple example.

Commentary by Dr. Valentin Fuster
2005;():899-907. doi:10.1115/DETC2005-84965.

This paper presents a new method for designing vehicle structures for crashworthiness using surrogate models and a genetic algorithm. Inspired by classifier ensemble approaches in pattern recognition, the method estimates the crash performance of a candidate design based on an ensemble of surrogate models constructed from different sets of samples of finite element analyses. Multiple sub-populations of candidate designs are evolved, in a co-evolutionary fashion, to minimize different aggregates of the outputs of the surrogate models in the ensemble, as well as the raw output of each surrogate. With the same sample size of finite element analyses, it is expected that the method can provide wider ranges of potentially high-performance designs than conventional methods that employ a single surrogate model, by effectively compensating for the errors associated with individual surrogate models. Two case studies on simplified and full vehicle models subject to full-overlap frontal crash conditions are presented for demonstration.
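
The ensemble idea of fitting several surrogates on different sample sets and aggregating their predictions can be mocked up quickly. The sketch below trains quadratic polynomial surrogates on bootstrap resamples of a cheap stand-in for the expensive response and reports mean and worst-case aggregates; the co-evolutionary GA and the crash models themselves are not represented, and all names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_response(x):
    """Cheap stand-in for a crash-response simulation at design x (1-D)."""
    return np.sin(3.0 * x) + 0.3 * x ** 2

# One shared pool of 'finite element' samples
X_pool = rng.uniform(-2.0, 2.0, size=30)
y_pool = expensive_response(X_pool)

# Build an ensemble of surrogates from different bootstrap subsets of the pool
ensemble = []
for _ in range(5):
    idx = rng.choice(len(X_pool), size=20, replace=True)
    ensemble.append(np.polynomial.Polynomial.fit(X_pool[idx], y_pool[idx], deg=2))

x_query = np.linspace(-2.0, 2.0, 5)
preds = np.array([model(x_query) for model in ensemble])   # (n_models, n_points)

print("mean aggregate :", np.round(preds.mean(axis=0), 3))
print("worst-case     :", np.round(preds.max(axis=0), 3))  # pessimistic aggregate
print("truth          :", np.round(expensive_response(x_query), 3))
```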

Commentary by Dr. Valentin Fuster
2005;():909-919. doi:10.1115/DETC2005-85176.

This paper discusses a new topology optimization method using frame elements for the design of mechanical structures at the conceptual design phase. The optimal configurations are determined by maximizing multiple eigen-frequencies in order to obtain the most stable structures for dynamic problems. The optimization problem is formulated using frame elements having ellipsoidal cross-sections, as the simplest case. The optimization procedure is constructed based on CONLIN and the complementary strain energy concept. Finally, several examples are presented to confirm that the proposed topology optimization method is useful.

Commentary by Dr. Valentin Fuster
2005;():921-930. doi:10.1115/DETC2005-85518.

With the advent of robots in modern-day manufacturing workcells, optimization of the robotic workcell layout (RWL) is crucial in minimizing the production cycle time. Although RWL shares many aspects with the well-known facility layout problem (FLP), there are features which set RWL apart. However, the common features they share enable approaches from FLP to be ported over to RWL. One heuristic gaining popularity is the genetic algorithm (GA). In this paper, we present a GA approach to optimizing RWL, using the distance covered by the robot arm as a means of gauging the degree of optimization. The approach is constructive: the different stations within the workcell are placed one by one in the development of the layout. The placement method adopted is based on the spiral placement method first broached by Islier (1998). The algorithm was implemented in Visual C++ and a case study assessed its performance.

Commentary by Dr. Valentin Fuster
2005;():931-937. doi:10.1115/DETC2005-85587.

A mechanism is a device that transmits motion in a predetermined manner in order to accomplish specific objectives. Mechanism design can be divided into three steps: type synthesis, number synthesis and dimensional synthesis, where number synthesis is also called topological synthesis. In this paper, a new approach for the topological synthesis and dimensional synthesis of linkage mechanisms with pin joints is presented. This approach is based on the discrete element approach, which always provides clear definitions of the number of linkages and joints. In order to extend its applications beyond compliant mechanisms, a novel analysis method based on the principle of minimum potential energy is employed for linkage topology optimization. Unlike traditional FEM-based approaches, this novel analysis method can be applied directly to multiple-joint linkage designs. A genetic algorithm is chosen as the optimizer. Finally, a few design examples from the proposed method are presented.

Commentary by Dr. Valentin Fuster
2005;():939-945. doi:10.1115/DETC2005-85605.

In this paper, topology optimization of structure subject to design-dependent loads is studied. The position and direction of the design-dependent loads will change as the shape and topology of structure changes during optimization iteration. A potential function is introduced to locate the surface boundary. Design sensitivity analysis is derived. Examples from the proposed method are presented.

Commentary by Dr. Valentin Fuster
2005;():947-957. doi:10.1115/DETC2005-84454.

The field of new product development has a number of difficult challenges with which it must contend: shortened production time, greater market share demand, and geographically dispersed teams. Several software systems have been developed to ease these challenges. A representative cross-section of work in the fields of document management, project management, product lifecycle management, and conceptual and family design is examined, including past and current academic work and commercially available software. The scope and features of these projects are examined and compared on a software taxonomy. The potential application of these systems to product families is discussed throughout.

Commentary by Dr. Valentin Fuster
2005;():959-968. doi:10.1115/DETC2005-84817.

As the marketplace is changing so rapidly, it becomes a key issue for companies to best meet customers’ diverse demands by providing a variety of products in a cost-effective and timely manner. In the meantime, an increasing variety of capability and functionality of products has made it more difficult for companies that develop only one product at a time to maintain competitive production costs and reclaim market share. By designing a product family based on a robust product platform, overall production cost can be more competitive than competitors selling one product at a time while delivering highly differentiated products. In order to design cost-effective product families and product platforms, we are developing a production cost estimation framework in which relevant costs are collected, estimated, and analyzed. Since the framework is quite broad, this paper is dedicated to refining the estimation framework in a practical way by developing an activity-based costing (ABC) system in which activity costs are mapped to individual parts in the product family, which is called cost modularization, and the activity costs affected by product family design decisions are reconstructed to make the costs relevant to these decisions. A case study involving a family of power tools is used to demonstrate the proposed use of the ABC system.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():969-978. doi:10.1115/DETC2005-84818.

In this paper we propose a framework based on Formal Concept Analysis (FCA) that can be applied systematically to (1) visualize a product family (PF) and (2) improve commonality in the product family. Within this framework, the components of a PF are represented as a complete lattice structure using FCA. A Hasse diagram composed of the lattice structure graphically represents all the products, components, and the relationships between products and components in the PF. The lattice structure is then analyzed to identify prospective components to redesign to improve commonality. We propose two approaches as part of this PF redesign methodology: (1) Component-Based approach, and (2) Product-Based approach. In the Component-Based approach, emphasis is given to a single component that could be shared among the products in a PF to increase commonality. In the Product-Based approach, multiple products from a PF are selected, and commonality is improved among the selected products. Various commonality indices are used to assess the degree of commonality within a PF during its redesign. In this paper, we apply the framework to represent and redesign a family of one-time-use cameras. Besides increasing the understanding of the interaction between components in a PF, the framework explicitly captures the redesign process for improving commonality using FCA.
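
The core FCA construction, deriving formal concepts from a binary product-component table, fits in a short script. The brute-force closure search below is fine for the small contexts typical of a product family but is not how a production FCA tool (or the lattice visualization in the paper) would be built; the products and components are invented for illustration.

```python
from itertools import combinations

# Binary context: which components each product uses (hypothetical data)
context = {
    "camera_A": {"lens", "shutter", "flash"},
    "camera_B": {"lens", "shutter"},
    "camera_C": {"lens", "flash", "waterproof_case"},
}
components = sorted(set().union(*context.values()))

def extent(intent_set):
    """Products that contain every component in 'intent_set'."""
    return {p for p, comps in context.items() if intent_set <= comps}

def intent(products):
    """Components shared by every product in 'products'."""
    if not products:
        return set(components)
    return set.intersection(*(context[p] for p in products))

# A formal concept is a pair (E, I) with intent(E) == I and extent(I) == E
concepts = set()
for r in range(len(components) + 1):
    for combo in combinations(components, r):
        I = intent(extent(set(combo)))          # closure of the attribute set
        E = extent(I)
        concepts.add((frozenset(E), frozenset(I)))

for E, I in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(E), "share", sorted(I))
```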

Commentary by Dr. Valentin Fuster
2005;():979-988. doi:10.1115/DETC2005-84888.

This paper presents our continued research efforts towards developing a decomposition-based solution approach for rapid computational redesign to support agile manufacturing of evolutionary products. By analogy to the practices used for physical machines, the proposed approach involves two general steps: diagnosis and repair. This paper focuses on the diagnosis step, for which a two-phase decomposition method is developed. The first phase, called design dependency analysis, systematizes and reorganizes the intrinsic coupling structure of the existing design model by analyzing and reordering the design dependency matrix (DDM) used to represent the functional dependence and couplings inherent in the design model. The second phase, called redesign partitioning analysis, uses this result to generate alternative redesign pattern solutions through a three-stage procedure. Each pattern solution delimits the portions of the design model that need to be re-computed. An example problem concerning the redesign of an automobile powertrain is used for method illustration. Our seed paper has presented a method for selecting the optimal redesign pattern solution from the alternatives generated through redesign partitioning analysis, and a sequel paper will discuss how to generate a corresponding re-computation strategy and redesign plan (redesign shortcut roadmap).

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():989-998. doi:10.1115/DETC2005-84890.

We have developed a decomposition-based rapid redesign methodology for large, complex computational redesign problems. While the overall methodology consists of two general steps, diagnosis and repair, this paper focuses on the repair step, in which decomposition patterns are utilized for redesign planning. Resulting from design diagnosis, a typical decomposition pattern solution to a given redesign problem indicates the portions of the design model that must be re-computed as well as the interactions within the model responsible for design change propagation. Building on this, the paper presents an approach that, starting from an input pattern solution, generates a redesign roadmap providing a shortcut through the redesign solution process while scheduling re-computation tasks. To do so, a complete collection of re-computation strategies able to handle all possible decomposition patterns for any given redesign problem is introduced, and a two-stage redesign planning approach, from re-computation strategy selection to redesign roadmap generation, is proposed. An example problem concerning the redesign of a relief valve is used for illustration and validation.

Commentary by Dr. Valentin Fuster
2005;():999-1008. doi:10.1115/DETC2005-84905.

Many companies are using product families and platform-based product development to reduce costs and time-to-market while increasing product variety and customization. Multi-objective optimization is increasingly becoming a powerful tool to support product platform and product family design. In this paper, a genetic algorithm-based optimization method for product family design is suggested, and its application is demonstrated using a family of universal electric motors. By using an appropriate representation for the design variables and adopting a suitable formulation for the genetic algorithm, a one-stage approach for product family design can be realized that requires no a priori platform decision-making, eliminating the need for higher-level problem-specific domain knowledge. Optimizing product platforms using multi-objective algorithms gives the designer a Pareto solution set, which can be used to make better decisions based on the trade-offs present across different objectives. Two Non-Dominated Sorting Genetic Algorithms, namely, NSGA-II and ε-NSGA-II, are described, and their performance is compared. Implementation challenges associated with the use of these algorithms are also discussed. Comparison of the results with existing benchmark designs suggests that the proposed multi-objective genetic algorithms perform better than conventional single-objective optimization techniques, while providing designers with more information to support decision making during product family design.
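
For readers unfamiliar with NSGA-II, the sketch below implements its central ingredient, fast non-dominated sorting, on a handful of hypothetical two-objective designs (both objectives minimized); the full algorithm adds crowding distance, selection, crossover, and mutation.

    # Minimal sketch of the fast non-dominated sorting step at the core of
    # NSGA-II: candidate designs are ranked into Pareto fronts by counting how
    # many designs dominate each one. Objective values are hypothetical
    # (e.g., [mass, efficiency loss] for motor variants), both to be minimized.

    def dominates(a, b):
        """a dominates b if it is no worse in all objectives and better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated_sort(objs):
        n = len(objs)
        dominated_by = [set() for _ in range(n)]   # solutions that i dominates
        dom_count = [0] * n                        # how many solutions dominate i
        for i in range(n):
            for j in range(n):
                if i != j and dominates(objs[i], objs[j]):
                    dominated_by[i].add(j)
                elif i != j and dominates(objs[j], objs[i]):
                    dom_count[i] += 1
        fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
        while current:
            fronts.append(current)
            nxt = []
            for i in current:
                for j in dominated_by[i]:
                    dom_count[j] -= 1
                    if dom_count[j] == 0:
                        nxt.append(j)
            current = nxt
        return fronts

    if __name__ == "__main__":
        designs = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 4.0), (5.0, 5.0)]
        print(non_dominated_sort(designs))   # front 0 first, then front 1, ...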

Commentary by Dr. Valentin Fuster
2005;():1009-1018. doi:10.1115/DETC2005-84927.

Many of today’s manufacturing companies are using platform-based product development to realize families of products with sufficient variety to meet customers’ demands while keeping costs relatively low. The challenge when designing or redesigning a product family is in resolving the tradeoff between product commonality and distinctiveness. Several methodologies have been proposed to redesign existing product families; however, a problem with most of these methods is that they require a considerable amount of information that is not often readily available, and hence their use has been limited. In this research, we propose a methodology to help designers during product family redesign. This methodology is based on the use of a genetic algorithm and commonality indices - metrics to assess the level of commonality within a product family. Unlike most other research in which the redesign of a product family is the result of many human computations, the proposed methodology reduces human intervention and improves accuracy, repeatability, and robustness of the results. Moreover, it is based on data that is relatively easy to acquire. As an example, a family of computer mice is analyzed using the Product Line Commonality Index. Recommendations are given at the product family level (assessment of the overall design of the product family) and at the component level (which components to redesign and how to redesign them). The result is a systematic approach to product family redesign.
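
A commonality index reduces a bill-of-materials comparison to a single number the genetic algorithm can optimize. The Product Line Commonality Index used in the paper has its own normalization, so the sketch below computes a simpler stand-in, Collier's Degree of Commonality Index, on a hypothetical mouse family.

    # The paper drives its GA with the Product Line Commonality Index (PCI).
    # As a simpler stand-in, this sketch computes Collier's Degree of
    # Commonality Index (DCI): the average number of products in which each
    # distinct component appears. Higher values mean more sharing. The mouse
    # family data below is hypothetical.

    def degree_of_commonality(family):
        """family: dict product -> list of component names used in it."""
        parents = {}                       # component -> number of products using it
        for comps in family.values():
            for c in set(comps):
                parents[c] = parents.get(c, 0) + 1
        return sum(parents.values()) / len(parents)

    if __name__ == "__main__":
        mice = {
            "mouse_basic":  ["shell_a", "wheel", "sensor_std", "cable"],
            "mouse_office": ["shell_a", "wheel", "sensor_std", "cable"],
            "mouse_gaming": ["shell_b", "wheel", "sensor_hi", "cable"],
        }
        print(f"DCI = {degree_of_commonality(mice):.2f}")   # 1.0 would mean no sharing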

Commentary by Dr. Valentin Fuster
2005;():1019-1028. doi:10.1115/DETC2005-85016.

Proper utilization of available assembly resources can reduce the development time and cost for platforms and new product family members. This paper presents a method to explicitly consider existing assembly plant configuration and resources during selection of the assembly process for new product family members. In order to perform the trade-off studies, constraints on the assembly sequence are first identified and used to generate the feasible assembly sequence space, which is combinatorial in nature and is enumerated using recursive functions. The assembly sequence design space, the representation of constraint effects needed to explicitly delimit its feasible regions, and the efficient enumeration of designs within this space are investigated. A method that stops recursive growth based on constraints and minimization of assembly plant modification cost is used for efficient search of the feasible assembly sequence space. The assembly plant modification cost is estimated by dividing plant modification tasks into smaller activities and determining the cost associated with each activity. Application of the Assembly Resource Utilization Design Module (ARUDM) to determine assembly sequences while increasing utilization of existing assembly plants is demonstrated using a coffeemaker family.
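
The recursive enumeration of the feasible assembly sequence space can be pictured as in the sketch below: sequences are grown one part at a time and any branch that would violate a precedence constraint is cut immediately. The parts and constraints are hypothetical, and the paper additionally prunes on assembly plant modification cost.

    # Sketch of the recursive enumeration idea: feasible assembly sequences are
    # grown part by part, and a branch is cut as soon as a precedence constraint
    # (part A must be assembled before part B) would be violated. The coffeemaker
    # parts and constraints below are hypothetical.

    def assembly_sequences(parts, precedences, sequence=()):
        """precedences: set of (before, after) pairs."""
        if len(sequence) == len(parts):
            yield sequence
            return
        placed = set(sequence)
        for p in parts:
            if p in placed:
                continue
            # p may be placed next only if all its required predecessors are placed
            if all(before in placed for before, after in precedences if after == p):
                yield from assembly_sequences(parts, precedences, sequence + (p,))

    if __name__ == "__main__":
        parts = ["base", "heater", "tank", "lid"]
        precedences = {("base", "heater"), ("base", "tank"), ("tank", "lid")}
        for seq in assembly_sequences(parts, precedences):
            print(" -> ".join(seq))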

Topics: Manufacturing , Design
Commentary by Dr. Valentin Fuster
2005;():1029-1051. doi:10.1115/DETC2005-85164.

The Product Platform Constructal Theory Method (PPCTM) provides designers with an approach for synthesizing multiple modes of managing customization in the development of product platforms. An assumption underlying the PPCTM is that the extent of the market space is known and fixed. In this paper, we introduce PPCTM-RCM (Robust to Changes in Market), which facilitates designing product platforms when the extent of the market space is expected to change. The PPCTM-RCM is illustrated via an example problem, namely, the design of a product platform for a line of customizable pressure vessels. Our focus in this paper is on highlighting features of the method rather than results per se.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():1053-1068. doi:10.1115/DETC2005-85313.

Repository-based applications for portfolio design offer the potential for leveraging archived design data with computational searches. Toward the development of such search tools, we present a representation for product portfolios that is an extension of an existing Group Technology (GT) coding scheme. Relevance to portfolio design is treated with a case study example of a hand-held grinder design. Results of this work provide a numerical coding representation that captures function, form, material, and manufacturing data for systems. This extends the current line of GT work by combining these four types of design data and clarifying the use of the functional basis in a GT code. The results serve as a useful starting point for the development of portfolio design algorithms, such as genetic algorithms, that account for this combination of design information.

Commentary by Dr. Valentin Fuster
2005;():1069-1078. doi:10.1115/DETC2005-85336.

Product platform formation has long been considered an effective method to meet the challenges set forth by mass customization. To cater to changes in customer-need-driven functional requirements and technological advancements, product platforms have to be robust for a given planning horizon from the manufacturer’s point of view. To date, most product platform research has been directed towards developing approaches that maximize the use of common physical structures (such as sub-assemblies and components) among product variants. We argue that there is a need to think about platforms at a higher level of abstraction than the physical structure level because, after all, physical structures are the end result of a mapping process that starts with customer needs and cascades to the functional requirements and the behaviors (i.e., working principles) used to realize the functions. The Function-Behavior-Structure framework discussed by Gero and Kannengiesser (2003) formalizes such a mapping. In this paper, we present a methodology called Function-Behavior Ant Colony Optimization (FB-ACO) to determine a platform at the more abstract function-behavior (FB) level. The proposed approach can be used to support critical decisions related to planning the introduction and retirement of a product or the use of a behavior, the configuration of the function-behavior platform, and the number of such platforms to be considered at a particular time. The FB platform can then be used to develop the detailed design for the family of products under consideration. We demonstrate our proposed approach using the example of a computer mouse product family.

Commentary by Dr. Valentin Fuster
2005;():1079-1089. doi:10.1115/DETC2005-85443.

A new product configuration design method based on an extensible product family is presented in this paper. The extensible product family is a multi-layered model with extensible function, extensible principle, and extensible structure. Treating the extensible element as a basic unit, the model associates extensible parts with reusable factors in the range from 0 to 1. The configuration method has been implemented in software. Complicated rule editing and modification are handled by Ch, an embeddable C/C++ interpreter. Designers can dynamically establish and edit the configuration rules, including formulas. Based on the client requirements and nearest-neighbor matching, the configuration results can be obtained automatically. Furthermore, multi-dimensional information about parameters and reusable factors can be displayed and analyzed graphically. If the client requirements or configuration rules change, the system can be quickly re-configured to obtain new results based on the updated configuration. The system has been successfully deployed and used to design complicated products with a large number of configurations and different specifications, such as elevators, machine tools, and smut-collectors.

Topics: Design
Commentary by Dr. Valentin Fuster
2005;():1091-1102. doi:10.1115/DETC2005-85559.

In today’s changing manufacturing environment, designing product families based on product platforms has been well accepted as an effective means of fulfilling product customization. Current production practice and academic research on platform-based product development mostly focus on the design domain, whereas limited attention is paid to how production can take advantage of product families to realize economies of scale through repetition. This paper puts forward the concept of process platforms, based on which an efficient and cost-saving production configuration for new members of a product family can be achieved. A process platform comprises three aspects: generic representation, generic structures, and generic planning. The issues and rationale of production configuration based on a process platform are presented. A multilevel system of nested colored object-oriented Petri Nets with changeable structures is proposed to model the configuration of production processes. To construct a process platform from existing process data, a data mining approach based on text mining and tree matching is introduced to identify the generic process structure of a process family. An industrial example of high-variety production of vibration motors for mobile phones is also reported.

Commentary by Dr. Valentin Fuster
2005;():1103-1109. doi:10.1115/DETC2005-84179.

One important source of variance in the performance and success of products designed for use by people is the people themselves. In many cases, the acceptability of the design is affected more by the variance in the human users than by the variance attributable to the hardware from which the product is constructed. Consequently, optimization of products used by people may benefit from consideration of human variance through robust design methodologies. We propose that design under uncertainty methodologies can be utilized to generate designs that are robust to variance among users, including differences in age, physical size, strength, and cognitive capability. Including human variance as an inherent part of the product optimization process will improve the overall performance of the product (be it comfort, maintainability, cognitive performance, or other metrics of interest) and could lead to products that are more accessible to broader populations, less expensive, and safer. A case study involving the layout of the interior of a heavy truck cab is presented, focusing on simultaneous placement of the seat and steering wheel adjustment ranges. Tradeoffs between adjustability/cost, driver accommodation, and safety are explored under this paradigm.
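
A minimal sketch of treating human variance as a noise factor: sample a driver population, predict preferred seat positions with an assumed regression model, and score a candidate seat-track range by the fraction of drivers it accommodates. The anthropometric numbers and preference model below are placeholders, not the study's data.

    # Monte Carlo sketch of the "human variance" idea: sample a driver
    # population, predict each driver's preferred seat position, and compute the
    # fraction accommodated by a candidate seat-track adjustment range. The
    # regression coefficients and variances are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def accommodation(track_front, track_rear, n=100_000):
        stature = rng.normal(1715, 95, n)                 # mm, mixed population (assumed)
        # assumed linear preference model plus individual scatter
        preferred = -500 + 0.55 * stature + rng.normal(0, 30, n)
        accommodated = (preferred >= track_front) & (preferred <= track_rear)
        return accommodated.mean()

    if __name__ == "__main__":
        for rear in (480, 500, 520):                      # trade adjustability/cost vs. accommodation
            print(rear, f"{accommodation(380, rear):.3f}")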

Topics: Optimization , Trucks
Commentary by Dr. Valentin Fuster
2005;():1111-1121. doi:10.1115/DETC2005-84489.

Optimal design problems with probabilistic constraints, often referred to as Reliability-Based Design Optimization (RBDO) problems, have been the subject of extensive recent studies. Solution methods to date have focused more on improving efficiency than on accuracy and the global convergence behavior of the solution. A new strategy utilizing an adaptive sequential linear programming (SLP) algorithm is proposed as a promising approach to balance accuracy, efficiency, and convergence. The strategy transforms the nonlinear probabilistic constraints into equivalent deterministic ones using both first-order and second-order approximations, and applies a filter-based SLP algorithm to reach the optimum. Simple numerical examples show promise for increased accuracy without sacrificing efficiency.
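
The core transformation can be illustrated with the standard first-order step: linearize the limit state about the input means and require the mean margin to exceed the target reliability index times the approximate standard deviation. The sketch below shows only this generic step, under an independent-normal assumption and with a hypothetical limit state; the paper embeds first- and second-order versions of such approximations in a filter-based SLP loop.

    # Sketch of the generic first-order step used when converting a probabilistic
    # constraint P[g(X) >= 0] >= Phi(beta_t) into a deterministic one: linearize
    # g about the means and require the mean margin to exceed beta_t standard
    # deviations of g. Illustration only; not the paper's adaptive SLP algorithm.

    import numpy as np

    def first_order_deterministic_margin(g, mu, sigma, beta_t, h=1e-6):
        """Return g(mu) - beta_t * sigma_g; the constraint is satisfied if >= 0."""
        mu = np.asarray(mu, dtype=float)
        grad = np.array([(g(mu + h * e) - g(mu - h * e)) / (2 * h)
                         for e in np.eye(len(mu))])
        sigma_g = np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))  # independent normals assumed
        return g(mu) - beta_t * sigma_g

    if __name__ == "__main__":
        # hypothetical limit state: stress margin as a function of two random inputs
        g = lambda x: 600.0 - (x[0] * x[1] ** 2) / 2000.0
        print(first_order_deterministic_margin(g, mu=[1000.0, 30.0],
                                                sigma=[100.0, 2.0], beta_t=3.0))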

Commentary by Dr. Valentin Fuster
2005;():1123-1132. doi:10.1115/DETC2005-84495.

This work proposes the novel concept of the failure surface frontier (FSF), a hyper-surface consisting of the set of non-dominated failure points on the limit states of a given failure region. The FSF better represents the limit state functions for reliability assessment than conventional linear or quadratic approximations at the most probable point (MPP). Assumptions, definitions, and benefits of the FSF are first discussed in detail. Then, a discriminative sampling-based algorithm is proposed to identify the FSF, from which reliability is assessed. Test results on well-known problems show that reliability can be accurately estimated with high efficiency. The algorithm is also effective for problems with multiple failure regions, multiple most probable points (MPPs), or failure regions of extremely small probability.

Topics: Reliability , Failure
Commentary by Dr. Valentin Fuster
2005;():1133-1142. doi:10.1115/DETC2005-84514.

Engineering design problems frequently involve a mix of both continuous and discrete uncertainties. However, most methods in the literature deal with either continuous or discrete uncertainties, but not both. In particular, no method has yet addressed uncertainty for categorically discrete variables or parameters. This article develops an efficient optimization method for problems involving mixed continuous-discrete uncertainties. The method reduces the number of function evaluations performed by systematically filtering the discrete factorials used for estimating reliability based on their importance. This importance is assessed using the spatial distance from the feasible boundary and the probability of the discrete components. The method is demonstrated in examples and is shown to be very efficient with only small errors.

Commentary by Dr. Valentin Fuster
2005;():1143-1152. doi:10.1115/DETC2005-84523.

Uncertainty analysis, which assesses the impact of the uncertainty of input variables on responses, is an indispensable component in engineering design under uncertainty, such as reliability-based design and robust design. However, the computational burden of uncertainty analysis is often unaffordable in engineering problems. In this paper, a new uncertainty analysis method is proposed with the purpose of accurately and efficiently estimating the cumulative distribution function (CDF), probability density function (PDF), and statistical moments of a response given the distributions of input variables. The bivariate dimension-reduction method and numerical integration are used to calculate the moments of the response; then Saddlepoint Approximations are employed to estimate the CDF and PDF of the response. The proposed method requires neither the derivatives of the response nor the search for the Most Probable Point (MPP), which is needed in the commonly used First- or Second-Order Reliability Methods (FORM or SORM). The efficiency and accuracy of the proposed method are illustrated with three example problems. The method is more accurate and efficient for estimating the full range of the distribution of a response than FORM and SORM, and it provides results as accurate as Monte Carlo simulation with a significantly reduced computational effort.
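
The moment-estimation idea can be seen in the simpler univariate dimension-reduction form sketched below (the paper uses the bivariate version and then applies Saddlepoint Approximations to recover the CDF/PDF): the response is approximated by a sum of one-dimensional functions whose moments are integrated with Gauss-Hermite quadrature. Inputs are assumed independent normal and the response function is hypothetical.

    # Sketch of univariate dimension reduction for the first two moments of a
    # response g(X); for independent normal inputs the variances of the
    # univariate terms simply add. Gauss-Hermite quadrature does the 1-D integrals.

    import numpy as np

    def udr_mean_variance(g, mu, sigma, n_pts=5):
        """Mean/variance of g(X), X ~ independent normals N(mu, sigma^2)."""
        t, w = np.polynomial.hermite.hermgauss(n_pts)            # physicists' Hermite rule
        nodes, weights = np.sqrt(2.0) * t, w / np.sqrt(np.pi)    # converted to standard normal
        mu = np.asarray(mu, dtype=float)
        g0, mean, var = g(mu), 0.0, 0.0
        for i in range(len(mu)):
            x = np.tile(mu, (n_pts, 1))
            x[:, i] = mu[i] + sigma[i] * nodes                   # vary one input at a time
            vals = np.array([g(row) for row in x])
            m1 = weights @ vals
            mean += m1
            var += weights @ (vals - m1) ** 2
        mean -= (len(mu) - 1) * g0                               # dimension-reduction correction
        return mean, var

    if __name__ == "__main__":
        g = lambda x: x[0] ** 2 + 3.0 * x[1] + np.sin(x[0])      # hypothetical response
        print(udr_mean_variance(g, mu=[1.0, 2.0], sigma=[0.1, 0.3]))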

Commentary by Dr. Valentin Fuster
2005;():1153-1161. doi:10.1115/DETC2005-84693.

Early in the engineering design cycle, it is difficult to quantify product reliability or compliance with performance targets due to insufficient data or information to model uncertainties. Probability theory, therefore, cannot be used. Design decisions are usually based on fuzzy information that is vague, imprecise, qualitative, linguistic, or incomplete. Recently, evidence theory has been proposed to handle uncertainty with limited information as an alternative to probability theory. In this paper, a computationally efficient design optimization method is proposed based on evidence theory, which can handle a mixture of epistemic and random uncertainties. It quickly identifies the vicinity of the optimal point and the active constraints by moving a hyper-ellipse in the original design space, using a reliability-based design optimization (RBDO) algorithm. Subsequently, a derivative-free optimizer calculates the evidence-based optimum, starting from the nearby RBDO optimum, considering only the identified active constraints. The computational cost is kept low by first moving to the vicinity of the optimum quickly and subsequently using local surrogate models of the active constraints only. Two examples demonstrate the proposed evidence-based design optimization method.
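
For readers new to evidence theory, the sketch below computes the two bounding measures it uses in place of a single probability: the Belief and Plausibility of a safe-region event, given interval focal elements with basic probability assignments. The intervals, masses, and limit state are hypothetical, and the vertex check assumes the limit state is monotonic over each focal cell.

    # Evidence-theory sketch: uncertain inputs are described by focal intervals
    # with basic probability assignments (BPAs); a constraint's Belief and
    # Plausibility are the BPA mass of focal cells lying entirely / partly in
    # the safe region. All data below are hypothetical.

    from itertools import product

    # focal elements per variable: (interval, BPA); BPAs of each variable sum to 1
    x1_focal = [((0.8, 1.0), 0.3), ((1.0, 1.3), 0.7)]
    x2_focal = [((1.5, 2.0), 0.5), ((2.0, 2.6), 0.5)]

    def safe(x1, x2):                      # hypothetical limit state: g >= 0 is safe
        return 3.0 - x1 * x2 >= 0.0

    def belief_plausibility():
        bel = pl = 0.0
        for (i1, m1), (i2, m2) in product(x1_focal, x2_focal):
            corners = [safe(a, b) for a in i1 for b in i2]   # vertex check (monotonic g assumed)
            mass = m1 * m2
            if all(corners):
                bel += mass                # cell entirely safe
            if any(corners):
                pl += mass                 # cell at least partly safe
        return bel, pl

    if __name__ == "__main__":
        print(belief_plausibility())       # (Belief, Plausibility) of the safe event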

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2005;():1163-1172. doi:10.1115/DETC2005-84891.

The Sequential Optimization and Reliability Assessment (SORA) method is a single-loop method containing a sequence of cycles of decoupled deterministic optimization and reliability assessment for improving the efficiency of probabilistic optimization. However, the original SORA method, like some other existing single-loop methods, is not efficient for solving problems with changing variance. In this paper, to enhance the SORA method, three formulations are proposed that take the effect of changing variance into account. These formulations are distinguished by their different strategies for Inverse Most Probable Point (IMPP) approximation. Mathematical examples and a pressure vessel design problem are used to test and compare the effectiveness of the proposed formulations. The “Direct Linear Estimation Formulation” is shown to be the most effective and efficient approach for dealing with problems with changing variance. The insight gained can be extended to other optimization strategies that require MPP or IMPP estimations.
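
For context, the deterministic constraint solved in each SORA cycle typically takes the shifted form below (standard SORA as reported in the literature; the paper's three formulations differ in how the inverse MPP, and hence the shift vector, is approximated when the variance changes with the design):

    g\bigl(\mathbf{d},\ \boldsymbol{\mu}_X - \mathbf{s}^{(k)}\bigr) \ge 0,
    \qquad
    \mathbf{s}^{(k)} = \boldsymbol{\mu}_X^{(k-1)} - \mathbf{x}_{\mathrm{IMPP}}^{(k-1)},

where x_IMPP^(k-1) is the inverse most probable point found at the target reliability index in the previous cycle's reliability assessment.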

Commentary by Dr. Valentin Fuster
2005;():1173-1182. doi:10.1115/DETC2005-84928.

Analytical target cascading (ATC) is a methodology for hierarchical multilevel system design optimization. In previous work, the deterministic ATC formulation was extended to account for uncertainties using a probabilistic approach. Random quantities were represented by their expected values, which were required to match among subproblems to ensure design consistency. In this work, the probabilistic formulation is augmented to allow introduction and matching of additional probabilistic characteristics. Applying robust design principles, a particular probabilistic analytic target cascading (PATC) formulation is proposed by matching the first two moments of random quantities. Several implementation issues are addressed, including representation of probabilistic design targets, matching interrelated responses and linking variables under uncertainty, and coordination strategies for multilevel optimization. Analytical and simulation-based optimal design examples are used to illustrate the new PATC formulation. Design consistency is achieved by matching the first two moments of interrelated responses and linking variables. The effectiveness of the approach is demonstrated by comparing PATC results to those obtained using a probabilistic all-in-one (PAIO) formulation.

Commentary by Dr. Valentin Fuster
2005;():1183-1193. doi:10.1115/DETC2005-84984.

Current design decisions must be made while considering uncertainty in both models and inputs to the design. In most cases this uncertainty is ignored in the hope that it is not important to the decision making process. This paper presents a methodology for managing uncertainty during system-level conceptual design of complex multidisciplinary systems. The methodology is based upon quantifying the information available in computationally expensive subsystem models with more computationally efficient kriging models. By using kriging models, the computational expense of a Monte Carlo simulation to assess the impact of the sources of uncertainty on system-level performance parameters becomes tractable. The use of a kriging model as an approximation to an original computer model introduces model uncertainty, which is included as part of the methodology. The methodology is demonstrated as a decision making tool for the design of a satellite system.
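
A minimal sketch of the two ingredients: fit a kriging-style surrogate to a few runs of an "expensive" subsystem model, then run Monte Carlo on the cheap surrogate. The model, correlation parameter, and input distribution are hypothetical, and a real kriging implementation would also estimate its hyperparameters and the prediction (model) uncertainty that the paper propagates.

    # Fit a simple kriging surrogate to a handful of expensive runs, then use
    # Monte Carlo on the surrogate to propagate input uncertainty to the output.

    import numpy as np

    rng = np.random.default_rng(1)

    def expensive_model(x):                      # stand-in for a costly simulation
        return np.sin(3 * x) + 0.5 * x ** 2

    def fit_kriging(X, y, theta=5.0, nugget=1e-8):
        """Zero-mean (simple) kriging with a Gaussian correlation model."""
        R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + nugget * np.eye(len(X))
        weights = np.linalg.solve(R, y)
        def predict(x_new):
            r = np.exp(-theta * (np.atleast_1d(x_new)[:, None] - X[None, :]) ** 2)
            return r @ weights
        return predict

    if __name__ == "__main__":
        X_train = np.linspace(-1.0, 1.5, 12)               # a few expensive runs
        surrogate = fit_kriging(X_train, expensive_model(X_train))
        x_mc = rng.normal(0.4, 0.2, 50_000)                # cheap Monte Carlo on the surrogate
        y_mc = surrogate(x_mc)
        print(f"mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}")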

Commentary by Dr. Valentin Fuster
2005;():1195-1204. doi:10.1115/DETC2005-85019.

Mathematical optimization plays an important role in engineering design, leading to greatly improved performance. Deterministic optimization, however, may result in undesired choices because it neglects uncertainty. Reliability-based design optimization (RBDO) and robust design can improve optimization by considering uncertainty. This paper proposes an efficient design optimization method under uncertainty, which simultaneously considers reliability and robustness. Mean performance is traded off against robustness for a given reliability level of all performance targets. This results in a probabilistic multi-objective optimization problem. Variation is expressed in terms of a percentile difference, which is efficiently computed using the Advanced Mean Value (AMV) method. A preference aggregation method converts the multi-objective problem to a single-objective problem, which is then solved using an RBDO approach. Indifference points are used to select the best solution without calculating the entire Pareto frontier. Examples illustrate the concepts and demonstrate their applicability.

Commentary by Dr. Valentin Fuster
2005;():1205-1213. doi:10.1115/DETC2005-85042.

This paper presents an efficient genetic algorithm-based methodology for robust design that produces compressor fan blades tolerant of erosion. A novel geometry modeling method is employed to create eroded compressor fan blade sections. A multigrid Reynolds-Averaged Navier-Stokes (RANS) solver, HYDRA, with the Spalart-Allmaras turbulence model is used for CFD simulations to calculate the pressure losses. This is used in conjunction with Design of Experiments techniques to create Gaussian stochastic process surrogate models that predict the mean and variance of the performance. The Non-dominated Sorting Genetic Algorithm (NSGA-II) is employed for the multiobjective optimization to find the global Pareto-optimal front. This enables the designer to trade off between mean and variance of performance to propose robust designs.

Commentary by Dr. Valentin Fuster
2005;():1215-1224. doi:10.1115/DETC2005-85056.

NASA’s space exploration vehicles, like any other complex engineering system, are susceptible to failure and ultimately loss of mission. Researchers, therefore, have devised a variety of quantitative and qualitative techniques to mitigate the risk and uncertainty associated with such low-volume, high-cost missions. These techniques are often adopted and implemented by various NASA centers in the form of risk management tools, procedures, or guidelines. Most of these techniques, however, aim at the later stages of the design process or the operational phase of the mission and are therefore not applicable to the early stages of design. In particular, since early conceptual design is often conducted by concurrent engineering teams (and sometimes in distributed environments), most risk management methods cannot effectively capture different types of failure at both the subsystem and system levels. The current risk management practice in such environments is mostly ad hoc and based on asking “what can go wrong?” of the team members. This paper therefore presents a new approach to risk management during the initial phases of concurrent and distributed engineering design. The proposed approach, hereafter referred to as Risk and Uncertainty Based Integrated Concurrent Design (or RUBIC-Design), provides a rigorous basis for using functional failure data to guide the design process throughout the design cycle. The new approach is based on the functional model of space exploration systems (or any other mission-critical engineering system) and has the capability of adjusting in real time as the overall system evolves throughout the design process. The application of the proposed approach to both single-subsystem and multi-subsystem designs is demonstrated using a satellite reaction wheel example.

Commentary by Dr. Valentin Fuster
2005;():1225-1232. doi:10.1115/DETC2005-85064.

The effect of uncertainty reduction measures on the weight of laminates for cryogenic temperatures is investigated. The uncertainties in the problem are classified as error and variability. Probabilistic design is carried out to analyze the effect of reducing the uncertainty on the weight. For demonstration, variability reduction takes the form of quality control, while error is reduced by including the effect of chemical shrinkage in the analysis. It is found that the use of only error control leads to 12% weight reduction, the use of only quality control leads to 20% weight savings and the use of error and variability control measures together reduces the weight by 37%. In addition, the paper also investigates how to improve the accuracy and efficiency of probability of failure calculations (performed using Monte Carlo simulation technique). Approximating the cumulative distribution functions for strains is shown to lead to more accurate probability of failure estimations than the use of response surface approximations for strains.

Commentary by Dr. Valentin Fuster
2005;():1233-1242. doi:10.1115/DETC2005-85095.

We present a deterministic, non-gradient-based approach that uses robustness measures for robust optimization in multi-objective problems where uncontrollable parameter variations cause variation in the objective and constraint values. The approach is applicable to cases with discontinuous objective and constraint functions, and can be used for objective or feasibility robust optimization, or both together. In our approach, the parameter tolerance region maps into sensitivity regions in the objective and constraint spaces. The robustness measures are indices calculated, using an optimizer, from the sizes of the acceptable objective and constraint variation regions and from worst-case estimates of the sensitivity regions’ sizes, resulting in an outer-inner structure. Two examples provide comparisons of the new approach with a similar published approach that is applicable only to continuous functions. Both approaches work well with continuous functions. For discontinuous functions the new approach gives solutions near the nominal Pareto front; the earlier approach does not.

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2005;():1243-1251. doi:10.1115/DETC2005-85137.

Since decision-making at the conceptual design stage critically affects final design solutions at the detailed design stage, conceptual design support techniques are practically mandatory if the most efficient realization of optimal designs is desired. Topology optimization methods using discrete elements such as frame elements enable a useful understanding of the underlying mechanics principles of products; however, prior assumptions concerning utilization environments may change, since the detailed design process starts only after conceptual design decision-making is complete. In order to avoid product performance reductions due to such later-stage environmental changes, this paper discusses a reliability-based topology optimization method that can secure specified design goals even in the face of environmental factor uncertainty. This method can optimize mechanical structures with respect to two principal characteristics, namely structural stiffness and eigenfrequency. Several examples are provided to illustrate the utility of the method presented here for mechanical design engineers.

Commentary by Dr. Valentin Fuster
2005;():1253-1261. doi:10.1115/DETC2005-85253.

In the last two decades, significant attention has been paid to developing design optimization methodologies under various uncertainties: reliability-based design optimization (RBDO), possibility-based design optimization (PBDO), etc. As a result, a variety of uncertainty-based design optimization methods have been proposed, mainly categorized as parallel-loop, serial-loop, and single-loop methods. It has been reported that each method has its own strong and weak points. Thus, this paper attempts to better understand the various methods and proposes an integrated framework for uncertainty-based design optimization. In short, the integrated design framework invokes three phases at appropriate times (deterministic design optimization, the parallel-loop method, and the single-loop method) to maximize numerical efficiency without losing computational stability and accuracy in the process of uncertainty-based design optimization. While the parallel-loop method maintains numerical stability with minimal computation, deterministic design optimization and the single-loop method improve numerical efficiency at the beginning and end of uncertainty-based design optimization. Thus, the proposed method is called the adaptive-loop method. It will be shown that an integrated framework using the proposed method is applicable to various design optimization methodologies under aleatory or epistemic uncertainties, such as RBDO, PBDO, etc. Examples are used to demonstrate the effectiveness of the integrated framework using the adaptive-loop method in terms of numerical efficiency.

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2005;():1263-1271. doi:10.1115/DETC2005-85384.

This paper demonstrates the effect of various safety measures used to design aircraft structures for damage tolerance. In addition, it sheds light on the effectiveness of measures like certification tests in improving structural safety. Typically, aircraft are designed with a safety factor of 2 on service life in addition to other safety measures, such as conservative material properties. This paper demonstrates that small variations in material properties and loading, together with errors in modeling damage growth, can produce large scatter in fatigue life, which means that quality control measures like certification tests are not very effective in reducing failure probability. However, it is shown that the use of machined cracks in certification can substantially increase the effectiveness of certification testing.

Commentary by Dr. Valentin Fuster
2005;():1273-1281. doi:10.1115/DETC2005-85490.

Several methods have been proposed for estimating transmitted variance to enable robust parameter design using computer models. This paper presents an alternative technique based on Gaussian quadrature which requires only 2n+1 or 4n+1 samples (depending on the accuracy desired), where n is the number of randomly varying inputs. The quadrature-based technique is assessed using a hierarchical probability model. The 4n+1 quadrature-based technique can estimate transmitted standard deviation within 5% in over 95% of systems, which is much better than the accuracy of Hammersley Sequence Sampling, Latin Hypercube Sampling, and the Quadrature Factorial Method under similar resource constraints. If the most accurate existing method, Hammersley Sequence Sampling, is afforded ten times the number of samples, it provides approximately the same degree of accuracy as the quadrature-based method. Two case studies on robust design confirm the main conclusions and also suggest that the quadrature-based method becomes more accurate as robustness improvements are made.
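
The 2n+1-run scheme can be sketched as follows: each noise factor is perturbed, one at a time, to the two outer nodes of a three-point Gauss-Hermite rule (the third node is the shared nominal run), and the per-factor variance contributions are summed. The response function and noise levels below are hypothetical; the comparison against brute-force Monte Carlo simply illustrates the kind of accuracy check the paper performs.

    # Transmitted-variance estimate with 2n+1 model runs: one nominal run plus
    # two perturbed runs per noise factor, combined with 3-point Gauss-Hermite
    # weights for independent normal noise. Response and noise levels are hypothetical.

    import numpy as np

    def transmitted_std_2n1(f, mu, sigma):
        """Estimate std of f(X), X ~ independent normals, with 2n+1 model runs."""
        mu = np.asarray(mu, dtype=float)
        nodes = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])   # 3-pt Gauss-Hermite (std normal)
        weights = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
        f0, var = f(mu), 0.0
        for i in range(len(mu)):
            vals = []
            for node in nodes:
                if node == 0.0:
                    vals.append(f0)                             # reuse the shared nominal run
                else:
                    x = mu.copy()
                    x[i] += sigma[i] * node
                    vals.append(f(x))
            vals = np.array(vals)
            m1 = weights @ vals
            var += weights @ (vals - m1) ** 2
        return np.sqrt(var)

    if __name__ == "__main__":
        f = lambda x: x[0] * x[1] + 0.5 * x[2] ** 2             # hypothetical response
        mu, sigma = [2.0, 3.0, 1.0], [0.1, 0.2, 0.05]
        print(f"quadrature: {transmitted_std_2n1(f, mu, sigma):.4f}")
        X = np.random.default_rng(0).normal(mu, sigma, size=(200_000, 3))
        Y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2
        print(f"Monte Carlo: {Y.std():.4f}")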

Commentary by Dr. Valentin Fuster
2005;():1283-1293. doi:10.1115/DETC2005-84457.

The objective of the work presented in this paper is to enable production of large, complex components on rapid prototyping (RP) machines whose build volume is less than the size of the desired component. Such oversized parts can be produced as fabrications if a suitable subdivision can be generated. The methodology presented here creates a decomposition designed for both rapid prototyping (DFRP) and assembly (DFA). Any component can be subdivided by an array of orthogonal planes, but the shapes resulting from such a brute-force approach could have geometries that are difficult to produce accurately on many rapid prototyping systems. Typically, complications arise when features have insufficient strength to withstand finishing processes or have a cross-section prone to distortion (e.g., warping) during the build process (e.g., thin sections and cusps). Consequently, the method proposed for partitioning considers potential manufacturing problems and modifies the boundaries of individual components where necessary. As part of the decomposition process, the system also generates complementary male/female (i.e., matching protrusion/depression) assembly features at the interface between the component parts in order to improve the integrity of the final component.

Commentary by Dr. Valentin Fuster
2005;():1295-1300. doi:10.1115/DETC2005-84496.

Interactive design gives engineers the ability to modify the shape of a part and immediately see the changes in the part’s stress state. Virtual reality techniques are utilized to make the process more intuitive and collaborative. The results of a meshless stress analysis are superimposed on the original design. As the engineer modifies the design using subdivision volume free-form deformation, the stress state for the modified design is computed using a Taylor series approximation. When the designer requests a more accurate analysis, a stress re-analysis technique based on the pre-conditioned conjugate gradient method is used with parallel processing to quickly compute an accurate approximation of the stresses for the new design.

Commentary by Dr. Valentin Fuster
2005;():1301-1308. doi:10.1115/DETC2005-84813.

Prototyping plays an important role in industrial product design. In this paper, to achieve more intuitive and interactive prototyping, a selective clay milling center is introduced based on a synthesis of clay modeling, 3-dimensional (3-D) scanning, robot machining, and advanced geometric tools. In this system, the product shape design may start either from a physical hand-made clay model or from a virtual Computer-Aided Design (CAD) model. Via 3-D scanning techniques, manual modifications of the clay model can be captured in CAD form; in turn, geometric modifications of the CAD model can be fed back to the physical model by an efficient robot machining method named selective clay milling. This design cycle is repeated until a satisfactory prototype is obtained. For better control of the interaction between manual modeling and robot milling, a 3-D scanning-based calibration system has been developed so that the workpiece can be arbitrarily positioned during the design process. Several experiments demonstrate the effectiveness of the proposed system, and possible applications in industrial product design are also described.

Topics: Milling
Commentary by Dr. Valentin Fuster
2005;():1309-1319. doi:10.1115/DETC2005-85163.

Even though building functional metal parts directly from CAD files has been the focus of much research for many years, parts made by Layered Manufacturing (LM) have limited surface accuracy and long build times due to sacrificial support structures. The Multi-Axis Laser Aided Manufacturing Process (LAMP) system improves build time by adding two more rotation axes to the system in order to reduce the support structures. The strategy for decomposing the part model into sub-volumes, or cells, and the algorithm for sequencing the deposition of those cells are discussed. The question of how much material should be deposited along the determined directions is also addressed in this paper.

Topics: Manufacturing
Commentary by Dr. Valentin Fuster
2005;():1321-1331. doi:10.1115/DETC2005-85458.

This paper deals with the problem of automatic fairing of two-parameter B-Spline spherical and spatial motions. The concept of two-parameter freeform motions brings together the notion of the analytically determined two-parameter motions in Theoretical Kinematics and the concept of freeform surfaces in the field of Computer Aided Geometric Design (CAGD). A dual quaternion representation of spatial displacements is used and the problem of fairing two-parameter motions is studied as a surface fairing problem in the space of dual quaternions. By combining the latest results in surface fairing from the field of CAGD and computer aided synthesis of freeform rational motions, smoother (C3 continuous) two-parameter rational B-Spline motions are generated. The results presented in this paper are extensions of previous results on fine-tuning of one-parameter B-spline motions. The problem of motion smoothing has important applications in the Cartesian motion planning, camera motion synthesis, spatial navigation in visualization, and virtual reality systems. Several examples are presented to illustrate the effectiveness of the proposed method.
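
As background for the representation the paper works in, the sketch below encodes a rigid displacement as a dual quaternion (a rotation quaternion plus a translation-carrying dual part) and composes two displacements; a two-parameter B-spline motion is then a surface of such dual quaternions, which is what gets faired. The numerical values are arbitrary.

    # Dual-quaternion basics: a displacement is Q = q_r + eps * q_d, where q_r is
    # a unit rotation quaternion and q_d = 0.5 * t * q_r encodes the translation t.
    # Composing displacements is dual-quaternion multiplication.

    import numpy as np

    def qmul(a, b):
        """Hamilton product of quaternions (w, x, y, z)."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def dq_from_rt(axis, angle, t):
        """Dual quaternion (q_r, q_d) for rotation (axis, angle) followed by translation t."""
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        qr = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        qd = 0.5 * qmul(np.concatenate(([0.0], t)), qr)
        return qr, qd

    def dq_mul(A, B):
        """Compose two displacements: (Ar, Ad) * (Br, Bd)."""
        Ar, Ad = A
        Br, Bd = B
        return qmul(Ar, Br), qmul(Ar, Bd) + qmul(Ad, Br)

    if __name__ == "__main__":
        A = dq_from_rt([0, 0, 1], np.pi / 2, np.array([1.0, 0.0, 0.0]))
        B = dq_from_rt([1, 0, 0], np.pi / 4, np.array([0.0, 2.0, 0.0]))
        print(dq_mul(A, B))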

Topics: Motion , B-splines
Commentary by Dr. Valentin Fuster
