
IN THIS VOLUME


Design Automation

2003;():3-12. doi:10.1115/DETC2003/DAC-48704.

Monte Carlo simulation is commonly employed to evaluate the system probability of failure for problems with multiple failure modes in design under uncertainty. The probability calculated from Monte Carlo simulation has random errors due to the limited sample size, which create numerical noise in the dependence of the probability on the design variables. This in turn may lead the design to a spurious optimum. A probabilistic sufficiency factor (PSF) approach is proposed that combines the safety factor and the probability of failure. The PSF represents a factor of safety relative to a target probability of failure, and it can be calculated from the results of Monte Carlo simulation (MCS) with little extra computation. The paper presents the use of the PSF with a design response surface (DRS), which fits it as a function of the design variables, filtering out the noise in the MCS results. It is shown that the DRS for the PSF is more accurate than a DRS for the probability of failure or for the safety index. The PSF also provides more information than the probability of failure or the safety index for the optimization procedure in regions of low probability of failure, so the convergence of reliability-based optimization is accelerated. The PSF gives a measure of safety that designers can use more readily than the probability of failure or the safety index to estimate the weight increase required to reach a target safety level. To reduce the computational cost of reliability-based design optimization, a variable-fidelity technique and deterministic optimization were combined with the probabilistic sufficiency factor approach. Example problems are studied to demonstrate the methodology.
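The way a PSF is read off from MCS results can be sketched briefly: sort the sampled capacity-to-response ratios and take the ratio at the target-probability quantile. The snippet below is a minimal illustration with invented numbers; the distributions, sample size, and variable names are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def psf_from_mcs(capacity, response, p_target):
    """PSF estimate from MCS samples: the k-th smallest capacity/response
    ratio with k = N * p_target. Scaling the response by this factor would
    make the empirical probability of failure equal the target."""
    s = np.sort(capacity / response)
    k = max(int(len(s) * p_target), 1)
    return s[k - 1]

# Toy limit state: failure when capacity < response (illustrative numbers).
capacity = rng.normal(10.0, 1.0, 100_000)
response = rng.normal(7.0, 1.0, 100_000)
pf = np.mean(capacity < response)       # noisy MCS probability of failure
psf = psf_from_mcs(capacity, response, p_target=1e-2)
print(pf, psf)
```

A PSF below one says directly by how much the response must shrink (for instance through a weight increase) to reach the 1% target, which is the extra design information the abstract refers to.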

Commentary by Dr. Valentin Fuster
2003;():13-24. doi:10.1115/DETC2003/DAC-48705.

This paper introduces a novel cross-evaluation matrix (CEM) rating-based approach for developing Pareto efficient frontiers and finding discrete sets of globally non-inferior designs. This work, based on concepts from data envelopment analysis (DEA), can facilitate the enumeration of design candidates in a multi-criteria formulation. In addition, it is expected that the resulting design sets will provide the basis for establishing a value system and a subsequent preference-based rank ordering of expected outcomes in the single-criterion formulation. A unique feature of this cross-evaluation matrix approach is its ability to handle problems without requiring a priori tradeoff formulation or multiattribute model development. As such, its application does not require the assignment of a set of a priori weight constants, as in many well-established Pareto-optimal generating methods, nor does it need any a priori information about the global minimum or maximum of the attribute functions. Recognizing that the enumeration of multiple discrete solution alternatives is best achieved in a parallel computation environment, the implementation in this work is executed with the aid of a genetic algorithm strategy. The effectiveness of the integrated approach in yielding Pareto-optimal candidate design sets under different scenarios is studied in the context of illustrative examples, including two engineering case studies, and the results are discussed.
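For readers unfamiliar with the term, a discrete set of globally non-inferior (Pareto) designs is simply the non-dominated subset of the candidates. A generic filter can be sketched as follows; this illustrates the Pareto concept only, not the CEM/DEA rating itself, and the sample points are invented.

```python
def pareto_front(points):
    """Return the non-dominated subset of a finite point set,
    minimizing every criterion (duplicate points are kept)."""
    return [p for p in points
            if not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                       for q in points)]

# Four candidate designs scored on two criteria (smaller is better).
pts = [(1, 2), (2, 1), (2, 2), (3, 3)]
print(pareto_front(pts))  # → [(1, 2), (2, 1)]
```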

Topics: Design
2003;():25-34. doi:10.1115/DETC2003/DAC-48706.

In this work, we propose an integrated framework for probabilistic optimization that takes both design objective robustness and probabilistic constraints into account. The fundamental development of this work is the employment of an inverse reliability strategy that uses percentile performance for assessing both objective robustness and probabilistic constraints. The percentile formulation for objective robustness provides an accurate probabilistic measure of robustness and more reasonable compound noise combinations. For the probabilistic constraints, the proposed formulation is more efficient than a traditional probabilistic model since it only evaluates the constraint functions at the required reliability levels. The other major development of this work is a new search algorithm for the Most Probable Point of Inverse Reliability (MPPIR) that can be used to efficiently evaluate the performance robustness and percentile performance in the proposed formulation. Multiple techniques are employed in the MPPIR search, including the steepest descent direction and an arc search. The algorithm is applicable to general non-concave and non-convex system performance functions with random variables following any continuous distributions. The effectiveness of the MPPIR search algorithm is verified using example problems. Finally, an engineering example on the integrated robust and reliability design of a vehicle combustion engine piston illustrates the benefits of the proposed method.

2003;():35-44. doi:10.1115/DETC2003/DAC-48707.

In this paper, we investigate and extend a method of selecting among a set of concepts or alternatives using multiple, potentially conflicting criteria. This method, called the Hypothetical Equivalents and Inequivalents Method (HEIM), has been shown to avoid many pitfalls of existing methods for such problems, such as pair-wise comparison, ranking methods, rating methods, and weighted sum approaches. The existence of multiple optimal sets of attribute weights based on a set of stated preferences is investigated. Using simple visualization techniques, we show that there is a range of weights that satisfies the constraints of HEIM, and that depending on the attribute weights used, different alternatives can emerge as winners. The visualization techniques, coupled with an indifference point analysis, are then used to understand the robustness of the solution obtained and to determine the additional constraints necessary to identify a single robust optimal alternative.
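The existence of a whole region of feasible weights, and of multiple possible winners, can be reproduced in miniature: enumerate weights on a simplex grid, keep those consistent with the stated preferences, and record which alternative each feasible weight selects. All scores and preferences below are hypothetical, and the grid search is only a stand-in for the paper's visualization.

```python
# Alternatives scored on three attributes (higher is better); invented data.
alts = {
    "A": (0.9, 0.2, 0.5),
    "B": (0.4, 0.8, 0.6),
    "C": (0.6, 0.6, 0.3),
}

# Stated preferences between hypothetical alternatives, as (preferred, other)
# score tuples: the weighted sum of the first must exceed that of the second.
prefs = [((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),   # attribute 1 outweighs 2
         ((0.0, 1.0, 0.0), (0.0, 0.0, 1.0))]   # attribute 2 outweighs 3

def score(w, s):
    return sum(wi * si for wi, si in zip(w, s))

# Enumerate weights on a simplex grid; keep those consistent with prefs.
step = 0.05
feasible = []
for w1 in range(0, 21):
    for w2 in range(0, 21 - w1):
        w = (w1 * step, w2 * step, 1.0 - (w1 + w2) * step)
        if all(score(w, a) > score(w, b) for a, b in prefs):
            feasible.append(w)

# Different feasible weights can crown different winners.
winners = {max(alts, key=lambda k: score(w, alts[k])) for w in feasible}
print(len(feasible), winners)
```

Here both "A" and "B" win for some feasible weights, which is exactly the ambiguity the indifference-point analysis is meant to resolve.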

2003;():45-54. doi:10.1115/DETC2003/DAC-48708.

Reliability analysis methods are commonly used in engineering design in order to meet reliability and quality measures. An accurate and efficient computational method is presented for reliability analysis of engineering systems at both the component and system levels. The method can easily handle implicit, highly nonlinear limit-state functions, with correlated or uncorrelated random variables described by any probability distribution. It is based on a constructed response surface of an indicator function, which flags the “failure” and “safe” regions according to the performance function. A Monte Carlo simulation (MCS) calculates the probability of failure based on the response surface of the indicator function, instead of the computationally expensive limit-state function. The Cross-Validated Moving Least Squares (CVMLS) method is used to construct the response surface of the indicator function, based on an Optimum Symmetric Latin Hypercube (OSLH) sampling technique. A number of numerical examples highlight the superior accuracy and efficiency of the proposed method over commonly used reliability methods.

2003;():55-62. doi:10.1115/DETC2003/DAC-48709.

In Reliability-Based Design (RBD), uncertainty usually implies randomness: nondeterministic variables are assumed to follow certain probability distributions. However, in real engineering applications, some distributions may not be precisely known, or the uncertainty associated with some variables does not stem from randomness; such nondeterministic variables are only known within intervals. In this paper, a method of RBD with a mixture of random variables with distributions and uncertain variables with intervals is proposed. The reliability is considered under the worst combination of the interval variables. In comparison with traditional RBD, the computational demand of RBD with a mixture of random and interval variables increases dramatically. To alleviate the computational burden, a sequential single-loop procedure is developed to replace the computationally expensive double-loop procedure that arises when the worst-case scenario is applied directly. With the proposed method, RBD is conducted through a series of cycles of deterministic optimization and reliability analysis. The optimization model in each cycle is built from the Most Probable Point (MPP) and the worst-case combination obtained in the reliability analysis of the previous cycle. Since the optimization is decoupled from the reliability analysis, the computational effort of the MPP search is reduced to a minimum. The proposed method is demonstrated on a structural design example.
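The worst-case treatment of interval variables can be sketched with a brute-force double loop: an inner MCS over the random variables and an outer enumeration of the interval box. Enumerating only the corners assumes a limit state monotone in the interval variables; the paper's contribution is precisely to replace this expensive double loop with a sequential single-loop procedure. All numbers below are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Limit state g = capacity - load: failure when g < 0. The capacity is
# random (normal); the load depends on two interval parameters a in [2, 3]
# and b in [1, 2] with no distribution information (invented example).
def prob_failure(a, b, n=50_000):
    capacity = rng.normal(8.0, 1.0, n)
    load = a + 2.0 * b
    return np.mean(capacity - load < 0.0)

# Double-loop worst case: for a limit state monotone in the interval
# variables, the worst combination lies at a corner of the interval box,
# so the corners can be enumerated directly.
corners = itertools.product([2.0, 3.0], [1.0, 2.0])
worst_pf = max(prob_failure(a, b) for a, b in corners)
print(worst_pf)
```

The worst corner (a = 3, b = 2) dominates the reliability estimate; the sequential single-loop method avoids re-running the inner reliability analysis for every candidate design.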

Topics: Reliability, Design
2003;():63-72. doi:10.1115/DETC2003/DAC-48710.

The use of probabilistic optimization in structural design applications is hindered by the huge computational cost of evaluating probabilistic characteristics, since the computationally expensive finite element method (FEM) is often used to simulate design performance. In this paper, a Sequential Optimization and Reliability Assessment (SORA) method with analytical derivatives is applied to improve the efficiency of probabilistic structural optimization. With the SORA method, a single-loop strategy that decouples the optimization and the reliability assessment is used to significantly reduce the computational demand of probabilistic optimization. Analytical sensitivities of displacement and stress functionals derived from the finite element formulation are incorporated into the probability analysis without incurring excessive cost. The benefits of the proposed methods are demonstrated on two truss design problems by comparing the results with those of conventional approaches. Results show that the SORA method with analytical derivatives is the most efficient, with satisfactory accuracy.

2003;():73-83. doi:10.1115/DETC2003/DAC-48711.

Mechanical fatigue under external and inertial transient loads during the service life of mechanical systems often leads to structural failure due to accumulated damage. Structural durability analysis, which predicts the fatigue life of mechanical components subject to dynamic stresses and strains, is a compute-intensive multidisciplinary simulation process, since it requires the integration of several computer-aided engineering tools and a large amount of data communication and computation. Uncertainties in geometric dimensions due to manufacturing tolerances make the fatigue life of a mechanical component nondeterministic. Because uncertainty propagation to structural fatigue under transient dynamic loading is not only numerically complicated but also extremely expensive, it is a challenging task to develop a structural durability-based design optimization process and the reliability analysis needed to ascertain whether the optimal design is reliable. The objective of this paper is the development of an integrated CAD-based computer-aided engineering process to effectively carry out design optimization for structural durability, yielding a durable and cost-effectively manufacturable product. In addition, a reliability analysis is executed to assess the reliability of the deterministic optimal design.

2003;():85-95. doi:10.1115/DETC2003/DAC-48713.

There are two sorts of uncertainty inherent in engineering design, random uncertainty and epistemic uncertainty. Random, or stochastic, uncertainty deals with the randomness or predictability of an event. It is well understood, easily modelled using classical probability, and ideal for such uncertainties as variations in manufacturing processes or material properties. Epistemic uncertainty deals with our lack of knowledge, our lack of information, and our own and others’ subjectivity concerning design parameters. While there are many methods to incorporate random uncertainty in a design process, there are fewer that consider epistemic uncertainty. There are fewer still that attempt to incorporate both sorts of uncertainty, and those that do usually attempt to model both sorts using the same uncertainty model. Two methods, a range method and a fuzzy sets approach, are proposed to achieve designs that are robust to both epistemic uncertainty and random uncertainty. Both methods incorporate preference aggregation methods to achieve more appropriate trade-offs between performance and variability when considering both sorts of uncertainty. The proposed models for epistemic uncertainty are combined with existing models for stochastic uncertainty in a two-step process. An illustrative example incorporating subjectivity concerning design parameters is presented.

Topics: Design, Uncertainty
2003;():97-108. doi:10.1115/DETC2003/DAC-48714.

A robust optimization of an automobile valvetrain is presented in which the variation of engine performance due to component dimensional variations is minimized subject to constraints on mean engine performance. The dimensional variations of valvetrain components are statistically characterized based on measurements of the actual components. Monte Carlo simulation is applied to a neural network model, built from an integrated high-fidelity valvetrain-engine model, to obtain the mean and standard deviation of horsepower, torque, and fuel consumption. Assuming the component production cost is inversely proportional to the coefficient of variation of its dimensions, a multi-objective optimization problem minimizing the variation in engine performance and the total production cost of components is solved by a multi-objective genetic algorithm (MOGA). Comparisons using the newly developed Pareto front quality index (PFQI) indicate that MOGA generates Pareto fronts of substantially higher quality than SQP with varying weights on the objectives. The current design of the valvetrain is compared with two alternative designs on the obtained Pareto front, suggesting potential improvements.

2003;():109-119. doi:10.1115/DETC2003/DAC-48715.

Robust design is a methodology for improving the quality of a product or process by minimizing the effect of variations in the inputs without eliminating the causes of those variations. In robust design, the best design is obtained by solving a multicriteria optimization problem, trading off the nominal performance against the minimization of the variation of the performance measure. Because these methods often combine the two criteria with a weighted sum or another fixed aggregation strategy, which is known to miss Pareto points, they may fail to obtain a desired design. To overcome this inadequacy, a more comprehensive preference aggregation method is incorporated into robust design. Two examples are presented to illustrate the effectiveness of the proposed method.
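The weakness of fixed weighted-sum aggregation mentioned above is easy to demonstrate on a hypothetical three-design example: a design that sits in a non-convex part of the trade-off front is Pareto optimal yet is never selected for any weight.

```python
# Three candidate designs trading nominal performance f1 against
# performance variation f2 (both minimized; numbers are invented).
designs = {"A": (0.0, 1.0), "B": (0.55, 0.55), "C": (1.0, 0.0)}

# B is Pareto optimal: neither A nor C beats it in both criteria.
# Yet no weighted sum w*f1 + (1-w)*f2 ever selects it, because the
# front is non-convex at B.
winners = set()
for i in range(101):
    w = i / 100.0
    winners.add(min(designs, key=lambda k: w * designs[k][0]
                                           + (1 - w) * designs[k][1]))
print(winners)  # → {'A', 'C'}: B is never chosen
```

This is the practical motivation for replacing the weighted sum with a more comprehensive preference aggregation.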

Topics: Design
2003;():121-130. doi:10.1115/DETC2003/DAC-48716.

We present a method for estimating the parameter sensitivity of a design alternative for use in robust design optimization. The method is non-gradient based: it is applicable even when the objective function of an optimization problem is non-differentiable and/or discontinuous with respect to the parameters. Also, the method does not require a presumed probability distribution for parameters, and is still valid when parameter variations are large. The sensitivity estimate is developed based on the concept that associated with each design alternative there is a region in the parameter variation space whose properties can be used to predict that design’s sensitivity. Our method estimates such a region using a worst-case scenario analysis and uses that estimate in a bi-level robust optimization approach. We present a numerical and an engineering example to demonstrate the applications of our method.
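The worst-case flavor of such an estimate can be sketched as follows: evaluate the objective at the corners of the parameter-variation box and take the largest deviation from the nominal value. This is only a generic corner-enumeration sketch with an invented function and ranges, not the authors' region-based estimator, but it shows why no gradients or presumed distributions are needed.

```python
import itertools

def worst_case_sensitivity(f, x, deltas):
    """Non-gradient sensitivity estimate: largest deviation of f from its
    nominal value over the corners of the parameter-variation box.
    Needs no derivatives, so f may be non-differentiable or discontinuous."""
    nominal = f(x)
    worst = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=len(x)):
        xp = [xi + s * d for xi, s, d in zip(x, signs, deltas)]
        worst = max(worst, abs(f(xp) - nominal))
    return worst

# Illustrative objective with a jump discontinuity (gradients fail here).
f = lambda p: p[0] ** 2 + (5.0 if p[1] > 1.0 else 0.0)
s = worst_case_sensitivity(f, [2.0, 0.9], [0.5, 0.2])
print(s)  # → 7.25
```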

Topics: Design, Optimization
2003;():131-142. doi:10.1115/DETC2003/DAC-48717.

In an effort to improve customization for today’s highly competitive global marketplace, many companies are utilizing product families to increase variety, shorten lead-times, and reduce costs. The key to a successful product family is the product platform from which it is derived either by adding, removing, or substituting one or more modules to the platform or by scaling the platform in one or more dimensions to target specific market niches. This nascent field of engineering design research has matured rapidly in the past decade, and this paper provides an extensive review of the research activity that has occurred during that time to facilitate product platform design and optimization. Techniques for identifying platform leveraging strategies within a product family are reviewed along with optimization-based approaches to help automate the design of a product platform and its corresponding family of products. Examples from both industry and academia are presented throughout the paper to highlight the benefits of platform-based product development, and the paper concludes with a discussion of promising research directions to help bridge the gap between planning and managing families of products and designing and manufacturing them.

Topics: Design, Optimization
2003;():143-155. doi:10.1115/DETC2003/DAC-48718.

This paper discusses the optimal design of common components used across a class of products. While simultaneously designing multiple products has become an important concept in present-day manufacturing, the alliances involved in such activities extend beyond the traditional form: an integrator may design a set of components apart from particular products, or a supplier may commonalize components independently of integrators. That is, a methodology for simultaneously designing a set of components becomes necessary alongside methodologies for simultaneously designing a set of products. This paper formulates the design of common components as an optimization problem and investigates the conditions of optimal design through the tradeoff among the level of system-level performance, the number of different components, and related factors. A computational procedure is then configured for optimizing the commonalization of components apart from the design of a particular set of products, using multivariate analysis, an optimization code based on a mini-max operation, and a genetic algorithm for constrained nonlinear mathematical programming. Finally, the proposed optimization procedure is applied to a design problem of liftgate dampers for passenger cars, demonstrating the resulting levels of optimal design and the tradeoff structure.

2003;():157-164. doi:10.1115/DETC2003/DAC-48719.

Designing a family of products requires analysis and evaluation of the performance of the entire product family. In the past, products were mainly mass-produced, so the use of CAD/CAE was restricted to developing and analyzing individual products. Since the products offered using a platform approach include a variety of products built upon a common platform, CAD/CAE tools need to be extended to assist in the customization of products according to customer needs. In this paper we investigate the development of a Product Family FEA (PFFEA) module that can support FEA of user-customized product family members. Customer specifications for family members are gathered over the internet; users are allowed to scale and change configurations of products. These specifications are then used to automatically generate 3D solid models of the product and to perform FEA to determine the feasibility of the customer-specified product. The development of the PFFEA module is illustrated using a family of lawn trimmers and edgers. The PFFEA module uses Pro/E to generate the solid model and ANSYS as the base FEA software.

2003;():165-174. doi:10.1115/DETC2003/DAC-48720.

Product family design involves carefully balancing the commonality of the product platform with the distinctiveness of the individual products in the family. While a variety of optimization methods have been developed to help designers determine the best design variable settings for the product platform and the individual products within the family, production cost is an important criterion for choosing the best platform among candidate platform designs. An appropriate production cost model is therefore a prerequisite for estimating the production costs incurred by having common and variant components within a product family. In this paper, we propose a production cost model based on a production cost framework associated with the manufacturing activities. The production cost model can be easily integrated within optimization frameworks to support a Decision-Based Design approach for product family design. As an example, the production cost model is utilized to estimate the production costs of a family of cordless power screwdrivers.

2003;():175-185. doi:10.1115/DETC2003/DAC-48721.

In this paper, a methodology is presented to determine the optimum number of product platforms to maximize overall product family profit under simplifying assumptions. The methodology aims to aid manufacturing industries that are seeking ways to reduce product family manufacturing costs and development times through the implementation of platform strategies. It is based on a target market segment analysis, the market leader’s performance-versus-price position, and a two-level optimization approach for platform and variant designs. The proposed methodology is demonstrated for a hypothetical automotive vehicle family that attempts to serve seven different vehicle market segments. It is found that the use of three distinct platforms maximizes overall profit by pursuing primarily a horizontal leveraging strategy.

Topics: Optimization
2003;():187-195. doi:10.1115/DETC2003/DAC-48722.

Recent advances in rapid prototyping technology make it a useful tool in assessing the early designs of not only individual parts but also assemblies. These rapid assemblies should allow the designers to evaluate the desired functional requirements for the actual fabricated parts. However, the rapid prototyping errors, especially shrinkage, make it difficult to emulate such functional requirements in the prototype. This paper presents an algorithm for the optimal adjustment of the nominal dimensions of rapid prototyped parts to maximize the probability of adherence to the assembly functional requirements. The proposed modification of the nominal dimensions compensates for shrinkage. In addition, the algorithm preserves the general shape of the parts. Real coded genetic algorithms are used to maximize the probability of adhering to those requirements and a truncated Monte Carlo simulation is used to evaluate it. Several examples have been used to demonstrate the developed algorithm and procedures. Guidelines have been presented for the applicability of this adjustment method for various types of fits. The proposed method allows the designers to experience more realistically the intended fit and feel of actual manufactured parts assemblies.

2003;():197-204. doi:10.1115/DETC2003/DAC-48723.

Recent developments in Computer Aided Design (CAD) have drastically reduced overall design cycle time and cost. In this paper, wirePATH, a new method for rapid direct tooling, is presented. By using specialized interactive segmentation software and wire electrical discharge machining (wire EDM), wirePATH can reduce manufacturing time and cost for injection molds, casting patterns, and dies. Compared to conventional mold-making methods, wirePATH can reduce fabrication time by as much as 40 to 70%, and it can use a combination of wire EDM and other processes. The method provides a new means of producing greater variety in products by changing only portions of the tooling: segments allow part of a mold to be replaced to accommodate design changes and repair. wirePATH enables new applications of wire EDM to more complex shapes by bridging the gaps between CAD, wire EDM, and conventional manufacturing processes.

2003;():205-211. doi:10.1115/DETC2003/DAC-48724.

A layered manufacturing technique based on electrophotography is described, in which powder is picked up by a charged photoconducting surface and deposited layer by layer on a build platform. A test bed was designed and constructed to study the application of electrophotography to layered manufacturing. The test bed can precisely deposit powder in the desired shape on each layer, and the feasibility of printing powder layer by layer was demonstrated. The electric field required to transfer the powder onto the platform (or onto previously printed layers) was studied. It was found that corona charging the top layer of the part is necessary to continue printing powder as the part height increases.

2003;():213-225. doi:10.1115/DETC2003/DAC-48725.

Multi-piece molds, which consist of more than two mold pieces, are capable of producing very complex parts that cannot be produced by traditional molds. The tooling cost for multi-piece molds is also low, which makes them ideal candidates for pre-production prototyping and bridge tooling. However, designing multi-piece molds is a time-consuming task. This paper describes geometric algorithms for the automated design of multi-piece molds. A Multi-Piece Mold Design Algorithm (MPMDA) has been developed to automate several important mold-design steps: finding parting directions, locating parting lines, creating parting surfaces, and constructing mold pieces. MPMDA constructs mold pieces based on global accessibility analysis of the part and therefore guarantees the disassembly of the mold pieces. A software system has also been developed and successfully tested on several complex industrial parts.

Topics: Design
2003;():227-235. doi:10.1115/DETC2003/DAC-48726.

Even though a machining process has been integrated into the Multi-Axis Laser Aided Manufacturing Process (LAMP) system in order to obtain functional parts with good surface finish [1], the quality of parts produced by the LAMP system still depends strongly on the choice of deposition paths [2]. Raster motion paths are replaced by offset spiral-like paths, which are discussed in this paper. Most commercial CAD/CAM packages are feature-based, and their use requires the effort and expertise of the user: the shape has to be decomposed into manufacturing features before the software can generate the paths [3]. Path planning has long been studied, but problems remain with previous algorithms, which also usually rely on simplifying assumptions [6, 7, 27]. An algorithm for directly generating offset edges, which can be developed into deposition paths, is presented in this paper. The skeleton of a layer, or slice, of a 3D CAD model is first generated; based on that skeleton, the offset edges are incrementally constructed. This paper focuses on the characteristics of skeletons and offset edges as well as the construction algorithm for those edges. Simulations are used to verify the method.

2003;():237-245. doi:10.1115/DETC2003/DAC-48727.

Finding effective and interactive tools for extracting freeform shape information continues to be a challenging problem in reverse engineering. A freeform shape may be constructed by adding one shape, called a pattern, to another. In this paper, an approach for extracting the pattern by template fitting is proposed. By similarity analysis, a user-defined region of interest in the shape can be matched, or fitted, to a shape template. According to the different methods of constructing the shape, several kinds of R³-to-R³ functions are defined. With these functions, the original shape is mapped to the fitted shape template, so the template can be used as a “ruler” to measure the region of interest in the shape. The measuring results, e.g., the extracted pattern, can be generated through an inverse mapping and used in future designs. Several implementations were conducted based on ACIS® and OpenGL® to verify the proposed method, and it is described how the proposed technique can be applied in practical shape modeling applications.

Topics: Fittings, Shapes
2003;():247-255. doi:10.1115/DETC2003/DAC-48728.

This paper presents a new decomposition method for partitioning complex design problems based on an extended Hierarchical Cluster Analysis (HCA). After a complex design problem is represented using a function-parameter incidence matrix, the decomposition method transforms the originally unorganized matrix into a block-angular matrix, from which a coordination part and design blocks can be identified. The extended HCA plays a key role in this method, helping to align all non-zero (“1”) elements of the matrix along its main diagonal as compactly as possible. A post process, called Partition Point Analysis (PPA), is then applied to the matrix to form the coordination part and the related design blocks, subject to decomposition criteria such as block size and coordination size limits. A powertrain design example is employed to illustrate the newly developed decomposition method.

Topics: Design
2003;():257-268. doi:10.1115/DETC2003/DAC-48729.

Achieving the dimensional integrity for a complex structural assembly is a demanding task due to the manufacturing variations of parts and the tolerance relationship between them. While assigning tight tolerances to all parts would solve the problem, an economical solution is taking advantage of small motions that joints allow, such that critical dimensions are adjusted during assembly processes. This paper presents a systematic method that decomposes product geometry at an early stage of design, selects joint types, and generates subassembly partitioning to achieve the adjustment of the critical dimensions during assembly processes. A genetic algorithm (GA) generates candidate assemblies based on a joint library specific for an application domain. Each candidate assembly is evaluated by an internal optimization routine that computes the subassembly partitioning for optimal in-process adjustability, by solving an equivalent minimum cut problem on weighted graphs. A case study on a 3D automotive space frame with the accompanying joint library is presented.
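The inner subassembly-partitioning step reduces to a minimum cut on a weighted graph; for a handful of parts it can even be solved by exhaustive enumeration, as in the sketch below. The part names and edge weights are invented, and a real implementation would use a proper min-cut algorithm as the paper does.

```python
import itertools

# Parts as graph nodes; edge weights encode how strongly two parts should
# stay in the same subassembly (illustrative numbers only).
edges = {("a", "b"): 3.0, ("b", "c"): 1.0, ("c", "d"): 4.0, ("a", "c"): 1.0}
nodes = sorted({n for e in edges for n in e})

def cut_weight(group):
    """Total weight of edges crossing the partition (group vs. the rest)."""
    return sum(w for (u, v), w in edges.items() if (u in group) != (v in group))

# Exhaustive minimum cut over all proper subsets -- fine for tiny assemblies.
best = min((frozenset(c) for r in range(1, len(nodes))
            for c in itertools.combinations(nodes, r)),
           key=cut_weight)
print(sorted(best), cut_weight(best))
```

Here the cheapest split separates {a, b} from {c, d}, cutting only the two weight-1 edges, which is the kind of partition the GA's inner optimization routine searches for.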

Topics: Manufacturing
2003;():269-281. doi:10.1115/DETC2003/DAC-48730.

A method is presented for synthesizing multi-component structural assemblies with maximum structural performance and manufacturability. The problem is posed as a relaxation of decomposition-based assembly synthesis [1,2,3], where both topology and decomposition of a structure are regarded as variables over a ground structure with non-overlapping beams. A multi-objective genetic algorithm [4,5] with graph-based crossover [6,7,8], coupled with FEM analyses, is used to obtain Pareto optimal solutions to this problem, exhibiting trade-offs among structural stiffness, total weight, component manufacturability (size and simplicity), and the number of joints. Case studies with a cantilever and a simplified automotive floor frame are presented, and representative designs in the Pareto front are examined for the trade-offs among the multiple criteria.

2003;():283-289. doi:10.1115/DETC2003/DAC-48731.

The objective of this paper is to develop techniques to sub-group multiple responses of a system. The proposal is intended for explorative preprocessing steps prior to the execution of a more formal multi-objective optimization. The sub-grouping techniques are developed based on orthonormal expansions. Factor Analysis (FA) and Total Sensitivity Analysis (TSA) are suggested when the relationships among responses are linear and non-linear, respectively. Automotive road Noise Vibration and Harshness (NVH) data is used to illustrate the application of the proposed methodologies.

2003;():291-300. doi:10.1115/DETC2003/DAC-48732.

In the context of concurrent engineering, this paper presents an innovative approach to the collaborative optimisation process that couples a multi-objective genetic algorithm with an asynchronous communication tool. To illustrate this methodology, the collaboration of three European companies on the optimisation of a ship hull is described. Our study demonstrates that multi-objective optimisation carried out in a distributed manner can provide a powerful tool for concurrent product design.

2003;():301-306. doi:10.1115/DETC2003/DAC-48733.

This paper presents a methodology to perform structural topology design optimization for crashworthiness considering a prescribed and safe structural behavior through the dynamic equilibrium equation. This implementation, called here controlled crash behavior, or CCB, is very useful for design engineers in the automotive industry since it allows them to ‘prescribe’ a structural behavior of the vehicle at given locations of interest. The methodology is based on previous work from the author where the optimum topology is determined using a heuristic (optimality) criterion to attain a design with prescribed levels of plastic strains and stresses. The paper includes a simple beam example to demonstrate the CCB approach. Results are consistent with the formulation of the optimization problem.

2003;():307-318. doi:10.1115/DETC2003/DAC-48734.

Engineering design decisions have more value and lasting impact if they are made in the context of the enterprise that produces the designed product. Setting targets that the designer must meet is often done at a high level within the enterprise, with inadequate consideration of the engineering design embodiment and associated cost. For complex artifacts produced by compartmentalized hierarchical enterprises, the challenge of linking the target setting rationale with the product instantiation is particularly demanding. The previously developed analytical target cascading process addresses the problem of translating supersystem design targets into design targets for all systems in a multilevel hierarchically structured product, so that local targets are consistent with each other and allow top targets to be met as closely as possible. In this article the process of rigorously setting the supersystem targets in an enterprise context is explored as a model-based approach termed “analytical target setting.” The effectiveness of linking analytical target setting and cascading is demonstrated in an automotive truck vehicle example.

2003;():319-327. doi:10.1115/DETC2003/DAC-48735.

Recent improvements in vehicle propulsion systems, such as hybrid electric and fuel cells, demand new configuration solutions that may be totally different from conventional designs. Packaging of vehicle components is still a new area of research. This paper describes a configuration optimization method based on a multiple objective genetic algorithm. The method is applied to configuration optimization of a mid-size truck, in which two objectives are considered: ground clearance and dynamic behavior. A vehicle packaging model was developed using the commercial CAD software, ACIS, to analyze interference among vehicle components. An eight-degree-of-freedom model was used to analyze the dynamic behavior of a given configuration for the J-turn maneuver. Parallel computation technology was also incorporated to accelerate the optimization process. The applicability of this method is discussed and exemplified with the design of a mid-size truck for two propulsion systems: conventional diesel and hybrid diesel electric. A set of Pareto solutions is generated in which tradeoff decisions can be made to select a final design.

2003;():329-337. doi:10.1115/DETC2003/DAC-48736.

A new mathematical model for representing the geometric variations of lines is extended to include form and the accumulation (stackup) of tolerances in an assembly. The model is compatible with the ASME/ANSI/ISO standards for geometric tolerances. Central to the new model is the Tolerance-Map©, a hypothetical volume of points which corresponds to all possible locations and variations of a segment of a line (the axis) that can arise from tolerances on size, position, orientation, and form. Every Tolerance-Map is a convex set in a metric space. The new model makes stackup relations apparent in an assembly, and these can be used to allocate size and orientational tolerances; the same relations can also be used to identify sensitivities for these tolerances. All stackup relations can be met for 100% interchangeability or for a specified probability. Much of the detail in this paper would probably reside internal to software for designers rather than in its interface; its workings should be invisible to the user.

2003;():339-348. doi:10.1115/DETC2003/DAC-48737.

The paper aims at dimensioning a mechanism in order to make it robust, and at synthesizing its dimensional tolerances. The design of a mechanism is considered robust when its performance is as insensitive as possible to variations. First, a distinction is made between three sets used to formulate a robust design problem: (i) the set of Design Variables (DV), whose nominal values can be selected within a range of upper and lower bounds; they are controllable; (ii) the set of Design Parameters (DP), which cannot be adjusted by the designer; they are uncontrollable; (iii) the set of performance functions. DV are nevertheless subject to uncontrollable variations, even though their nominal values can be adjusted. Moreover, two methods are described to solve robust design problems. The first method is explicit and solves problems that aim at minimizing variations in performance. The second method, an optimization problem, aims at optimizing the performance and minimizing its variations, but only when the ranges of variations in DV and DP are known. In addition, we define and compare several robustness indices. From the explicit method, we develop a new tolerance synthesis method. Finally, three examples are included to illustrate these methods: a damper, a two-dof serial positioning manipulator, and a three-dof serial positioning manipulator.
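The sensitivity-based view of robustness above can be sketched with a first-order variation estimate: the spread of a performance function is bounded by the sum of its absolute sensitivities times the variable variations. This is a generic illustration, not the paper's explicit method; the performance function and variation ranges below are hypothetical:

```python
def variation_estimate(f, x, dx, h=1e-6):
    """First-order worst-case estimate of performance variation:
    delta_f ~= sum_i |df/dx_i| * dx_i, using central finite differences."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)  # sensitivity to variable i
        total += abs(dfdx) * dx[i]
    return total

# Hypothetical performance function of two design variables and one parameter
def performance(v):
    x1, x2, p = v
    return x1 ** 2 + 3 * x2 + 0.5 * p

# Nominal point (2, 1, 4) with variation ranges (0.1, 0.05, 0.2);
# the gradient there is (4, 3, 0.5), so the estimate is 0.4 + 0.15 + 0.1
delta = variation_estimate(performance, [2.0, 1.0, 4.0], [0.1, 0.05, 0.2])
```

A design whose `delta` stays small across candidate nominal points would be the more robust choice in this simplified view.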

Topics: Design , Mechanisms
2003;():349-356. doi:10.1115/DETC2003/DAC-48738.

Knowledge attrition in the cutting tool industry can be mitigated, and the efficiency of the design process improved, if past design solutions, which embody rich expert knowledge, are made easily accessible to the designer for reuse. An effort is underway to realize this situation at Widia Valenite, a company concerned with the design and manufacture of cutting tools. An ontology of cutting tool design exists; however, this ontology is too detailed for designers who want quick access to past designs that satisfy new design problems. The potential of this ontology is realized only if a reuse view of the ontology is taken to retrieve designs from a database of designs. This work presents the development and validation of the information requirements, in the form of descriptor terms selected from the ontology, for which designers make data entries that allow relevant past designs to be recalled. After these descriptor terms are identified, their success in recalling relevant past designs is demonstrated within a CBR application.

2003;():357-364. doi:10.1115/DETC2003/DAC-48739.

This paper presents a measuring method for rapidly verifying geometric errors using a double-readheads planar encoder system (DRPES). With the calculated model and various specified test paths, the proposed measuring method provides rapid performance, simple setup, low cost, and pre-process verification of CNC machine tools. Complete setups and procedures for geometric error verification are clearly presented. Experimental results showed that more information for analyzing the types of errors can be obtained and that the performance of verification can be further improved. The proposed system completes the geometric verification of a CNC machine tool within one hour.

2003;():365-373. doi:10.1115/DETC2003/DAC-48740.

A design methodology is presented which decreases cycle time and opportunities for error through automated execution of a consistent design procedure. The Product Design Generator (PDG) methodology is useful for existing devices with a well-established design process. Two such examples are given, the Thermomechanical In-plane Microactuator (TIM) and the micro force gauge. In both PDGs, the designer inputs a finite set of requirements which automatically updates parametric design models. The necessary analyses are then executed, and product artifacts such as a CAD file, technical document, and test procedures are generated. The application of this method reduces the opportunities for error by ten times for the TIM PDG and five times for the micro force gauge PDG. The design cycle time is reduced from hours to minutes for both devices.

2003;():375-382. doi:10.1115/DETC2003/DAC-48741.

Knowledge-based design is a concept for the computer-aided provision and application of different representations of knowledge along the product development process. In this paper, a knowledge taxonomy is proposed, and possible applications of knowledge-based design and their resulting benefits are discussed, along with open questions and research needs.

Topics: Design
2003;():383-390. doi:10.1115/DETC2003/DAC-48742.

It is necessary for product teams with diverse expertise to communicate during the product development process, notably during design reviews. As this expertise may be distributed across different geographic locations of an organization, design review teams are facing new challenges in effective communication. This paper presents the results of a controlled user study devised to examine the effectiveness of various communication methods for design reviews. Speech only, text only, and free communication methods were chosen to simulate current technologies commonly used in situations of geographic distribution. Primary results from the study include: group design reviews were approximately twice as effective as individual design reviews; free communication produced greater perceived effectiveness than speech only communication, speech only communication produced greater perceived effectiveness than text only communication; and certain personality factors, such as extroversion and intuition, may have contributed to higher productivity in design review teams.

Topics: Design
2003;():391-400. doi:10.1115/DETC2003/DAC-48743.

The development of on-board car safety systems requires an accidentology knowledge base for the development of new functionalities as well as for their improvement and evaluation. Knowledge Discovery in accident Databases (KDD) is one approach to constructing this knowledge base. However, considering the complexity of accident data and the variety of their sources (biomechanics, psychology, mechanics, ergonomics, etc.), the analytical methods of KDD (clustering, classification, association rules, etc.) should be combined with expert approaches. Indeed, there is background knowledge in accidentology that exists in the minds of accidentologist experts and is not formalized in the accident database. The aim of this paper is to develop a Knowledge Representation Model (KRM) intended to incorporate this knowledge into the KDD process. The KRM is implemented in a knowledge-based system, which provides an expert classification of the attributes characterizing an accident. This expert classification provides an efficient tool for data preparation in a KDD process. Our method combines the systemic modeling approach for complex systems with the cognitive modeling approach KOD (Knowledge Oriented Design) from knowledge engineering.

Topics: Safety , Design
2003;():401-407. doi:10.1115/DETC2003/DAC-48744.

Shape knowledge indexing is crucial in both design reuse and knowledge engineering, in which the pivotal issue is to establish a unique representation of the invariant shape properties. Treating the shape of the region of interest as a surface signal, this paper develops a local shape-indexing scheme by exploiting the affine-invariant nature of the Fourier spectrum of the spatial shape distribution. The shape-coding scheme is theoretically proven to be strictly invariant under affine transformations. A framework applying the invariant shape code in shape knowledge indexing is presented. Associated examples and quantitative analysis results are provided to demonstrate the robustness, simplicity, and adaptability of the proposed shape knowledge-indexing scheme. Further, the proposed approach can be regarded as an alternative way to represent local shape knowledge, especially for freeform features.

2003;():409-418. doi:10.1115/DETC2003/DAC-48745.

Knowledge Management (KM) is practiced for a multiplicity of purposes and in a multiplicity of situations. In the scientific literature, KM appears as a more or less unified and more or less generative "field of research" belonging to a community of specialists. Nevertheless, a detailed analysis of the scientific production relating to KM shows that the management of knowledge and competence has become a preoccupation across a large part of science and technology. This is reflected in the large number of actors (universities, consultants, industry, etc.) constituting a community of shared concerns, as well as in a profusion of publications, various networks, and a growing offer of specialized training. However, the great variety of viewpoints and interpretations attached to knowledge and competence management calls for as much caution as in any other scientific discipline, and invites us to understand the meanings given to these terms. Indeed, no fundamental scientific result has really emerged: the literature supplies only approaches that owe more to sentiment than to substance, or very pragmatic applications referring mostly to the particular cases of individual companies.

2003;():419-424. doi:10.1115/DETC2003/DAC-48746.

One of the objectives of concurrent engineering has been to integrate as much knowledge as possible, as early as possible, into the product development process. In such an approach, and owing to designer creativity, new design solutions emerge. Design alternatives then appear; the differences can relate to the functions, the technology, the materials, or the manufacturing process. This paper presents first specifications for modelling those design alternatives. Many more design solutions can then be kept in the designers' minds instead of focusing on the a priori best solution. The final solution is then chosen according to every point of view involved in the design process. First of all, this work aimed at defining the new knowledge that has to be taken into account in product modelling in order to support the management of design alternatives. These new model elements must be integrated with, or linked to, existing product and design process models. The alternatives modelling was then tested on very simple design examples. Afterwards, the design of a surgical simulator was carried out, which demonstrated the real benefits and the feasibility of the alternatives modelling.

Topics: Design
2003;():425-438. doi:10.1115/DETC2003/DAC-48747.

Engineering design is essentially a collaborative decision-making process that requires rigorous evaluation, comparison, and selection of design alternatives, and optimization from a global perspective on the basis of different classes of design criteria. Increasing design knowledge and supporting designers in making sound, intelligent decisions can improve both the design and the efficiency of the design process. This paper develops a knowledge-based decision support model and framework that can be applied extensively to an engineering system and that allows for the seamless integration of collaborative product development with optimal product performance. The developed hybrid robust design decision support model quantitatively incorporates qualitative design knowledge and preferences over multiple, conflicting attributes stored in a knowledge repository, so that a better understanding of the consequences of design decisions can be achieved from an overall perspective. Two new concepts and mechanisms, the transforming bridge and the regulatory switch, are introduced for the integration of decision support models. The results of this work provide a framework for an efficient decision support environment involving distributed resources to shorten the realization of products with optimal life-cycle performance and competitiveness. The developed methodology and framework are generic and flexible enough to be used in a variety of decision problems. Case applications and studies for concept evaluation and selection in design for mass customization are provided.

Topics: Design
2003;():439-447. doi:10.1115/DETC2003/DAC-48749.

Engineering changes are inevitable in a product development life cycle. The requests for engineering changes can be due to new customer requirements, emergence of new technology, market feedback, or variations of components and raw materials. Each change generates a level of impact on costs, time to market, tasks and schedules of related processes, and product components. Change management tools available today focus on the management of document and process changes. Assessments of change impact are typically based on rules of thumb. Our research has developed a methodology and related techniques to quantify and analyze the impact of engineering changes, enabling faster and more accurate decision-making in engineering change management. Reported in this paper are investigations of industrial requirements and fundamental issues of change impact analysis, as well as related research and techniques. A framework for a knowledge-supported change impact analysis system is proposed. Three critical issues of system implementation, namely the integrated design information model, the change plan generator, and the impact estimation algorithms, are addressed. Finally, the benefits and future work are discussed.

2003;():449-457. doi:10.1115/DETC2003/DAC-48750.

The primary obstacle in automated design for crashworthiness is the heavy computational resources required during the optimization process. Hence it is desirable to develop efficient optimization algorithms capable of finding good solutions without requiring too many model simulations. This paper presents an efficient mixed discrete and continuous optimization algorithm, Mixed Reactive Taboo Search (MRTS), and its application to the design of a vehicle B-pillar subjected to roof crush conditions. The problem is sophisticated enough to explore MRTS's capability of identifying multiple local optima within a single optimization run, yet the associated finite element model (FEM) is small enough that the computational resources required for global optimization remain affordable. The optimization results demonstrated that a single run of MRTS identified a set of better designs with a smaller number of simulation runs than multiple runs of Sequential Quadratic Programming (SQP) from several starting points.
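The core move-and-memory loop of a tabu search can be sketched in a few lines. This is a plain, deterministic tabu search, not the paper's Mixed Reactive Taboo Search, and the integer design problem below is hypothetical:

```python
def tabu_search(objective, start, neighbors, iters=200, tenure=5):
    """Minimal tabu search on a discrete design space: move to the best
    non-tabu neighbor each step; recently visited points stay tabu for a
    fixed tenure, which lets the search escape local minima."""
    current = start
    best, best_val = start, objective(start)
    tabu = {start: tenure}
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=objective)
        val = objective(current)
        if val < best_val:
            best, best_val = current, val
        # age the tabu list and add the newly visited point
        tabu = {p: t - 1 for p, t in tabu.items() if t > 1}
        tabu[current] = tenure
    return best, best_val

# Hypothetical two-variable integer design problem with several local minima
def f(x):
    return (x[0] - 3) ** 2 + (x[1] + 1) ** 2 + 5 * ((x[0] + x[1]) % 3)

def nbrs(x):
    return [(x[0] + dx, x[1] + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
            and -10 <= x[0] + dx <= 10 and -10 <= x[1] + dy <= 10]

best, val = tabu_search(f, (8, 8), nbrs)
```

The reactive variant adjusts the tenure on the fly and records visited regions, which is what lets a single run report several distinct local optima.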

2003;():459-472. doi:10.1115/DETC2003/DAC-48751.

Passenger vehicle crashworthiness is one of the essential vehicle attributes. According to the National Highway Traffic Safety Administration (NHTSA), there were over six million vehicle crashes in the United States in the year 2000, which claimed the lives of more than forty thousand persons. Vehicle crashworthiness is difficult to satisfy in a manner compatible with other design decisions about the vehicle. This paper aims at developing a novel methodology for crashworthiness optimization of vehicle structures. Based on observations of the manner of structural deformation, the authors propose an abstraction of the actual vehicle structure, represented as a linkage mechanism with special nonlinear springs at the joints. The special springs are chosen so that the motion of the mechanism captures the overall motion of the actual vehicle structure. It thus becomes possible to optimize the mechanism, an easier task than directly optimizing the vehicle structure. A realization of the optimized mechanism is then performed to obtain an equivalent structure, and direct optimization of the realized structure is performed for further tuning. The study presented shows the success of the proposed approach in finding better designs than direct optimization while using comparatively fewer computational resources.

2003;():473-479. doi:10.1115/DETC2003/DAC-48752.

This paper presents a method in which a multi-objective optimization technique is used together with response surface methods to support crashworthiness design. As in most engineering design problems, several conflicting objectives have to be considered when formulating a design problem as an optimization problem. Here this is exemplified by the desire to minimize the intrusion into the passenger compartment area while simultaneously obtaining a low maximum acceleration during vehicle impact. These two objectives naturally conflict, since low maximum acceleration implies large intrusion. The contribution of this paper is to show a successful application of a set of existing methods to a real-world engineering problem. The paper also presents methods for illustrating the results obtained from the multi-objective optimization.
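Illustrating such a trade-off starts from the set of non-dominated designs. A minimal Pareto filter over sampled (intrusion, peak acceleration) pairs, with both objectives minimized and the sample data hypothetical, might look like:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (intrusion [mm], peak acceleration [g]) pairs for candidates
designs = [(120, 38), (150, 30), (130, 35), (160, 29), (140, 36), (125, 40)]
front = pareto_front(designs)
```

Plotting `front` against the full sample gives exactly the kind of trade-off picture described above: every point on it trades intrusion against peak acceleration, and the dominated designs can be discarded.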

2003;():481-493. doi:10.1115/DETC2003/DAC-48753.

A design sensitivity analysis of a high-frequency structural-acoustic problem is formulated and presented. The Energy Finite Element Method (EFEM) is used to predict the structural-acoustic responses in the high-frequency range, where the coupling between the structural and acoustic domains is modeled using radiation efficiency. The continuum design sensitivity formulation is derived from the governing equation of EFEM, and the discrete method is applied in the variation of the structural-acoustic coupling matrix. Both the direct differentiation and adjoint variable methods are developed for the sensitivity analysis, where the difficulty of the adjoint variable method is overcome by solving a transposed system equation. Parametric design variables such as panel thickness and material damping are considered for the sensitivity analysis, and the numerical sensitivity results show excellent agreement with finite difference results.

2003;():495-502. doi:10.1115/DETC2003/DAC-48754.

The effect of tempering (artificial aging) on the axial crush strength of four-cell extruded rectangular aluminum alloy (AA) 6061 and 6063 tubes is examined in this report. Increasing the aging time from the press-quenched condition increases the flow strength of the material and also increases the axial energy absorbed, up to the point at which the material fractures. Good agreement was obtained among experimental, theoretical, and numerical mean crush loads for various tempers of AA6061 and AA6063, except in cases where a large amount of fracture was present. A recommended temper time of three hours was obtained for AA6063, with an increase in mean crush load of 60%, and a recommended temper time of six hours was obtained for AA6061, with an increase in mean crush load of 40%. These results will be useful in future aluminum automotive body projects, both for their predictive capability and for the temper recommendations.

2003;():503-512. doi:10.1115/DETC2003/DAC-48755.

Direct application of most optimization techniques, especially Multi-Objective Evolutionary Algorithms (MOEAs) that require many response evaluations, is computationally prohibitive for most real-world engineering simulations. In this paper, an approximation-assisted approach to multi-objective optimization of computationally expensive response functions is presented. We employ a Bayesian approach, referred to as Sequential MAXimum Entropy Design (SMAXED), for design of experiments and global approximation of an expensive finite-element model, i.e., crash event simulation of front end of a pick-up truck. The approximation model is optimized using a multi-objective genetic algorithm. It is shown that while the approach dramatically reduces the computational costs, it also finds a good estimate to the Pareto-optimal solution set for such a complex problem.

2003;():513-518. doi:10.1115/DETC2003/DAC-48756.

In machine tool development, control software engineering now accounts for over fifty percent of total development costs. Highly customized user requirements and the pressure to shorten development cycles accompany the need to meet increased quality requirements for the mechatronic product that is the machine tool. This forces a strategy change from the prevailing sequential engineering to concurrent engineering. The paper proposes a Hardware-in-the-Loop simulation environment as an interdisciplinary discussion platform to virtually implement, evaluate, and optimize a machine tool throughout all stages of development.

Topics: Machine tools
2003;():519-525. doi:10.1115/DETC2003/DAC-48757.

This paper describes the framework of nonlinear finite element model validation for vehicle crash simulation. Several methods were developed to quantify transient time-domain data (functional data). The concept of correlation index was proposed to determine the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. The methodologies developed in this paper can also be used for CAE model updating, parameter tuning, and model calibration.
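A simple instance of such a quantitative measure for transient time-domain data is the normalized zero-lag cross-correlation between test and simulation histories. This is a generic index for illustration, not necessarily the correlation index defined in the paper, and the pulse shapes below are hypothetical:

```python
import math

def correlation_index(a, b):
    """Normalized zero-lag cross-correlation between two equally sampled
    time histories; 1.0 means identical waveform shape."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den

# Hypothetical test and CAE acceleration pulses (decaying oscillation)
t = [i * 0.001 for i in range(100)]
test = [math.sin(40 * ti) * math.exp(-10 * ti) for ti in t]
sim = [0.9 * v for v in test]  # same shape as the test pulse, scaled by 0.9

r = correlation_index(test, sim)
```

Because this index is insensitive to amplitude scaling, practical validation metrics typically combine a shape measure like this with separate magnitude and phase error terms.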

2003;():527-533. doi:10.1115/DETC2003/DAC-48758.

Kriging is a popular metamodeling technique for the analysis of computer experiments. However, the likelihood function near the optimum is flat in some situations, which may lead to very large random variation in the maximum likelihood estimate. To overcome this difficulty, a penalized likelihood approach is proposed for the kriging model. The proposed method is particularly important in the context of a computationally intensive simulation model, where the number of simulation runs must be kept small. We demonstrate the proposed approach for the reduction of piston slap, an unwanted engine noise due to piston secondary motion. Issues related to practical implementation of the proposed approach are discussed.

Topics: Computers
2003;():535-543. doi:10.1115/DETC2003/DAC-48759.

A variety of metamodeling techniques have been developed in the past decade to reduce the computational expense of computer-based analysis and simulation codes. Metamodeling is the process of building a “model of a model” that provides a fast surrogate for a computationally expensive computer code. Common metamodeling techniques include response surface methodology, kriging, radial basis functions, and multivariate adaptive regression splines. In this paper, we present Support Vector Regression (SVR) as an alternative technique for approximating complex engineering analyses. The computationally efficient theory behind SVR is presented, and SVR approximations are compared against the aforementioned four metamodeling techniques using a testbed of 22 engineering analysis functions. SVR achieves more accurate and more robust function approximations than these four metamodeling techniques and shows great promise for future metamodeling applications.

2003;():545-554. doi:10.1115/DETC2003/DAC-48760.

The metamodeling approach has been widely used because of the high computational cost of high-fidelity simulations in engineering design. The accuracy of metamodels is directly related to the experimental designs used. Optimal experimental designs have been shown to have good "space filling" and projective properties. However, the high cost of constructing them limits their use. In this paper, a new algorithm for constructing optimal experimental designs is developed. Two major developments are involved in this work: an efficient global optimal search algorithm, named the enhanced stochastic evolutionary (ESE) algorithm, and efficient algorithms for evaluating optimality criteria. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time, the number of exchanges needed for generating new designs, and the optimality criteria achieved. The algorithm is also flexible enough to construct various classes of optimal designs that retain certain structural properties.
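The element-exchange idea behind such construction algorithms can be sketched for a maximin Latin hypercube design: swap two entries within a randomly chosen column, which preserves the Latin hypercube structure, and keep the swap only if the minimum inter-point distance does not decrease. This is a greatly simplified stand-in for the ESE algorithm, with hypothetical sizes:

```python
import itertools
import random

def min_dist2(design):
    """Smallest squared Euclidean inter-point distance (maximin criterion)."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in itertools.combinations(design, 2))

def maximin_lhd(n, k, iters=2000, seed=1):
    """Search for an n-point, k-factor Latin hypercube maximizing the minimum
    inter-point distance, via random within-column element exchanges."""
    rng = random.Random(seed)
    # each column is a permutation of 0..n-1: the Latin hypercube property
    cols = [rng.sample(range(n), n) for _ in range(k)]
    best = min_dist2(list(zip(*cols)))
    for _ in range(iters):
        j = rng.randrange(k)
        r1, r2 = rng.sample(range(n), 2)
        cols[j][r1], cols[j][r2] = cols[j][r2], cols[j][r1]  # exchange
        val = min_dist2(list(zip(*cols)))
        if val >= best:
            best = val          # keep improving (or equal) exchanges
        else:
            cols[j][r1], cols[j][r2] = cols[j][r2], cols[j][r1]  # revert
    return list(zip(*cols)), best

design, crit = maximin_lhd(8, 2)
```

ESE differs in that it accepts some worsening exchanges under an adaptive threshold and tunes its search parameters on the fly, which is what makes it a global rather than greedy search.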

2003;():555-565. doi:10.1115/DETC2003/DAC-48761.

Modern engineering design problems often involve computation-intensive analysis and simulation processes. Design optimization based on such processes is desired to be efficient, informative, and transparent. This work proposes a rough set based approach that can identify multiple subregions in a design space, within which all of the design points are expected to have a performance value equal to or less than a given level. The rough set method is applied iteratively on a growing sample set. A novel termination criterion is also developed to ensure a modest number of total expensive function evaluations to identify these subregions and search for the global optimum. The significance of the proposed method is twofold. First, it provides an intuitive way to establish the mapping from the performance space to the design space: given a performance level, its corresponding design region(s) can be identified. Such a mapping can be used to explore and visualize the entire design space. Second, it can be naturally extended to a global optimization method. It also shows potential for broader application to problems such as robust design optimization. The proposed method was tested on a number of test problems.
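The performance-to-design-space mapping can be illustrated with a crude grid-based sketch (not the paper's rough set machinery): keep only the cells whose sampled points all meet the performance level, a rough "lower approximation" of the good region. The test function, threshold, and grid size are hypothetical:

```python
import random

def promising_cells(f, level, n_samples=4000, grid=10, seed=2):
    """Partition [0,1]^2 into grid x grid cells and keep the cells in which
    every sampled point satisfies f(x, y) <= level."""
    rng = random.Random(seed)
    hits = {}
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        cell = (int(x * grid), int(y * grid))
        hits.setdefault(cell, []).append(f(x, y) <= level)
    # a cell survives only if all of its samples meet the level
    return sorted(c for c, flags in hits.items() if all(flags))

# Hypothetical performance function: a bowl centered at (0.5, 0.5)
f = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2
cells = promising_cells(f, level=0.05)
```

Lowering `level` shrinks the identified region toward the optimum, which is the intuition behind extending such a mapping into a global optimization loop.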

2003;():567-576. doi:10.1115/DETC2003/DAC-48762.

The use of kriging models for approximation and global optimization has been steadily on the rise in the past decade. The standard approach used in the Design and Analysis of Computer Experiments (DACE) is to use an Ordinary kriging model to approximate a deterministic computer model. Universal and Detrended kriging are two alternative types of kriging models. In this paper, a description of the basics of kriging is given, highlighting the similarities and differences among these three types of kriging models and the underlying assumptions behind each. A comparative study of the three types of kriging models is then presented using six test problems. The methods of Maximum Likelihood Estimation (MLE) and Cross-Validation (CV) for model parameter estimation are compared for the three kriging model types. A one-dimensional problem is first used to visualize the differences between the models. To show applications in higher dimensions, four two-dimensional problems and a five-dimensional problem are also given.

Topics: Computers
2003;():577-586. doi:10.1115/DETC2003/DAC-48763.

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) approach which systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can be used as a standalone global optimization method for inexpensive problems as well. Limitations of the method are also identified and discussed.
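
A heavily simplified 1-D caricature of the mode-pursuing behavior (not the authors' MPS algorithm, which uses a sampling density function and quadratic regression) is to mix uniform coverage of the domain with samples concentrated near the current best point:

```python
import numpy as np

def mode_pursuing_minimize(f, lo, hi, n_iter=20, batch=50, seed=1):
    """Toy mode-pursuing sampler: each iteration splits the batch between
    uniform exploration of [lo, hi] and normal samples around the current
    mode (best point so far), so sampling concentrates near the minimum
    while still statistically covering the whole interval."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, batch)
    F = f(X)
    for _ in range(n_iter):
        mode = X[np.argmin(F)]
        local = np.clip(rng.normal(mode, 0.05 * (hi - lo), batch // 2), lo, hi)
        explore = rng.uniform(lo, hi, batch // 2)
        cand = np.concatenate([local, explore])
        X = np.concatenate([X, cand])
        F = np.concatenate([F, f(cand)])
    i = np.argmin(F)
    return X[i], F[i]

x_best, f_best = mode_pursuing_minimize(lambda x: (x - 0.7) ** 2, 0.0, 5.0)
```

The balance between the local and uniform sub-batches is what the lead paragraph calls pursuing the mode while statistically covering the search space.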

2003;():587-595. doi:10.1115/DETC2003/DAC-48764.

Data clustering methods can be a useful tool for engineering design that is based on numerical optimization. The clustering method is an effective way of producing representative designs, or clusters, from a large set of potential designs. These methods have recently been applied to the clustering of Pareto-optimal solutions from multi-objective optimization. The results presented here focus on the application of clustering to single objective optimization results. In the case of single objective optimization, the method is used to determine the clusters in a set of quasi-optimal feasible solutions generated by an optimizer. A data clustering procedure based on an evolutionary method is briefly described. The number of clusters is determined automatically and need not be known a priori. The method is demonstrated by application to the results of a turbine blade coolant passage shape optimization problem. The solutions are transformed to a lower-dimensional space for better understanding of their variance and character. Engineering information, such as the shapes and locations of the internal passages, is supported by the visualization of clustered solutions. The clustering, transformation, and visualization methods presented in this study might be applicable to the increasing interpretation demands of design optimization.
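
A minimal stand-in for clustering optimization results with an automatically chosen cluster count (plain k-means plus the silhouette criterion here, rather than the paper's evolutionary method; the two "design families" are synthetic) might look like:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means (Lloyd's algorithm); returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def mean_silhouette(X, labels):
    """Average silhouette width: higher means better-separated clusters."""
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    n, scores = len(X), []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, own].mean()
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def auto_cluster(X, k_range=range(2, 6)):
    """Pick the cluster count with the best silhouette score, so the
    number of clusters need not be known a priori."""
    results = {k: kmeans(X, k) for k in k_range}
    best_k = max(results, key=lambda k: mean_silhouette(X, results[k]))
    return best_k, results[best_k]

rng = np.random.default_rng(3)
designs = np.vstack([rng.normal(0, 0.3, (20, 2)),    # one family of designs
                     rng.normal(5, 0.3, (20, 2))])   # a second family
best_k, labels = auto_cluster(designs)
```

With two well-separated families of quasi-optimal designs, the silhouette criterion selects two clusters without the count being specified in advance.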

2003;():597-603. doi:10.1115/DETC2003/DAC-48765.

We propose an optimization method for a semi-active shock absorber for use in aircraft landing gear, in order to handle variations in the maximum vertical acceleration of an aircraft during landing caused by the variation of the aircraft mass due to the variations in the number of passengers, and the amounts of cargo and fuel. In this optimization, the maximum vertical acceleration of an aircraft is set as an objective function to be minimized. Design variables searched in the first step of this optimization are discrete orifice areas formed by the outer surface of a hollow metering pin and a hole in the semi-active shock absorber. The design variable searched in the second step is a compensating orifice area which is controlled based on the mass variation. Using the optimum target orifice area obtained in the second step, we optimally determine a practical orifice area that is controlled by a stepping motor. The optimizations for a passive shock absorber and for semi-active shock absorbers with target and practical orifice areas indicate that the semi-active shock absorbers can handle aircraft mass variation much better than the optimum passive shock absorber. Furthermore, the robustness of the optimum practical orifice area controlled by a stepping motor is shown via simulation.

2003;():605-614. doi:10.1115/DETC2003/DAC-48766.

Constraint Programming (CP) is a promising technique for managing uncertainty in conceptual design. It provides efficient algorithms for reducing, as quickly as possible, the domains of the design and performance variables while complying with the engineering and performance constraints linking them. In addition, CP techniques are suitable for graphically representing 3D projections of the complete design space. This is a useful capability for a better understanding of the product concept’s degrees of freedom and a valuable alternative to optimization based upon the construction of an arbitrary preference aggregation function. Unfortunately, one of the main impediments to using Constraint Programming on industrial problems of practical interest is that constraints must be represented by analytical equations, which is not the case for demanding mechanical performance evaluations — such as meshing and finite element computations — that are usually obtained only after lengthy simulations. We propose to use metamodeling techniques (MM) to generate approximate mathematical models of these analyses which can be employed directly within a CP environment, expanding the scope of CP to applications that previously could not be solved by CP due to the unavailability of analytical equations. We show that there is a tradeoff between metamodel fidelity and the resulting CP constraint tractability, and a strategy to find this compromise is presented. The case study of a combustion chamber design shows, surprisingly, that the best compromise is to favor the simplest and coarsest first-order response surface model.

2003;():615-624. doi:10.1115/DETC2003/DAC-48767.

A methodology is presented for studying the effects of automobile emission policies on the design decisions of profit-seeking automobile producers in a free-entry oligopoly market. The study does not attempt to model short-term decisions of specific producers. Instead, mathematical models of engineering performance, consumer demand, cost, and competition are integrated to predict the effects of design decisions on manufacturing cost, demand, and producer profit. Game theory is then used to predict vehicle designs that producers would have economic incentive to produce at market equilibrium under several policy scenarios. The methodology is illustrated with three policy alternatives for the small car market: corporate average fuel economy (CAFE) regulations, carbon dioxide emissions taxes, and diesel fuel vehicle quotas. Interesting results are derived; for example, it is predicted that in some cases a stiffer regulatory penalty can result in lower producer costs because of competition. This mathematical formulation establishes a link between engineering design, business, and marketing through an integrated optimization model that is used to provide insight necessary to make informed environmental policy.

2003;():625-632. doi:10.1115/DETC2003/DAC-48768.

This paper presents a new model for structural topology optimization. We represent the structural boundary by a level set model that is embedded in a scalar function of a higher dimension. Such level set models are flexible in handling complex topological changes and are concise in describing the boundary shape of the structure. Furthermore, a gradient-based procedure leads to a numerical algorithm for the optimum solution satisfying specified constraints. The result is a 3D topology optimization technique with outstanding flexibility of handling topological changes, without resorting to homogenization-based relaxations that are widely used in the recent literature.

2003;():633-639. doi:10.1115/DETC2003/DAC-48769.

The standard problem of finding the optimal layout of structural material associated with maximum stiffness is expanded to include consideration of thermal criteria. The problem is posed as a three-phase layout problem where the phases include an insulating or fire retardant material and an unknown distribution of heat sources, in addition to the structural material. The model used is simple, yet results suggest that the introduction of measures to control the temperature in the structure when subjected to significant heat transfer rates can result in layouts that differ substantially from solutions where thermal issues are ignored.

Topics: Heat , Optimization , Topology
2003;():641-648. doi:10.1115/DETC2003/DAC-48770.

The present work introduces a new methodology for solving the topology optimization problem of a compliant gripper. A hybrid optimization technique is developed using simulated annealing as a random search method, while the simplex method (Nelder-Mead) is used as a direct search method. A new modified technique of motion from one search point to another, based on the discrete nature of adding and/or removing a structural member, is proposed. The traditional continuous simulated annealing technique is used to find the members’ heights. A discrete univariate search method is adopted after the simulated annealing and before the simplex method. The required number of function evaluations corresponds to about 14% of that used in the old method and in previous work in the literature, and about 86% of the optimization time is saved. The optimum design of a compliant mechanism is conducted for maximum flexibility and stiffness using the developed hybrid optimization technique.
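
The two-phase idea (a stochastic search followed by a direct search polish) can be sketched as follows. This is an illustrative Python sketch on a smooth test function; a simple coordinate pattern search stands in for Nelder-Mead, and all settings are assumptions:

```python
import numpy as np

def simulated_annealing(f, x0, T0=1.0, cooling=0.95, n_iter=300, step=0.5, seed=0):
    """Random-search phase: accept worse moves with Boltzmann probability."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), f(x0)
    best, fbest, T = x.copy(), fx, T0
    for _ in range(n_iter):
        cand = x + rng.normal(0, step, size=x.shape)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling
    return best

def pattern_search(f, x0, step=0.25, tol=1e-6):
    """Direct-search phase (a simple stand-in for Nelder-Mead): poll each
    coordinate direction and shrink the step when no move helps."""
    x, fx = np.asarray(x0, float), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                cand = x.copy()
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x, fx

f = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
coarse = simulated_annealing(f, [0.0, 0.0])   # global, stochastic phase
x_opt, f_opt = pattern_search(f, coarse)      # local, deterministic polish
```

The annealing phase supplies a good basin; the direct search then refines it cheaply, which is the division of labor described above.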

2003;():649-658. doi:10.1115/DETC2003/DAC-48771.

This study utilizes the topology optimization with the finite element method and the simulated annealing algorithms to optimize the structure and the dynamic performance of a bike frame with dampers. Design domains, loadings and boundary conditions of bike frames are defined. Joint locations of a damper with the front and the rear frames and the joint location of the front and the rear frames are considered as design variables. The transient response and the acceleration of a bike on a sinusoid curved road surface are analyzed. Effects of the joint locations and the stiffness of frames on the dynamic performance are studied. The structural topology of frames and joint locations of a bike are recommended to improve the dynamic performance.

2003;():659-671. doi:10.1115/DETC2003/DAC-48772.

A robust topology exploration method is under development in which robust design techniques are extended to the early stages of a design process when a product’s layout or topology is determined. The performance of many designs is strongly influenced by both topology, or the geometric arrangement and connectivity of a design, and potential variations in factors such as the operating environment, the manufacturing process, and specifications of the design itself. While topology design and robust design are active research areas, little attention has been devoted to integrating the two categories of design methods. In this paper, we move toward a comprehensive robust topology exploration method by coupling robust design methods, namely, design capability indices with topology design techniques. The resulting design method facilitates efficient, effective realization of robust designs with complex topologies. The method is employed to design extruded cellular materials with robust, desirable elastic properties. For this class of materials, 2D cellular topologies are customizable and largely govern multifunctional performance. By employing robust, topological design methods, we obtain cellular material designs that are characterized by ranged sets of design specifications with topologies that reliably meet a set of design requirements and are relatively simple and robust to anticipated variability.
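
A design capability index in the style of a process capability index Cpk can be computed as below (an assumed normal-response formulation for illustration, not necessarily the exact indices used by the authors; the limits and statistics are made up):

```python
def design_capability_index(mean, std, lower, upper):
    """Capability-index-style measure for a ranged design: values >= 1
    indicate the design reliably meets the requirement limits despite
    variability (assumption: normally distributed response)."""
    cdl = (mean - lower) / (3.0 * std)   # margin to the lower limit
    cdu = (upper - mean) / (3.0 * std)   # margin to the upper limit
    return min(cdl, cdu)

# A design whose response mean sits 4.5 sigma inside both limits:
cdk = design_capability_index(mean=10.0, std=0.2, lower=9.1, upper=10.9)
```

A value of 1.5 here means the nearer requirement limit is 4.5 standard deviations away, the kind of margin used to certify that a ranged set of design specifications reliably meets its requirements.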

Topics: Design
2003;():673-682. doi:10.1115/DETC2003/DAC-48773.

Computer Aided Engineering (CAE) has been successfully utilized in mechanical industries, but few mechanical design engineers use CAE tools that include structural optimization, since the development of such tools has been based on continuum mechanics that limit the provision of useful design suggestions at the initial design phase. In order to mitigate this problem, a new type of CAE based on classical structural mechanics, First Order Analysis (FOA), has been proposed. This paper presents the outcome of research concerning the development of a structural topology optimization methodology within FOA. This optimization method is constructed based on discrete and function-oriented elements such as beam and panel elements, and sequential convex programming. In addition, examples are provided to show the utility of the methodology presented here for mechanical design engineers.

2003;():683-691. doi:10.1115/DETC2003/DAC-48774.

To accommodate the dual objectives of many engineering applications, one to minimize the mean compliance for the stiffest structure under normal service conditions and the other to maximize the strain energy for energy absorption during excessive loadings, topology optimization with a multi-material model is applied to the design of an energy absorbing structure in this paper. The effective properties of the three-phase material are derived using a spherical micro-inclusion model. The dual objectives are combined in a ratio formulation. Numerical examples from the proposed method are presented and discussed.

2003;():693-702. doi:10.1115/DETC2003/DAC-48775.

The distributed compliance and smooth deformation field of compliant mechanisms provide a viable means to achieve shape morphing in many systems, such as flexible antenna reflectors and morphing aircraft wings. We previously developed a systematic synthesis approach to design shape morphing compliant mechanisms using Genetic Algorithm (GA). However, the design variable definition, in fact, allows the generation of invalid designs (disconnected structures) within the GA. In this research, we developed a load path representation to include the structure connectivity information into the design variables, thus improving the GA efficiency. The number of design variables is also independent of the number of elements in the finite element model that is used to solve for the structural deformation. The shape morphing synthesis approach, incorporating this path representation, is demonstrated through two examples, followed by discussions on further refinements.

2003;():703-710. doi:10.1115/DETC2003/DAC-48776.

Finite element analysis has become a key technology in the design processes of the manufacturing industry. We focus on hexahedral meshes because using a hexahedral mesh increases the quality of analysis. However, generating high-quality hexahedral meshes is a very difficult problem, and there are many challenging research topics. Our goal is to develop a method to generate hexahedral meshes automatically for general volumes. Our method uses an intermediate model to recognize the input volume. The intermediate model is defined in integer 3-dimensional space, and the faces of the intermediate model are perpendicular to the coordinate axes. The hexahedral mesh is generated by dividing the intermediate model into integer grids, and blocks of grids are projected onto the original volume. In this paper, we describe the method to generate the topology of the intermediate model. We use a face clustering technique to generate this topology. The faces of the input volume are clustered into six types, according to the three coordinate axes and their directions, and the clustered faces become the faces of the intermediate model.

2003;():711-720. doi:10.1115/DETC2003/DAC-48777.

Although research on free-form surface deformation has produced a large variety of methods, very few of them can really control the shape in an adequately interactive way, and most propose a unique solution to the underconstrained system of equations coming out of their deformation models. In our approach, where the deformation is performed through the static equilibrium modification of a bar network coupled to the surface control polyhedron, different minimizations have been proposed to overcome these limits and form a set of representative parameters that can be used to give access to the desired shape. In this paper, a reformulation of the optimization problem is presented, enabling the generation of new shapes based on a common set of minimization criteria. Such a modification widens the variety of shapes still verifying the same set of constraints. When generalizing some of these minimizations, the user has access to a continuous set of shapes while acting on a single parameter. Taking advantage of the reformulation, anisotropic surface behaviors are considered too and briefly illustrated. In addition, the possibility of defining several minimizations on different areas of a surface is sketched and aims at giving the user more freedom in local shape definition. All of the proposed minimizations are illustrated through examples produced with our surface deformation software.

2003;():721-735. doi:10.1115/DETC2003/DAC-48778.

This paper presents a method for removing geometric noise from triangulated meshes while preserving edges and other intended geometric features. The method iteratively updates the position of each mesh vertex with a position lying on a locally fitted bivariate polynomial. The user selects the width of the vertex neighborhood, the order of the fitted polynomial, and a threshold angle to control the effects of the smoothing operation. To avoid smoothing over discontinuities, the neighborhood can be eroded by removing vertices with normals that deviate beyond a threshold from the estimated median normal of the neighborhood. The method is particularly suitable for use on laser scanner generated meshes of automobile outer body panels. Smoothing methods used on these meshes must allow C2 continuous equilibrium surfaces and must minimize shrinkage. Despite the abundance of existing smoothing schemes, none addresses both of these specific problems. This paper demonstrates the effectiveness of our method with both synthetic and real world examples.
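
The core vertex update, fitting a bivariate polynomial to a vertex neighborhood by least squares and moving the vertex onto it, can be sketched as below (illustrative Python on a synthetic patch; the neighborhood selection, normal-based erosion, and threshold-angle logic from the paper are omitted):

```python
import numpy as np

def smooth_vertex(pts, center_idx, order=2):
    """Replace one vertex's height with the value of a locally fitted
    bivariate polynomial (least squares), mirroring the paper's iterative
    per-vertex update. pts is (n, 3): x, y, z of the neighborhood."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # monomial basis x^i * y^j with i + j <= order
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    xi, yi = pts[center_idx, 0], pts[center_idx, 1]
    row = np.array([xi**i * yi**j for i in range(order + 1)
                    for j in range(order + 1 - i)])
    new = pts.copy()
    new[center_idx, 2] = row @ coef        # project vertex onto the fit
    return new

rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
true_z = 2 * gx + 3 * gy                   # a smooth "panel"
noisy = np.column_stack([gx.ravel(), gy.ravel(),
                         (true_z + rng.normal(0, 0.05, gx.shape)).ravel()])
smoothed = smooth_vertex(noisy, center_idx=12)   # center of the 5x5 patch
```

Iterating this update over every vertex, with the user-selected neighborhood width and polynomial order, gives the smoothing pass described above.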

2003;():737-744. doi:10.1115/DETC2003/DAC-48779.

Occlusion detection is a fundamental and important problem in optical sensor inspection planning. Many view-planning algorithms have been developed for optical inspection; however, few of them explicitly develop practical algorithms for occlusion detection. This paper presents a hierarchical space partition approach that divides both the positional and surface normal space of an object for fast occlusion detection. A k-d tree is used to represent this partition. A novel concept of δ-occlusion is introduced to detect occlusion for objects in an unorganized point cloud representation. Based on the δ-occlusion concept, several propositions regarding range search on a k-d tree have been developed for occlusion detection. Implementation of this approach demonstrated that significant time can be saved in occlusion detection by using the partition of both positional and surface normal space.
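
A minimal k-d tree with an axis-aligned range search over the positional space (the normal-space partition and the δ-occlusion tests themselves are omitted) might be sketched as:

```python
import numpy as np

def build_kdtree(pts, idx=None, depth=0):
    """Recursively build a k-d tree over point rows, alternating the
    split axis by depth (the positional part of the paper's partition)."""
    if idx is None:
        idx = np.arange(len(pts))
    if len(idx) == 0:
        return None
    axis = depth % pts.shape[1]
    order = idx[np.argsort(pts[idx, axis])]
    mid = len(order) // 2
    return {"i": order[mid], "axis": axis,
            "left": build_kdtree(pts, order[:mid], depth + 1),
            "right": build_kdtree(pts, order[mid + 1:], depth + 1)}

def range_search(node, pts, lo, hi, found):
    """Collect indices of points inside the axis-aligned box [lo, hi],
    pruning subtrees that cannot intersect the box."""
    if node is None:
        return
    p = pts[node["i"]]
    if np.all(p >= lo) and np.all(p <= hi):
        found.append(node["i"])
    a = node["axis"]
    if lo[a] <= p[a]:                       # box reaches into the left half
        range_search(node["left"], pts, lo, hi, found)
    if hi[a] >= p[a]:                       # box reaches into the right half
        range_search(node["right"], pts, lo, hi, found)

rng = np.random.default_rng(0)
cloud = rng.random((200, 3))                # unorganized point cloud
tree = build_kdtree(cloud)
hits = []
range_search(tree, cloud, np.array([0.2] * 3), np.array([0.6] * 3), hits)
```

The pruning is what yields the time savings: subtrees whose half-space cannot intersect the query box are never visited.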

2003;():745-754. doi:10.1115/DETC2003/DAC-48780.

In a multi-axis hybrid manufacturing system, it is necessary to utilize a machining process to improve surface accuracy and guarantee overall geometry after the deposition process. Due to the complexity of the multi-axis system, it is necessary to find proper orientations of cutting tools for the CNC machine to finish surface machining. This paper presents an algorithm to find a collision-free surface machining toolpath for a given workpiece. The concept of the 2-D visibility map and its properties are discussed, and the algorithm to compute the 2-D visibility map is presented. With the help of the 2-D visibility map, an optimal collision-free tool approach direction can be easily determined. The type of surface machining toolpath for different types of surfaces is also decided based on topological information, and the machining toolpath (CL data for the milling tool) is generated. The developed planning scheme has been tested via machine simulations, which have shown that it can be effectively applied to cutter-path generation for multi-axis surface machining.

2003;():755-764. doi:10.1115/DETC2003/DAC-48781.

A taxonomy that classifies issues affecting the collaborative design process is proposed. These factors, which may inhibit or facilitate the progress or success of a design team, provide a description of collaborative design situations. The taxonomy includes top-level attributes of team composition, communication, distribution, design approach, information, and nature of the problem. An example collaborative design situation is used to illustrate the application of the taxonomy. This taxonomy is an initial step towards the creation of new collaborative support agent-based tools structured upon a fundamental understanding of the collaborative process with a theoretical foundation.

Topics: Design
2003;():765-774. doi:10.1115/DETC2003/DAC-48782.

The decomposition and coordination of decisions in the design of complex engineering systems is a great challenge. Companies that design these systems routinely allocate design responsibility for the various subsystems and components to different people, teams, or even suppliers. The mechanisms behind this network of decentralized design decisions create difficult management and coordination issues. However, developing efficient design processes is paramount, especially under market pressures and customer expectations. Standard techniques for modeling and solving decentralized design problems typically fail to capture the underlying dynamics of the decentralized processes and therefore result in suboptimal solutions. This paper aims to model and understand the mechanisms and dynamics behind a decentralized set of decisions within a complex design process. By using concepts from the fields of mathematics and economics, including Game Theory and the Cobweb Model, we model a simple decentralized design problem and provide efficient solutions. This new approach uses numerical series and linear algebra as tools to determine conditions for convergence of such decentralized design problems. The goal of this paper is to establish the first steps towards understanding the mechanisms of decentralized decision processes. This includes two major steps: studying the convergence characteristics, and finding the final equilibrium solution of a decentralized problem. Illustrations of the developments are provided in the form of two decentralized design problems with different underlying behavior.
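
The kind of convergence condition such linear-algebraic analysis yields can be illustrated with an affine best-response iteration: the decentralized process converges exactly when the spectral radius of the coupling matrix is below one (the coupling values below are made up for illustration, not taken from the paper's examples):

```python
import numpy as np

def decentralized_iteration(B, a, x0, n_iter=100):
    """Each subsystem's best response is affine in the others' decisions:
    x_{k+1} = B x_k + a. In Cobweb-style analysis, this iteration
    converges iff the spectral radius of B is below 1."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = B @ x + a
    return x

B = np.array([[0.0, 0.4],
              [0.3, 0.0]])            # cross-coupling between two designers
a = np.array([1.0, 2.0])
spectral_radius = max(abs(np.linalg.eigvals(B)))
x_star = decentralized_iteration(B, a, x0=[0.0, 0.0])
equilibrium = np.linalg.solve(np.eye(2) - B, a)   # closed-form fixed point
```

Here the spectral radius is about 0.35, so the back-and-forth between the two designers settles to the same equilibrium that solving (I − B)x = a gives directly; with a spectral radius above one, the same iteration would diverge.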

Topics: Design
2003;():775-784. doi:10.1115/DETC2003/DAC-48783.

In Computer-Aided Design, when creating the solid model of a part, the designer knows how the part will interface with other parts; however, this information is not stored with the part model. For catalog parts, it would be useful to be able to embed this assembly information into the part model in order to automate the process of applying mating constraints, to reduce the assembly designer’s effort, or to allow for automated exploration of alternative configurations. This research evaluates and compares different schemes for capturing the attributes of assembly interfaces and appending that information to solid models. The schemes studied involve (i) different combinations of ways to constitute ports and include labeling, (ii) different bases for determining port compatibility with respect to design intent, and (iii) different ways of evaluating connectability with respect to part geometry. The scheme we conclude is best minimizes the number of ways the system will try to put parts together, at the expense of additional effort from the solid model designer to provide more information.

Topics: Manufacturing , Design
2003;():785-793. doi:10.1115/DETC2003/DAC-48784.

As CAE (Computer Aided Engineering) applications become increasingly precise, the knowledge and technical skill required to operate such applications has become more highly specialized. However, such tools have not been utilized in the initial design process of mechanical products, where designers cannot construct detailed analytical models. This paper proposes a cross-sectional shape design optimization system that supports the initial design process for bar structures. The cross-sectional design problem is formulated as an eight-objective optimization problem that can be solved using genetic algorithms. A method for generating cross-sectional shapes satisfying designer-required characteristics is also proposed. These methods, which reduce the number of trial and error processes and product design failures, are expected to enable shortened product development lead-times.

2003;():795-804. doi:10.1115/DETC2003/DAC-48785.

We have developed a data visualization interface that facilitates a design by shopping paradigm, allowing a decision-maker to form a preference by viewing a rich set of good designs and use this preference to choose an optimal design. Design automation has allowed us to implement this paradigm, since a large number of designs can be synthesized in a short period of time. The interface allows users to visualize complex design spaces by using multi-dimensional visualization techniques that include customizable glyph plots, parallel coordinates, linked views, brushing, and histograms. As is common with data mining tools, the user can specify upper and lower bounds on the design space variables, assign variables to glyph axes and parallel coordinate plots, and dynamically brush variables. Additionally, preference shading for visualizing a user’s preference structure and algorithms for visualizing the Pareto frontier have been incorporated into the interface to help shape a decision-maker’s preference. Use of the interface is demonstrated using a satellite design example by highlighting different preference structures and resulting Pareto frontiers. The capabilities of the design by shopping interface were driven by real industrial customer needs, and the interface was demonstrated during a spacecraft design exercise conducted by a team of Mars spacecraft design experts at Lockheed Martin.
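
The Pareto frontier such an interface visualizes can be extracted with a simple non-dominated filter (minimization assumed for all objectives; the candidate designs are illustrative):

```python
import numpy as np

def pareto_frontier(points):
    """Return indices of non-dominated designs (all objectives minimized):
    a design is on the frontier if no other design is <= in every
    objective and strictly < in at least one."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) &
                           np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# cost vs. mass trade-off for five candidate designs
designs = np.array([[1.0, 5.0],
                    [2.0, 3.0],
                    [3.0, 4.0],    # dominated by [2, 3]
                    [4.0, 1.0],
                    [4.0, 2.0]])   # dominated by [4, 1]
front = pareto_frontier(designs)
```

The surviving designs are exactly the set a shopper trades off among; brushing and preference shading then operate on this reduced set.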

Topics: Design , Visualization
2003;():805-814. doi:10.1115/DETC2003/DAC-48786.

This paper presents a framework for representing and deploying error-proofs (poka-yoke) in the product development process. Information technology (IT) already plays a key role in product development through tools such as numerical computation, CAD, simulations, and process planning. Information management for error-proofing in manufacturing is also quite common in many industries. However, experts agree that many field failures and quality problems trace back to errors in engineering design. While there are many case studies on design process error-proofing, error-proofs must be deployed by leveraging engineering information systems to be effective. Towards this goal, this paper proposes the use of quality function deployment (QFD) to characterize potential design errors, evaluate the risks, identify effective error-proofing elements, and prioritize their implementation.

Topics: Design , Errors
2003;():815-821. doi:10.1115/DETC2003/DAC-48787.

A microfactory is a system that can perform manufacturing processes within a very limited space, such as a desktop. However, design optimization of miniature machine tools in microfactories has not been studied sufficiently. Since miniature machine tool designs are not supported by existing design experience as normal machine tools are, design guidelines for miniature machine tools are strongly needed. A design tool to analyze machine performance without prototyping is also necessary, because miniature machines have wider design choices than normal machine tools, owing to their small size and fewer constraints. This paper focuses on a robust design tool that combines form-shaping theory with the Taguchi method to roughly estimate the performance of miniature machine tools at the conceptual design stage. The effort not only identifies critical design parameters that significantly influence the machining tolerance, but also determines which structure has the best theoretical performance. The paper shows that the proposed design evaluation method can help machine tool designers determine the optimum structure of a miniature machine tool. The study also realizes two designs of miniature mills to measure positioning errors. The measurements confirm that the design evaluation method can predict machine performance well enough for use at the conceptual design stage. The paper concludes that the design evaluation method is applicable to the systematic miniaturization of a machine tool.

2003;():823-832. doi:10.1115/DETC2003/DAC-48788.

The embodiment design stage involves determination of geometric sizes, key parameter values, and matching of component variables to system requirements. This embodiment design stage can be parametrically represented as an iterative design-redesign problem. This paper presents a domain independent characterization of such problems; the characterization includes problem definition, design relations/procedures, and measures of goodness. The paper also discusses representation issues and solution techniques for design-redesign problems. Design tasks are differentiated as domain independent or problem specific and the scope of each design task with respect to the characterization is delineated. A Design Shell implemented on the basis of this characterization is described. This shell can be configured for evaluating designs in any domain. A case study illustrates the use of this Design Shell in characterizing a specific design problem and exploring its design space.

2003;():833-841. doi:10.1115/DETC2003/DAC-48789.

Mere collection of failure information and accident records does not effectively relay the knowledge associated with each case to the reader. We propose to collect data in a structured manner so the message is better transferred to the information receiver. We further developed a scheme that records the essence of each failure case as a sequence of predefined phrases, displayed to the recorder in a hierarchy of phrases. We call the sequence the “scenario” of the event. Arranging the phrases in descending steps and supplementing them with an illustration and a key knowledge sentence composes the visual summary of the case. A glance at the visual summary and a reading of the scenario steps generate a good image of the case in the receiver’s mind. Among the predefined hierarchical phrases, we call those that express the cause of the event failure-cause phrases. Recording high-level failure-cause phrases from the hierarchy forces the event recorder to evaluate the root cause of the failure. To the top- and second-level (in the phrase hierarchy) failure-cause phrases, we assigned 5-space vector components to characterize each phrase in terms of “knowledge”, “carefulness”, “judgment”, “organization”, and “nature”. This vector characterization of the failure-cause phrases with the scenario allows us to further characterize each failure case as a linear combination of the predefined phrases. Once each failure case has its vector characterization, we can evaluate its similarity with other cases. Also, if we find the characterization of an individual, group, or organization in the same 5-space, we can warn about failure cases with similar characteristics that are likely to happen. The method is powerful in predicting failures so that they can be avoided before they happen.
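
The 5-space vector characterization and case-to-case similarity can be illustrated as follows (the phrase names and vector components below are hypothetical, chosen only to show the mechanics, not taken from the authors' phrase hierarchy):

```python
import numpy as np

# Hypothetical 5-space characterization of failure-cause phrases:
# components are (knowledge, carefulness, judgment, organization, nature).
PHRASES = {
    "insufficient knowledge": np.array([1.0, 0.0, 0.0, 0.0, 0.0]),
    "carelessness":           np.array([0.0, 1.0, 0.0, 0.0, 0.0]),
    "misjudgment":            np.array([0.0, 0.2, 1.0, 0.0, 0.0]),
    "poor management":        np.array([0.0, 0.0, 0.0, 1.0, 0.0]),
}

def characterize(scenario):
    """A failure case is a linear combination (here a plain sum) of the
    vectors of the failure-cause phrases appearing in its scenario."""
    return np.sum([PHRASES[p] for p in scenario], axis=0)

def similarity(case_a, case_b):
    """Cosine similarity between two failure cases in the 5-space."""
    a, b = characterize(case_a), characterize(case_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s_close = similarity(["carelessness", "misjudgment"], ["misjudgment"])
s_far = similarity(["insufficient knowledge"], ["poor management"])
```

Cases sharing cause components score high and can be surfaced as warnings, while cases with orthogonal characterizations score zero.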

Topics: Failure
Commentary by Dr. Valentin Fuster
2003;():843-852. doi:10.1115/DETC2003/DAC-48790.

This paper presents a design methodology for the thermal design and packaging of hybrid electronic-mechanical products. In this work, tight integration between ECAD and MCAD was achieved through the use of a web-based tool used in managing the concurrent designs, called the Domain Unified CAD Environment (DUCADE). This work also reduced the amount of time required for thermal simulation by using a web-based Design of Experiment Testbed (DOET) to systematically determine the effects of varying system parameters before full-scale computational fluid dynamics (CFD) thermal modeling was performed. The design process began with the proper selection of materials, manufacturing processes, and cooling methods, based on the electrical and integrated circuit design. DUCADE was then set up to monitor couplings between the various domains. This was followed by computer-aided design and computer-aided engineering of the mechanical package. In computer-aided engineering, DOET was first used to determine variables that had a significant effect on the thermal system response. Detailed CFD thermal simulations were then carried out in FLOTHERM, focusing only on variables that the DOET determined to have a strong effect. Rapid prototypes were fabricated to refine the design before final production. Each step of the cycle was tested and demonstrated through a case study on the design of the Berkeley Emulation Engine (BEE), which involved multi-disciplinary electrical, mechanical, and thermal design.

Commentary by Dr. Valentin Fuster
2003;():853-860. doi:10.1115/DETC2003/DAC-48792.

A practical modeling method for predicting vibration characteristics of turbine generator stator frames was developed. The structural parts that compose a stator frame were categorized into three groups: parts that affect ring-mode vibration as mass, parts that affect it as stiffness, and parts that affect it as both. A proper boundary condition, the value of the modal damping ratio, and an accurate representation of the exciting forces were examined. The modeling method was then applied to another turbine generator. It was found that the predicted natural frequency agrees with the measured one to within a 3% error. Based on the modeling method, a vibration analysis system for design was developed.

Commentary by Dr. Valentin Fuster
2003;():861-865. doi:10.1115/DETC2003/DAC-48793.

Poor interfacial properties between reinforcement fibers and a Polymethylmethacrylate (PMMA) matrix may result in debonding between them, which is a major failure mechanism for fiber reinforced bone cement. Optimization of the shape of the fibers can improve load transfer between the fibers and the PMMA matrix, thereby providing maximum overall strength performance. This paper presents a procedure for structural shape optimization of short reinforcement fibers using finite element analyses. The composite is modeled by a representative element composed of a single short fiber embedded in the PMMA matrix. In contrast to most previous work on this subject, which assumes a perfect bond, contact elements are employed between the fiber and the matrix to model a low-strength interface. Residual stress, due to matrix cure shrinkage and/or thermal stresses, is also included in the model. The design objective is to improve the stiffness of the composite. The results presented show that a short fiber with a threaded end produces mechanical interlock between the fibers and the PMMA matrix, which helps to bridge matrix cracks effectively and improve the stiffness of the composite.

Commentary by Dr. Valentin Fuster
2003;():867-878. doi:10.1115/DETC2003/DAC-48794.

Reasoning about relationships among design constraints can facilitate objective and effective decision making at various stages of engineering design. Exploiting dominance among constraints is one particularly strong approach to simplifying design problems and to focusing designers’ attention on critical design issues. Three distinct approaches to constraint dominance identification have been reported in the literature. We lay down the basic principles of these approaches with simple examples and we apply these methods to a practical linear electric actuator design problem. The identification of dominance along with the use of Interval Propagation and Monotonicity Analysis leads to an optimal solution for a particular design configuration of the linear actuator. Identification of dominance also provides insight into the design of linear actuators, which may lead to effective decisions at the conceptual stage of the design.

Topics: Linear motors , Design
Commentary by Dr. Valentin Fuster
2003;():879-889. doi:10.1115/DETC2003/DAC-48795.

A continuum-based shape and configuration design sensitivity analysis method for a finite deformation elastoplastic shell structure with frictionless contact has been developed. Shell elastoplasticity is treated based on the projection method that performs the return mapping on the subspace defined by the zero-normal-stress condition. An incrementally objective integration scheme is used in the context of finite deformation shell analysis, wherein stress objectivity is preserved for finite rotation increments. The penalty regularization method is used to approximate the contact variational inequality. The material derivative concept is used to develop continuum-based design sensitivity. The design sensitivity equation is solved without iteration at each converged load step. Numerical implementation of the proposed shape and configuration design sensitivity analysis is carried out using the meshfree method. The accuracy and efficiency of the proposed method are illustrated using numerical examples.

Commentary by Dr. Valentin Fuster
2003;():891-899. doi:10.1115/DETC2003/DAC-48796.

When a complex electromechanical system fails, the troubleshooting procedure adopted is often complex and tedious. No standard methods currently exist to optimize the sequence of steps in a troubleshooting process. The ad hoc methods generally followed are less than optimal and can result in high maintenance costs. This paper describes the use of behavioral models and multistage decision-making models in Bayesian networks for representing the troubleshooting process. It discusses the advantages of using these methods and the difficulties in implementing them. An approximate method to obtain an optimal decision sequence for a troubleshooting process on a complex electromechanical system is also described.

Commentary by Dr. Valentin Fuster
2003;():901-906. doi:10.1115/DETC2003/DAC-48797.

We present an optimization method for shock absorbers that analyzes the effect of mass variation of an impacting body on the response of a system including the shock absorber, such as landing gears of aircraft, elevators, and coupling devices for railroad cars. The system including the optimum semi-active shock absorber is compared with systems including two kinds of optimum passive shock absorbers with regard to the variation of the acceleration of the impacting body. The maximum among the maximum accelerations of the different masses of the impacting body, and each of those maximum accelerations, are set as objective functions to be minimized, respectively. The design variables of these optimizations are the reciprocals of the resisting coefficients of the shock absorber. As a result of the optimizations, it is clarified that the optimum semi-active shock absorber can cope with the mass variation of the impacting body better than the optimum passive shock absorbers.

Commentary by Dr. Valentin Fuster
2003;():907-915. doi:10.1115/DETC2003/DAC-48798.

Global optimization of mechanical design problems using heuristic methods such as simulated annealing (SA) and genetic algorithms (GAs) has been able to find global or near-global minima where prior methods have failed. The use of these nongradient-based methods allows the broad, efficient exploration of multimodal design spaces that may be continuous, discrete, or mixed. From a survey of articles in the ASME Journal of Mechanical Design over the last 10 years, we have observed that researchers typically run these algorithms in continuous mode for problems that contain continuous design variables. What we suggest in this paper is that computational efficiency can be significantly increased by discretizing all continuous variables, performing a global optimization on the discretized design space, and then conducting a local search in the continuous space from the global minimum discrete state. The level of discretization depends on the complexity of the problem and becomes an additional parameter that needs to be tuned. The rationale behind this assertion is presented, along with results from four test problems.
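The suggested two-stage strategy can be sketched in a few lines. The following illustration is not from the paper: exhaustive grid search stands in for the heuristic global optimizer (SA or a GA), a shrinking-step pattern search stands in for the local continuous refinement, and the multimodal objective is invented:

```python
import math

def f(x):
    # Invented multimodal test function standing in for a design objective
    return (x - 3.0) ** 2 + 3.0 * math.sin(5.0 * x)

# Stage 1: global search over a discretized design space (step 0.25 on [-10, 10]).
# A real implementation would use SA or a GA here; exhaustive search plays the
# same role for this one-variable sketch.
grid = [-10.0 + 0.25 * i for i in range(81)]
x0 = min(grid, key=f)

# Stage 2: local continuous refinement from the best discrete state,
# using a simple shrinking-step pattern search.
x, step = x0, 0.25
while step > 1e-8:
    moved = False
    for cand in (x - step, x + step):
        if f(cand) < f(x):
            x, moved = cand, True
    if not moved:
        step *= 0.5

print(x, f(x))
```

The grid spacing (0.25 here) is the discretization level the abstract identifies as a tuning parameter: coarser grids cheapen stage 1 but risk starting stage 2 in the wrong basin.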

Commentary by Dr. Valentin Fuster
2003;():917-925. doi:10.1115/DETC2003/DAC-48799.

Optimal packaging problems occur in many industries and lend themselves to computer-based approaches, which provide fast and effective solutions. Unlike packaging in other commercial industries, turret packaging has been done mainly by hand, because positioning equipment within a crewed turret is hard to automate. However, the recent development of automated ammunition loading systems makes a crew unnecessary in the turret, which makes it possible to automate the packaging process. This paper introduces a computational methodology that uses genetic algorithms as the search algorithm and solves tank turret packaging problems by integrating part placement with performance analysis of the design requirements.

Topics: Packaging
Commentary by Dr. Valentin Fuster
2003;():927-934. doi:10.1115/DETC2003/DAC-48800.

In some cases of developing a new product, the response surface of an objective function is not always single-peaked; it is often multi-peaked. In that case, designers would like to have not only the global optimum solution but also as many local optimum solutions and/or quasi-optimum solutions as possible, so that they can select one of them considering other conditions that were not taken into account prior to optimization. Although this information is quite useful, it is not easy to obtain with a single trial of optimization. In this study, we propose screening the fitness function in genetic algorithms (GAs), which changes the fitness function during the search. The GA therefore needs higher flexibility in searching. Genetic Range Genetic Algorithms include a number of searching ranges in a single generation, just as there are a number of species in the wild. They can therefore maintain both a global searching range and local searching ranges with different fitness functions. In this paper, we demonstrate the effectiveness of the proposed method through simple benchmark test problems.

Commentary by Dr. Valentin Fuster
2003;():935-944. doi:10.1115/DETC2003/DAC-48801.

In this paper, a global optimization technique based on the Adaptive Response Surface Method (ARSM) is integrated with a Control Volume Finite Element Method (CVFEM) for thermofluid optimization. The objective of the optimization is to improve the thermal effectiveness of an aircraft de-icing strategy by re-designing the cooling bay surface shape. By optimizing an objective function defined in terms of the de-icing strategy and the shape of the intake scoop, the best performance of the helicopter engine is achieved. This design problem is implemented on two different physical models. One model involves a heat conduction finite element analysis (FEA) process, and the other combines the heat conduction and potential fluid flow FEA processes. Based on the comparison between the ARSM-predicted results and the plotted objective function, it is observed that the integrated technique provides an effective method for thermofluid optimization. It also shows that the ARSM has good flexibility to work with a computationally intensive process, e.g., CVFEM, and, owing to its open structure, could potentially be developed and applied to multidisciplinary design optimization (MDO).

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2003;():945-950. doi:10.1115/DETC2003/DAC-48802.

In design problems, designers have to decide many properties of products to satisfy requirements from users or the market. The designers also have to consider the environment in which the products are used, and this environment is often unpredictable or difficult to determine in detail. Optimization techniques are useful for supporting the designers in deciding the properties of the product. However, before optimization techniques can be applied, mathematical models must be formulated, and it is difficult to formulate all properties of products, for example the preferences of the customer. In this situation, it is useful to derive several solutions that have variety or diversity in the values of the design variables or objective functions. In this paper, a new method to derive several such solutions using immune algorithms is described. The proposed method includes an interaction mechanism between the design parameters and the environment parameters. Through numerical examples of structural design problems and job-shop scheduling problems, its effectiveness is confirmed.

Topics: Algorithms , Design
Commentary by Dr. Valentin Fuster
2003;():951-960. doi:10.1115/DETC2003/DAC-48803.

Design optimization is becoming an increasingly important tool for design. In order to have an impact on the product development process, it must permeate all levels of the design in such a way that a holistic view is maintained through all stages of the design. One important area is optimization based on simulation, which generally requires non-gradient methods; as a consequence, direct-search methods are a natural choice. The idea in this paper is to apply the design optimization approach to the optimization algorithm itself in order to produce an efficient and robust optimization algorithm. The result is a single performance index that measures the effectiveness of an optimization algorithm, and the COMPLEX-RF optimization algorithm with optimized parameters.

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
2003;():961-967. doi:10.1115/DETC2003/DAC-48804.

A global optimization method for continuous design variables, called the Generalized Random Tunneling Algorithm, is proposed. The method is called “generalized” because it can treat behavior constraints as well as side constraints. It consists of three phases: the minimization phase, the tunneling phase, and the constraint phase. In the minimization phase, mathematical programming is used, while heuristic approaches are introduced in the tunneling and constraint phases. By iterating these phases, the global minimum is found. The proposed method thus combines the characteristics of mathematical programming and heuristic approaches. A global minimum that lies on the boundary of the constraints is easily found by the proposed method. The method is applied to mathematical and structural optimization problems, and its effectiveness and validity are confirmed through numerical examples.
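A drastically simplified sketch of the minimize/tunnel alternation is given below. It is not the authors' algorithm: the tunneling phase is replaced by random restarts (keeping the best local minimum found), constraints are omitted, and both the descent routine and the one-variable multimodal objective are invented for illustration:

```python
import math
import random

random.seed(42)

def f(x):
    # Invented multimodal one-variable objective
    return x * x + 4.0 * math.cos(3.0 * x)

def local_minimize(x, step=0.1):
    # Minimization phase: simple greedy descent standing in for
    # mathematical programming
    while step > 1e-9:
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            step *= 0.5
    return x

# Alternate local minimization with random jumps; the best level found so far
# plays the role of the tunneling level
best = local_minimize(random.uniform(-4.0, 4.0))
for _ in range(30):
    cand = local_minimize(random.uniform(-4.0, 4.0))
    if f(cand) < f(best):
        best = cand

print(best, f(best))
```

A faithful implementation would replace the random restarts with a tunneling function that seeks points at or below the current minimum level before descending again.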

Commentary by Dr. Valentin Fuster
2003;():969-974. doi:10.1115/DETC2003/DAC-48805.

Application of the compliant design methodology to manipulators has held the promise of delivering manipulators with many significant advantages, including low cost, small size, low backlash and friction, and high positioning accuracy. This approach was demonstrated in part by Canfield et al. [1] for a class of three-degree-of-freedom manipulators based on a specific parallel architecture topology. In [1], the authors’ intent was to develop two compliant manipulators that exhibit several of the features associated with compliant devices. However, upon review of the manipulators resulting from that work, it is observed that many of the expected benefits were lost at some point in the design process, resulting in manipulators that were large and expensive and that suffered significantly from required assembly and manufacturing inaccuracies. This paper revisits the problem addressed in [1], using the modeling tools demonstrated in that paper, but presents several improved development measures that result in manipulators exhibiting multiple features promised by compliant devices. The resulting manipulators are then compared against the manipulators from [1], with a summary of the performance and characteristics of each given and evaluated.

Topics: Cycles , Manipulators
Commentary by Dr. Valentin Fuster
2003;():975-985. doi:10.1115/DETC2003/DAC-48806.

Discrete parameterization using full or partial ground structures of truss/frame elements is not appropriate for domain representation, as such structures do not map all points in the continuum and can lead to dangling or overlapping elements in the optimal topology. Existing continuum parameterizations using unit cells with holes, ranked microstructures, or a penalized Young’s modulus (the SIMP model) mainly suffer from problems such as the appearance of checkerboard patterns and stiffness singularity regions. This is probably due to point contact between diagonally placed cells, and such regions can be avoided by using higher-order elements, perimeter constraints, or filtering schemes, which impose additional computational load on the optimization procedures. Edge connectivity throughout is ensured when using a honeycomb representation with staggered regular hexagonal cells. In this paper, such a parameterization is employed for topology synthesis of compliant mechanisms with flexibility-stiffness and flexibility-strength multi-criteria formulations. The material connectivity is well-defined, and checkerboard and zero-stiffness singularities are not seen in the numerous examples solved with the honeycomb parameterization.

Commentary by Dr. Valentin Fuster
2003;():987-998. doi:10.1115/DETC2003/DAC-48807.

In this paper, meso- and micro-scale electro-thermally compliant electromechanical systems are synthesized for strength with polysilicon as the structural material. Local temperature and/or stress constraints are imposed in the topology optimization formulation. This is done to keep the topology thermally intact and also to keep local stresses below their allowable limit. Relaxation performed on both temperature and stress constraints allows them to be ignored when the material densities approach their non-existing states. Noting that the number of local constraints can become large with the number of cells, an active constraint strategy is employed. Honeycomb parameterization, which is a staggered arrangement of hexagonal cells, is used to represent the design region. This ensures at least a common edge between any two neighboring cells and thus avoids the appearance of both checkerboard and zero-stiffness singularities without any additional computational load.

Commentary by Dr. Valentin Fuster
2003;():999-1007. doi:10.1115/DETC2003/DAC-48808.

A new type of compliant approximate dwell mechanism design is introduced. This new compliant mechanism includes an initially straight flexible beam and a flexible arc. Approximate dwell motion is obtained by incorporating the snap-through buckling motion of the flexible arc. Load-deflection curves of flexible mechanism components are modeled by fitting polynomials to the analytical nonlinear large deflection response. The kinematic synthesis of the compliant mechanism is done quasi-statically using loop closure theory and large deflection relations of flexible parts.

Topics: Buckling , Mechanisms
Commentary by Dr. Valentin Fuster
2003;():1009-1018. doi:10.1115/DETC2003/DAC-48809.

The use of Coulomb’s friction law with the principles of classical rigid body dynamics introduces mathematical inconsistencies. Specifically, the forward dynamics problem can have no solutions or multiple solutions. In these situations, an explicit model of the contact compliance at the contact point can resolve these difficulties. In this paper, we introduce a distributed compliant model for dynamic simulation. In contrast to the rigid body model and the lumped model, our approach models each contact as a finite patch and uses a half-space approximation to derive solutions for the small deformations and force distributions in the contact patch. This approach leads to a linear complementarity problem formulation for the contact dynamics. The existence of a unique solution can be proved for both the lumped model in the point contact case and the more accurate distributed model. Simulation algorithms that incorporate compliant contact models and linear complementarity theory are created and demonstrated through numerical examples.
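For a single frictionless contact, the linear complementarity condition reduces to a one-dimensional problem that can be solved in closed form. This sketch is a minimal illustration, not the paper's distributed patch model; it assumes a contact law w = a + b*l, with w the post-impulse separation rate, l the contact impulse, and invented numerical values:

```python
def solve_lcp_1d(a, b):
    """Solve w = a + b*l with w >= 0, l >= 0, w*l = 0 for one contact.
    l is the contact impulse, w the resulting separation rate; b > 0 is
    the effective inverse inertia at the contact."""
    if a >= 0.0:
        return 0.0      # bodies already separating: no contact force
    return -a / b       # contact active: impulse exactly closes the gap

# Body approaching the ground at 2 m/s (a = -2) with effective
# inverse inertia b = 2.0 (illustrative values)
impulse = solve_lcp_1d(-2.0, 2.0)
print(impulse)
```

The distributed model in the paper couples many such conditions, one per point of the contact patch, into a single matrix LCP, which is what guarantees existence and uniqueness results carry over.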

Topics: Simulation
Commentary by Dr. Valentin Fuster
2003;():1019-1024. doi:10.1115/DETC2003/DAC-48810.

A dimensional synthesis procedure to achieve prescribed roll center height variation of a vehicle’s sprung mass with respect to wheel jounce-rebound is presented. This may be used to size the relative lengths of the control arms of a short-long arm suspension mechanism in order to (i) fix the roll center with respect to ground, or (ii) fix the roll center relative to the sprung mass, or (iii) have the roll center move at a prescribed rate relative to the sprung mass, during wheel jounce-rebound. These design selections have a significant impact on the ride-handling characteristics of a vehicle. Numerical examples are provided to demonstrate the synthesis procedure.

Commentary by Dr. Valentin Fuster
2003;():1025-1032. doi:10.1115/DETC2003/DAC-48811.

In this paper we introduce a robust algorithm to solve the five-attitude spherical Burmester problem associated with exact linkage synthesis. The proposed algorithm solves for the unit vectors determining the four joint centers of the linkage while taking into account all redundant equations available, which enhances the robustness of the algorithm. In order to show the applicability of the proposed algorithm and to validate its robustness, two examples are included.

Commentary by Dr. Valentin Fuster
2003;():1033-1039. doi:10.1115/DETC2003/DAC-48812.

This paper presents a design for a reconfigurable packaging system that can handle cartons of different shapes and sizes and is amenable to the ever-changing demands of the packaging industry for perfumery and cosmetic products. The system takes the structure of a multi-fingered robot hand, which can provide the fine motions and dexterous manipulation capability that may be required in a typical packaging-assembly line. The paper outlines the advanced modeling and simulation undertaken to design the packaging system and discusses the experimental work carried out. The new packaging system is based on the principle of reconfigurability and shows adaptability to simple as well as complex carton geometry. The rationale for developing such a system is presented, with a description of its human equivalent. The hardware and software implementations are also discussed, together with directions for future research.

Topics: Design , Packaging
Commentary by Dr. Valentin Fuster
2003;():1041-1047. doi:10.1115/DETC2003/DAC-48813.

This paper examines the geometric design of the five-degree-of-freedom RPS serial chain. This constrained robot can be designed to reach an arbitrary set of ten spatial positions. It is often convenient to consider tasks with fewer positions, and here we study the cases of seven- through ten-position synthesis. A generalized eigenvalue elimination technique yields analytical solutions for the seven- and eight-position cases, while the nine- and ten-position cases are solved numerically using homotopy continuation. A numerical example is provided for an eight-position task.

Topics: Chain
Commentary by Dr. Valentin Fuster
2003;():1049-1057. doi:10.1115/DETC2003/DAC-48814.

In this paper we present a novel dyad dimensional synthesis technique for approximate motion synthesis. The methodology utilizes an analytic representation of the dyad’s constraint manifold that is parameterized by its dimensional synthesis variables. Nonlinear optimization techniques are then employed to minimize the distance from the dyad’s constraint manifold to a finite number of desired locations of the workpiece. The result is an approximate motion dimensional synthesis technique that is applicable to planar, spherical, and spatial dyads. Here, we specifically address the planar RR, spherical RR and spatial CC dyads since these are often found in the kinematic structure of robotic systems and mechanisms. These dyads may be combined serially to form a complex open chain (e.g. a robot) or when connected back to the fixed link they may be joined so as to form one or more closed chains (e.g. a linkage, a parallel mechanism, or a platform). Finally, we present some initial numerical design case studies that demonstrate the utility of the synthesis technique.
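For the planar RR dyad, the constraint manifold of the moving pivot is a circle about the fixed pivot, so approximate dimensional synthesis amounts to fitting a circle to the desired workpiece-point locations. A minimal sketch (an algebraic Kasa least-squares fit on invented data, not the authors' constraint-manifold optimization):

```python
import numpy as np

# Task positions the workpiece point should approximately reach
# (illustrative data lying near a circle of radius 2 centered at (1, 1))
pts = np.array([[3.0, 1.0], [1.0, 3.0], [-1.0, 1.1], [1.1, -1.0], [2.4, 2.45]])

# Algebraic (Kasa) least-squares circle fit: solve
#   x^2 + y^2 = 2*cx*x + 2*cy*y + c   with c = r^2 - cx^2 - cy^2
A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
b = (pts ** 2).sum(axis=1)
(cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
r = np.sqrt(c + cx ** 2 + cy ** 2)

print(cx, cy, r)  # fixed pivot location and link length
```

The paper's formulation minimizes a true geometric distance to the constraint manifold with nonlinear optimization, and extends to spherical RR and spatial CC dyads; the algebraic fit above only conveys the underlying idea in the simplest planar case.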

Topics: Motion , Chain , Fittings , Manifolds
Commentary by Dr. Valentin Fuster
2003;():1059-1067. doi:10.1115/DETC2003/DAC-48815.

Synthesizing a motion generating 3-jointed planar chain, under no additional constraints, is trivial. Given a set of desired planar rigid body positions, one can select via straightforward geometric considerations the locations of the revolute (R) joints and prismatic (P) joints of a chain that will reach the positions. On the other hand, specifying constraints on joint limitations or physical parameters may result in no chains that reach the desired positions. In this paper, we study a rigid body in a set of positions in order to determine the point on the body that lies nearest a point, circle or line. Note that the point, circle or line is unknown and is determined as part of the process. The set of points formed by the rigid body point in all of its positions defines a workspace for the outermost moving pivot of the chain. By fitting a generic RPR, PRR or RRR chain’s workspace to these points, we can suggest nearly minimal joint constraints and physical parameters.

Topics: Motion , Chain
Commentary by Dr. Valentin Fuster
2003;():1069-1077. doi:10.1115/DETC2003/DAC-48816.

This paper considers the design of the cylindric PRS serial chain. This five degree-of-freedom robot can be designed to reach an arbitrary set of eight spatial positions. However, it is often convenient to choose some of the design parameters and specify a task with fewer positions. For this reason, we study the three through eight position synthesis problems and consider various choices of design parameters for each. A linear product decomposition is used to obtain bounds on the number of solutions to these design problems. For all cases of six or fewer positions, the bound is exact and we give a reduction of the problem to the solution of an eigenvalue problem. For seven and eight position tasks, the linear product decomposition is useful for generating a start system for solving the problems by continuation. The large number of solutions so obtained contraindicates an elimination approach for seven or eight position tasks, hence continuation is the preferred approach.

Topics: Chain , Design
Commentary by Dr. Valentin Fuster
2003;():1079-1085. doi:10.1115/DETC2003/DAC-48817.

The effect of weights on curves and surface design is a well-researched topic in Computer Aided Geometric Design (CAGD). However, the influence of weights in the realm of rational motion approximation and interpolation has been largely unexplored. In this paper, we present a thorough mathematical exposition on the influence of weights on rational motion design. This leads naturally to providing the motion designer with guidelines on how to use weights for rational motion design. Several examples are presented towards the end.
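The effect of weights is easiest to see on a rational Bezier curve, the CAGD setting the abstract refers to before extending the idea to rational motions. In this sketch (illustrative control points and weights, not from the paper), raising the middle weight pulls the curve point at t = 0.5 toward the middle control point:

```python
def rational_bezier(t, pts, w):
    """Evaluate a planar quadratic rational Bezier curve at parameter t."""
    basis = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]   # Bernstein basis
    den = sum(bi * wi for bi, wi in zip(basis, w))
    return tuple(
        sum(bi * wi * p[k] for bi, wi, p in zip(basis, w, pts)) / den
        for k in range(2)
    )

pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
low = rational_bezier(0.5, pts, [1.0, 1.0, 1.0])   # uniform weights
high = rational_bezier(0.5, pts, [1.0, 4.0, 1.0])  # middle weight raised
print(low, high)
```

In rational motion design the same mechanism acts on the interpolated positions and orientations, which is why the guidelines in the paper concern how weights redistribute the motion toward selected key positions.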

Topics: Motion , Design
Commentary by Dr. Valentin Fuster
2003;():1087-1094. doi:10.1115/DETC2003/DAC-48818.

This paper presents the kinematic synthesis of a CRR serial chain. This is a four-degree-of-freedom chain constructed from a cylindric joint and two revolute joints in series. The design equations for this chain are obtained from the dual quaternion kinematics equations evaluated at a specified set of task positions. In this case, we find that the chain is completely defined by seven task positions. Furthermore, our solution of these equations has yielded 52 candidate designs, so far; there may be more. This synthesis methodology shows promise for the design of constrained serial chains.

Topics: Chain
Commentary by Dr. Valentin Fuster
2003;():1095-1106. doi:10.1115/DETC2003/DAC-48819.

The identification of principal twists of the end-effector of a manipulator undergoing multi-degree-of-freedom motion is considered to be one of the central problems in kinematics. In this paper, we use dual velocity vectors to parameterize se(3), the space of twists, and define an inner product of two dual velocities as a dual number analog of a Riemannian metric on SE(3). We show that the principal twists can be obtained from the solution of an eigenvalue problem associated with this dual metric. It is shown that the computation of principal twists for any degree-of-freedom (DoF) of rigid-body motion requires the solution of at most a cubic dual characteristic equation. Furthermore, the special nature of the coefficients yields simple analytical expressions for the roots of the dual cubic, and this in turn leads to compact analytical expressions for the principal twists. We also show that the method of computation allows us to separately identify the rotational and translational degrees-of-freedom lost or gained at singular configurations. The theory is applicable to serial, parallel, and hybrid manipulators, and is illustrated by obtaining the principal twists and singular directions for a 3-DoF parallel, and a hybrid 6-DoF manipulator.
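Dual velocities pair an angular (real) part with a translational (dual) part using a dual unit eps satisfying eps**2 = 0. A minimal sketch of dual arithmetic and the dual-number inner product the abstract refers to (the numerical values are invented for illustration, not the paper's examples):

```python
class Dual:
    """Dual number a + eps*b with eps**2 = 0, as used to pair the angular
    (real) and translational (dual) parts of a dual velocity component."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a1 + eps*b1)(a2 + eps*b2) = a1*a2 + eps*(a1*b2 + b1*a2)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def __repr__(self):
        return f"{self.a} + {self.b}eps"

def dual_dot(u, v):
    # Dual-number analog of an inner product of two dual velocity vectors
    s = Dual(0.0, 0.0)
    for x, y in zip(u, v):
        s = s + x * y
    return s

# An illustrative dual velocity (real parts: angular, dual parts: linear)
u = [Dual(1.0, 0.5), Dual(0.0, 1.0), Dual(2.0, -1.0)]
print(dual_dot(u, u))
```

The dual eigenvalue problem in the paper is built from exactly such inner products; its characteristic equation is then a polynomial over these dual numbers.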

Topics: Manipulators
Commentary by Dr. Valentin Fuster
2003;():1107-1115. doi:10.1115/DETC2003/DAC-48820.

This paper presents a simple but effective type synthesis method, based on screw theory, for spatial parallel mechanisms with three translational degrees of freedom. First, all possible connecting-chain structures of three-DOF parallel mechanisms are enumerated. According to the reciprocal relationship between screw constraint forces and the motion screw, a novel synthesis method is presented. Using this method, type synthesis for three-DOF translational parallel mechanisms has been carried out in a systematic and detailed way. As a result, some novel parallel mechanisms generating spatial translation have been obtained. To verify the significance of type synthesis for this kind of mechanism, the paper also gives a concrete application instance: a micromanipulator for manipulating bio-cells.

Commentary by Dr. Valentin Fuster
2003;():1117-1123. doi:10.1115/DETC2003/DAC-48821.

The instantaneous forward problem (IFP) singularities of a parallel manipulator (PM) must be determined during the manipulator design and avoided during the manipulator operation, because they are configurations where the end-effector pose (position and orientation) cannot be controlled by acting on the actuators any longer, and the internal loads of some links become infinite. When the actuators are locked, PMs become structures consisting of one rigid body (platform) connected to another rigid body (base) by means of a number of kinematic chains (limbs). The geometries (singular geometries) of these structures where the platform can perform infinitesimal motion correspond to the IFP singularities of the PMs the structures derive from. This paper studies the singular geometries both of the PS-2RS structure and of the 2PS-RS structure. In particular, the singularity conditions of the two structures will be determined. Moreover, the geometric interpretation of their singularity conditions will be provided. Finally, the use of the obtained results in the design of parallel manipulators which become either PS-2RS or 2PS-RS structures, when the actuators are locked, will be illustrated.

Commentary by Dr. Valentin Fuster
2003;():1125-1133. doi:10.1115/DETC2003/DAC-48822.

The closed-loop structure of a parallel robot results in complex kinematic singularities in the workspace. Singularity analysis therefore becomes important in the design, motion planning, and control of parallel robots. The traditional method of determining singular configurations is to find where the determinant of the Jacobian matrix vanishes. However, the Jacobian matrix of a parallel manipulator is complex in general, and thus it is not easy to evaluate its determinant. In this paper, we focus on the singularity analysis of a novel 4-DOF parallel robot, H4, based on screw theory. Two types of singularities, i.e., the forward and inverse singularities, have been identified.
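The determinant test mentioned above can be sketched numerically. This is a generic illustration only — it is not the H4's actual Jacobian (which the paper analyzes via screw theory); in floating point the smallest singular value is a more robust rank indicator than the raw determinant:

```python
import numpy as np

def is_singular(jacobian, tol=1e-9):
    """Flag a configuration as singular when the Jacobian loses rank.

    The smallest singular value (rather than the raw determinant)
    is the numerically robust indicator of rank deficiency.
    """
    return np.linalg.svd(jacobian, compute_uv=False)[-1] < tol

# A rank-deficient 2x2 Jacobian (rows linearly dependent) vs. a regular one.
J_sing = np.array([[1.0, 2.0],
                   [2.0, 4.0]])
J_reg = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
print(is_singular(J_sing))  # True
print(is_singular(J_reg))   # False
```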

Topics: Robots , Screws
Commentary by Dr. Valentin Fuster
2003;():1135-1142. doi:10.1115/DETC2003/DAC-48823.

We present hardware results for a planar, translational cable-direct-driven robot (CDDR). The motivation behind this work is to present kinematics and statics modeling of the CDDR, along with a method to maintain positive cable tension, and to implement them on CDDR hardware for experimental verification. Only a translational CDDR is considered in this article; zero orientation is maintained by control. Gravity is ignored because the end-effector is supported on a base plate with negligible friction. Results are presented and analyzed for two linear profiles and one circular profile.
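Positive cable tension is the key statics constraint for any CDDR: cables can only pull. The following sketch shows one standard redundancy-resolution strategy (not necessarily the paper's own method) for a planar point-mass end-effector driven by three cables, where the one-dimensional null space of the statics matrix is used to raise all tensions above a minimum:

```python
import numpy as np

def positive_tensions(unit_dirs, wrench, t_min=1.0):
    """Solve A t = w for cable tensions t >= t_min (planar, 3 cables).

    A (2x3) maps tensions along each cable's unit direction to the net
    force on the end-effector. The pseudoinverse solution is shifted
    along the 1-D null space of A until every tension clears t_min.
    """
    A = np.asarray(unit_dirs, float)
    t_p = np.linalg.pinv(A) @ wrench        # particular solution
    _, _, Vt = np.linalg.svd(A)
    n = Vt[-1]                              # null-space basis vector
    if np.all(n <= 0):
        n = -n
    if np.any(n <= 0):
        raise ValueError("null space cannot raise all tensions")
    lam = max(0.0, np.max((t_min - t_p) / n))
    return t_p + lam * n

# Three cables at 120-degree spacing; small external force w on the effector.
dirs = np.array([[1.0, -0.5,             -0.5],
                 [0.0,  np.sqrt(3) / 2, -np.sqrt(3) / 2]])
t = positive_tensions(dirs, np.array([0.2, 0.1]))
print(np.all(t >= 1.0 - 1e-9))             # True: all cables stay taut
print(np.allclose(dirs @ t, [0.2, 0.1]))   # True: statics satisfied
```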

Topics: Robots , Cables , Hardware
Commentary by Dr. Valentin Fuster
2003;():1143-1147. doi:10.1115/DETC2003/DAC-48824.

In this paper, an NC interpolation algorithm for a tripod-based parallel kinematic machine is investigated. The algorithm is implemented in two steps: rough interpolation in the Cartesian space and precise interpolation in the actuator space. The upper bound of the theoretical interpolation error due to the precise-interpolation algorithm and the nonlinear mapping is analyzed. The distribution of the interpolation error within the Cartesian space is depicted in terms of variations of the interpolation period and the programmed velocity. It is concluded that this error is sufficiently small and may be neglected.

Commentary by Dr. Valentin Fuster
2003;():1149-1158. doi:10.1115/DETC2003/DAC-48825.

Selecting a configuration for a machine tool that will best suit a forecast set of requirements can be a difficult and costly exercise. This problem can now be addressed using an integrated virtual validation system. The system includes kinematic/dynamic analysis, a kinetostatic model, CAD, FEM, CAM, and optimization modules, and a visual environment for simulation and collision detection during machining and deburring. It integrates parallel kinematic machine (PKM) design, analysis, optimization, and simulation. In this paper, the integrated virtual system is described in detail, and a prototype 3-dof PKM is modeled, analyzed, optimized, and remotely controlled with the proposed system. Some results and simulations are also given. The system's effectiveness is shown with results obtained by NRC-IMTI during the design of the 3-dof NRC PKM.

Topics: Machinery , Design
Commentary by Dr. Valentin Fuster
2003;():1159-1164. doi:10.1115/DETC2003/DAC-48826.

In this paper, a workspace-oriented optimization method based on the mechanics of parallel robots is applied to the design of a 6-HTRT parallel robot. By analyzing the characteristics of the specified workspace and setting up objective functions, the design of the parallel robot is optimized. As a result of the optimized design, the parallel robot not only achieves the minimum overall structural size, but also has a workspace unrestricted by the limiting cone angles of the Hooke joints. The factors restricting the workspace of the 6-HTRT parallel robot are thus reduced, the motion-control algorithm is simplified, and the performance of the parallel mechanism is improved.

Topics: Robots , Design , Optimization
Commentary by Dr. Valentin Fuster
2003;():1165-1174. doi:10.1115/DETC2003/DAC-48827.

A novel cable-based metrology system is presented wherein six cables are connected in parallel from ground-mounted string pots to the moving object of interest. Cartesian pose can be determined for feedback control and other purposes by reading the lengths of the six cables via the string pots and using closed-form forward pose kinematics. This paper focuses on a sculpting metrology tool, assisting a human artist in generating a piece from a computer model, but applications exist in manufacturing, rapid prototyping, robotics, and automated construction. The proposed real-time cable-based metrology system is less complex and more economical than existing commercial Cartesian metrology technologies.
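The closed-form forward pose kinematics of such a cable system reduces, for the translational part, to multilateration from known anchor points. The sketch below illustrates that core step only (the paper's six-cable system also recovers orientation); subtracting squared-distance equations turns the problem into a linear solve:

```python
import numpy as np

def trilaterate(anchors, lengths):
    """Recover a 3-D position from cable lengths to known anchor points.

    Subtracting the squared-distance equation of the first anchor from
    the rest yields a linear system; with four non-coplanar anchors the
    position is unique.
    """
    P = np.asarray(anchors, float)
    L = np.asarray(lengths, float)
    A = 2.0 * (P[1:] - P[0])
    b = (L[0]**2 - L[1:]**2) + (np.sum(P[1:]**2, axis=1) - np.sum(P[0]**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

anchors = [[0, 0, 0], [3, 0, 0], [0, 3, 0], [0, 0, 3]]
target = np.array([1.0, 1.0, 1.0])
lengths = [np.linalg.norm(target - np.asarray(a, float)) for a in anchors]
print(np.allclose(trilaterate(anchors, lengths), target))  # True
```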

Topics: Cables , Metrology
Commentary by Dr. Valentin Fuster
2003;():1175-1184. doi:10.1115/DETC2003/DAC-48828.

A method is developed to determine the optimal placement of smart (active) material based actuators in the structure of robot manipulators for the purpose of achieving higher operating speed and tracking precision. The method is based on evaluating the transmissibility of displacement from the integrated smart actuators to the robot manipulator joint and end-effector displacements. By studying the characteristics of the Jacobian of the mapping between the two displacements for a given position of the robot manipulator, the optimal positioning of the smart actuators that is most effective in eliminating high harmonics of the joint motion or the end-effector motion is determined. In robots with serial and parallel kinematic chains containing non-prismatic joints, due to the associated kinematic nonlinearity, if the joint motions are synthesized with low-harmonic trajectories, the end-effector trajectory will contain high harmonics of the joint motions. Alternatively, if the end-effector motion is synthesized with low-harmonic motions, due to the inverse kinematics nonlinearity, the actuated joint trajectories will contain a significant high-harmonic component. As a result, the operating speed and tracking precision are degraded. By integrating smart material based actuators in the structure of robot manipulators to provide small-amplitude, higher-frequency motions, the high-harmonic components of the actuated joint and/or end-effector motions are eliminated. As a result, higher operating speed and tracking precision can be achieved.

Commentary by Dr. Valentin Fuster
2003;():1185-1190. doi:10.1115/DETC2003/DAC-48829.

This paper introduces the XZ Micropositioning Mechanism (XZMM) that is fabricated in the x-y plane and translates components in the x-z direction using one linear input. The positioning platform of the mechanism remains parallel to the substrate throughout its motion. The XZMM has been tested and actuated using thermal actuation and achieves an out-of-plane output displacement of 41 micrometers with a 27 micrometer x-direction input.

Topics: Mechanisms
Commentary by Dr. Valentin Fuster
2003;():1191-1198. doi:10.1115/DETC2003/DAC-48830.

A purely analytical method has been developed for the kinetic (force, or kinetostatic) analysis of frictionless planar mechanisms. It employs polar notation of vectors, the principle of conservation of energy, and the force equilibrium of the links. Unlike many other methods, which lead to a system of several simultaneous equations, it leads to only one algebraic or one vectorial equation at a time and, interestingly, it is less time consuming than the conventional graphical methods. The method is general, comprehensive, and systematic, such that it could also serve as a suitable teaching technique for a manual approach to the problem. It easily lends itself to automation too.

Topics: Mechanisms
Commentary by Dr. Valentin Fuster
2003;():1199-1206. doi:10.1115/DETC2003/DAC-48831.

Analytical calculation is the basic tool of path or motion generation synthesis for more than four prescribed positions, but the process is quite complicated and far from straightforward. A novel computer simulation mechanism of a six-bar linkage for path or motion generation synthesis is presented in this paper. For the case of five precision points, using geometric constraint and dimension-driving techniques, a primary simulation mechanism of a four-bar linkage is created. Based on the different tasks of path and motion generation for kinematic dimensional synthesis, simulation mechanisms for path and motion generation with Stephenson I, II and Watt six-bar linkages are developed from the primary simulation mechanism. The results of kinematic synthesis for five prescribed positions show that the mechanism simulation approach is not only fairly quick and straightforward, but also advantageous from the viewpoint of accuracy and repeatability.

Commentary by Dr. Valentin Fuster
2003;():1207-1211. doi:10.1115/DETC2003/DAC-48832.

A special class of planar and spatial linkage mechanisms is presented in which for a continuous full rotation or continuous rocking motion of the input link, the output link undergoes two continuous rocking motions. In a special case of such mechanisms, for periodic motions of the input link with a fundamental frequency ω, the output motion is periodic but with a fundamental frequency of 2ω. In this paper, the above class of linkage mechanisms are referred to as speed-doubling linkage mechanisms. Such mechanisms can be cascaded to provide further doubling of the fundamental frequency (rocking motion) of the output motion. They can also be cascaded with other appropriate linkage mechanisms to obtain crank-rocker or crank-crank type of mechanisms. The conditions for the existence of speed-doubling linkage mechanisms are provided and their mode of operation is described in detail. Such speed-doubling mechanisms have practical applications, particularly when higher output speeds are desired, since higher output motions can be achieved with lower input speeds. Such mechanisms also appear to generally have force transmission and dynamics advantages over regular mechanisms designed to achieve similar output speeds.

Topics: Linkages , Mechanisms
Commentary by Dr. Valentin Fuster
2003;():1213-1219. doi:10.1115/DETC2003/DAC-48833.

The paper begins with a graphical technique to locate the pole; i.e., the point in the plane of motion which is coincident with the instantaneous center of zero velocity of the coupler link. Since the single flier linkage is indeterminate, the Aronhold-Kennedy theorem cannot locate this instantaneous center of zero velocity. The technique that is presented here is believed to be an original contribution to the kinematics literature and will provide geometric insight into the velocity analysis of an indeterminate linkage. The paper then presents an analytical method, referred to as the method of kinematic coefficients, to determine the radius of curvature and the center of curvature of the path traced by an arbitrary coupler point of the single flier eight-bar linkage. This method has proved useful in curvature theory since it separates the geometric effects of the linkage from the operating speed of the linkage.
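The separation of geometry from operating speed that kinematic coefficients provide can be illustrated with the standard curvature formula. This is a generic sketch (not the paper's eight-bar derivation): given the first and second kinematic coefficients of a coupler point — derivatives with respect to the input angle, not time — the radius of curvature of its path follows directly:

```python
import numpy as np

def path_curvature(xp, yp, xpp, ypp):
    """Signed radius of curvature of a coupler-point path.

    Inputs are the first (xp, yp) and second (xpp, ypp) kinematic
    coefficients, i.e. derivatives with respect to the input angle,
    so the result is independent of operating speed.
    """
    num = (xp**2 + yp**2) ** 1.5
    den = xp * ypp - yp * xpp
    rho = num / den
    # Offset from the point to its center of curvature (unit normal * rho).
    offset = rho * np.array([-yp, xp]) / np.hypot(xp, yp)
    return rho, offset

# Sanity check: a point tracing a circle of radius 2, x = 2cos(q), y = 2sin(q).
q = 0.3
xp, yp = -2 * np.sin(q), 2 * np.cos(q)      # first kinematic coefficients
xpp, ypp = -2 * np.cos(q), -2 * np.sin(q)   # second kinematic coefficients
rho, offset = path_curvature(xp, yp, xpp, ypp)
print(rho)  # ~2.0, as expected for a circle of radius 2
```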

Topics: Linkages
Commentary by Dr. Valentin Fuster
2003;():1221-1230. doi:10.1115/DETC2003/DAC-48834.

A modular robot system is a collection of actuators, links, and connections that can be arbitrarily assembled into a number of different robot configurations and sequences. High-performance modular robots require more than sophisticated controls. They also need top-quality mechanical components. Bearings in particular must operate well at low speed, have high rotational accuracy, be compact for low weight, and especially be stiff for high positional accuracy. To ensure the successful use of bearings in precision modular robots, the bearing properties and requirements are investigated. Background information on topics such as modular robots, precision modular actuators, and their error sources is given with respect to precision engineering. The extensive literature on thin-section bearings is reviewed to examine their use in precision robotic applications. Theoretical studies are performed to calculate bearing stiffness using a methodology based on Hertzian theory. This approach is applied to analyze two proposed designs of equivalent-sized crossed-roller and four-point bearings, the principal bearings transmitting all the payload and the mass of the robot structure. The maximum deflections and contact stresses for the proposed actuator assembly and loading conditions are estimated and compared, along with a range of general bearing properties such as friction, cost, and shock resistance.
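The Hertzian building block behind such bearing stiffness calculations can be sketched for the simplest case, a ball pressed against a flat raceway; the material values below are illustrative (steel on steel), and a real bearing analysis sums many such contacts over the rolling elements:

```python
def hertz_ball_on_flat(load, radius, E=210e9, nu=0.3):
    """Hertzian point contact of a ball on a flat: (deflection, stiffness).

    Classical Hertz theory gives delta = (9 W^2 / (16 R E*^2))^(1/3)
    for a sphere of radius R under load W against a flat of the same
    material; since W = K delta^(3/2), the stiffness dW/ddelta = 1.5 W/delta.
    """
    E_star = E / (2.0 * (1.0 - nu**2))   # combined contact modulus
    delta = (9.0 * load**2 / (16.0 * radius * E_star**2)) ** (1.0 / 3.0)
    stiffness = 1.5 * load / delta
    return delta, stiffness

# A 10 mm diameter steel ball under 100 N: sub-micron deflection.
d, k = hertz_ball_on_flat(load=100.0, radius=0.005)
print(d > 0 and k > 0)  # True
```

Note the hardening character of the contact: stiffness grows with load as W^(1/3), which is why preload matters so much for positional accuracy.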

Commentary by Dr. Valentin Fuster
2003;():1231-1237. doi:10.1115/DETC2003/DAC-48835.

This article explores the effect that end-effector velocities have on a nonredundant robotic manipulator’s ability to accelerate its end-effector and to apply forces/moments to the environment at the end-effector. The velocity effects considered here are the Coriolis and centrifugal forces and the reduction of actuator torque with rotor velocity, as described by the speed-torque curve. Analysis of these effects is accomplished using optimization techniques, where the problem formulation consists of a cost function and constraints that are all purely quadratic forms, yielding a nonconvex problem. An analytical solution, based on the dialytic elimination technique, is developed which guarantees that the globally optimal solution can be found. The PUMA 560 manipulator is used as an example to illustrate this methodology.

Commentary by Dr. Valentin Fuster
2003;():1239-1248. doi:10.1115/DETC2003/DAC-48836.

A new analytical method for determining, describing, and visualizing the solution space for the contact force distribution of multi-limbed robots with three feet in contact with the environment in three-dimensional space is presented. The foot contact forces are first resolved into strategically defined foot contact force components to decouple them for simplifying the solution process, and then the static equilibrium equations are applied to find certain contact force components and the relationship between the others. Using the friction cone equation at each foot contact point and the known contact force components, the problem is transformed into a geometrical one to find the ranges of contact forces and the relationship between them that satisfy the friction constraint. Using geometric properties of the friction cones and by simple manipulation of their conic sections, the whole solution space which satisfies the static equilibrium and friction constraints at each contact point can be found. Two representation schemes, the “force space graph” and the “solution volume representation,” are developed for describing and visualizing the solution space which gives an intuitive visual map of how well the solution space is formed for the given conditions of the system.
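The friction cone constraint at each foot contact point admits a very compact check, sketched below as a generic illustration (the paper's contribution is the geometric description of the whole solution space, not this pointwise test): a contact force avoids slip when its tangential component stays within the cone defined by the friction coefficient:

```python
import numpy as np

def in_friction_cone(force, normal, mu):
    """True if a contact force lies inside the Coulomb friction cone.

    Decomposes the force into components normal and tangential to the
    contact surface; slip is avoided when |f_t| <= mu * f_n with f_n > 0.
    """
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    f = np.asarray(force, float)
    f_n = f @ n                          # normal component (scalar)
    f_t = np.linalg.norm(f - f_n * n)    # tangential magnitude
    return f_n > 0 and f_t <= mu * f_n

# Flat ground, mu = 0.5: a mostly-vertical push holds, a 45-degree push slips.
print(in_friction_cone([0.1, 0.0, 1.0], [0, 0, 1], mu=0.5))  # True
print(in_friction_cone([1.0, 0.0, 1.0], [0, 0, 1], mu=0.5))  # False
```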

Commentary by Dr. Valentin Fuster
2003;():1249-1258. doi:10.1115/DETC2003/DAC-48837.

One of the inherent problems of multi-limbed mobile robotic systems is the problem of multi-contact force distribution; the contact forces and moments at the feet required to support the robot and those required by its tasks are indeterminate. A new strategy for choosing an optimal solution for the contact force distribution of multi-limbed robots with three feet in contact with the environment in three-dimensional space is presented. The optimal solution is found using a two-step approach: first finding the description of the entire solution space for the contact force distribution for a statically stable stance under friction constraints, and then choosing an optimal solution in this solution space which maximizes the objectives given by the chosen optimization criteria. An incremental strategy of opening up the friction cones is developed to produce the optimal solution, defined as the one whose foot contact force vector is closest to the surface normal vector for robustness against slipping. The procedure is aided by the “force space graph,” which indicates where this solution is positioned in the solution space to give insight into the quality of the chosen solution and to provide robustness against disturbances. The “margin against slip with contact point priority” approach is also presented, which finds an optimal solution with different priorities given to each foot contact point for the case when one foot is more critical than the others. Examples are presented to illustrate certain aspects of the method, and ideas for other optimization criteria are discussed.

Topics: Force , Mobile robots
Commentary by Dr. Valentin Fuster
2003;():1259-1270. doi:10.1115/DETC2003/DAC-48838.

In this paper, an efficient dynamic simulation algorithm is developed for an Unmanned Underwater Vehicle (UUV) with an N-degree-of-freedom manipulator. In addition to the effects of the mobile base, the various hydrodynamic forces exerted on these systems in an underwater environment are also incorporated into the simulation. The effects modeled in this work are added mass, viscous drag, fluid acceleration, and buoyancy forces. The dynamics of the thrusters are also developed, and a mapping matrix, dependent on the position and orientation of the thrusters on the vehicle, is used to calculate the resultant forces and moments of the thrusters about the vehicle’s center of gravity. Hull-propeller and propeller-propeller interactions are considered in the modeling as well. Finally, the results of the simulations are presented.
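The thruster mapping matrix described above can be sketched as follows. This is a minimal illustration under simple assumptions (fixed thrusters, thrust along a unit direction, moments taken about the center of gravity), not the paper's full thruster dynamics:

```python
import numpy as np

def thruster_mapping(positions, directions):
    """Build the 6xN matrix B mapping thrust magnitudes to a body wrench.

    Column i stacks the thrust direction d_i (force contribution) over
    r_i x d_i (moment about the vehicle's center of gravity), so that
    [F; M] = B @ u for thrust magnitudes u.
    """
    cols = []
    for r, d in zip(positions, directions):
        r, d = np.asarray(r, float), np.asarray(d, float)
        cols.append(np.concatenate([d, np.cross(r, d)]))
    return np.column_stack(cols)

# Two stern thrusters offset +/-1 m from the CG, both pushing along +x.
B = thruster_mapping([[0, 1, 0], [0, -1, 0]],
                     [[1, 0, 0], [1, 0, 0]])
wrench = B @ np.array([10.0, 10.0])
print(wrench[:3])  # net force: pure surge, [20, 0, 0]
print(wrench[3:])  # net moment: equal thrusts cancel the yaw, [0, 0, 0]
```

Differential thrust (e.g. `u = [10, 5]`) would leave the same mapping but produce a nonzero yaw moment, which is how such vehicles steer.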

Commentary by Dr. Valentin Fuster
2003;():1271-1278. doi:10.1115/DETC2003/DAC-48839.

In this research we investigate the design of nesting forces for exactly constrained, robust mechanical assemblies. Exactly constrained assemblies have a number of important advantages including the ability to assemble over a wide range of conditions. Such designs often require nesting forces to keep the design properly seated. To date, little theory has been developed for the design of nesting forces. We show how the effects of tolerances on nesting forces, a key issue, can be analyzed and apply the analysis to a simple design problem. For the example problem, good agreement is achieved with results from Monte Carlo simulation.

Topics: Force , Design
Commentary by Dr. Valentin Fuster
2003;():1279-1287. doi:10.1115/DETC2003/DAC-48840.

In recent years a number of practicing engineers have discussed the virtues of exactly constrained (EC) mechanical assemblies. While found by engineers in industry to have many benefits, EC designs remain somewhat unrecognized by academia. One reason for this minimal exposure may be the lack of a mathematical foundation for such designs. EC designs can be analyzed quite simply by understanding that they are statically determinate. This paper describes the history and current background for EC designs. It also begins to develop the mathematical foundation for EC design based on equations of equilibrium. Finally, it examines a Monte Carlo simulation of the effects of variation on EC assemblies vs. over-constrained assemblies. The EC design assembles 100% of the time, while the over-constrained design assembles only 50% of the time with greater error.
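The flavor of such a Monte Carlo tolerance study can be sketched with a deliberately simple stand-in — a single pin-in-hole fit under normally distributed dimensions. This is illustrative only (the paper compares full exactly constrained vs. over-constrained assemblies, not a single fit), but it shows how a nominal clearance turns a 50/50 fit into a near-certain one:

```python
import numpy as np

rng = np.random.default_rng(0)

def assembly_rate(pin_mean, hole_mean, sigma, n=100_000):
    """Monte Carlo estimate of the fraction of pin/hole pairs that assemble.

    Pin and hole diameters are drawn as independent normals; a pair
    assembles when the pin is smaller than the hole.
    """
    pins = rng.normal(pin_mean, sigma, n)
    holes = rng.normal(hole_mean, sigma, n)
    return np.mean(pins < holes)

# With no nominal clearance the fit succeeds ~50% of the time;
# a 0.05 clearance (several combined sigmas) makes it near 100%.
print(assembly_rate(10.0, 10.0, 0.01))   # ~0.5
print(assembly_rate(10.0, 10.05, 0.01))  # ~1.0
```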

Commentary by Dr. Valentin Fuster
2003;():1289-1297. doi:10.1115/DETC2003/DAC-48841.

This work is part of a larger research project to understand the challenges in creating MultiStable Equilibrium (MSE) devices. MSE devices are those that have more than one stable position or configuration that can be maintained with no power input. The study of potential-energy minima in magnetic systems can be used to create novel and efficient MSE mechanisms. This research focuses on using the magnetic energy in space as the main criterion for the analysis of MSE devices. Permanent magnets are used to create a 2D magnetostatic field. The magnetic energy density in air is then plotted along a path through the field. For a piece of iron following a specified path, the stable equilibrium positions correspond to the locations of maximum air energy density. Furthermore, the stiffness at each of the equilibrium positions can be correlated to a pseudo-stiffness of the energy density. This information can help one design systems for multiple stable positions with minimal computational time and effort, without simplifying the problem geometry.

Commentary by Dr. Valentin Fuster
2003;():1299-1303. doi:10.1115/DETC2003/DAC-48842.

This paper deals with the geometric issues that arise in designing a system for measuring the orientation of an object in three dimensional space using a new class of wireless angular position sensors. The wireless sensors are waveguides that receive and record the electromagnetic energy emitted by a polarized RF source. The angular position of the waveguide relative to the source is indicated by the energy level. A system equipped with multiple waveguides is used as a 3D orientation sensor. This paper explores the geometry for orientation measurement using the system and provides the guidelines for sensor design.

Topics: Sensors , Geometry
Commentary by Dr. Valentin Fuster
2003;():1305-1313. doi:10.1115/DETC2003/DAC-48843.

This paper addresses similarities between various nutating or wobbling mechanisms, especially kinematic similarities. A case is made for the generalization of several mechanisms into a mechanism “class” having common kinematic characteristics. This mechanism class is shown to be typified by bevel epicyclic gear trains. It is proposed that not only kinematic analysis, but static-force, power-flow, and efficiency analyses of mechanisms belonging to this “class” can be simplified by modeling them as bevel-gear trains. Simplified kinematic, force, and efficiency analyses are demonstrated for a novel wobbling speed reducer using this concept of “equivalent” geared mechanisms. The reduction in complexity of these analyses is the main motivation for this work.

Commentary by Dr. Valentin Fuster
2003;():1315-1321. doi:10.1115/DETC2003/DAC-48844.

In this work, we investigate the geometry and position kinematics of planar parallel manipulators composed of three GPR serial sub-chains, where G denotes a rolling contact, or geared joint, P denotes a prismatic joint, and R denotes a revolute joint. The rolling contact joints provide a passive one degree-of-freedom relative motion between the base and the prismatic links. It is shown, both theoretically and numerically, that when all the G-joints have equal circular contact profiles, there are at most 48 real forward kinematic solutions when the P joints are actuated. The solution procedure is general and can be used to predict and solve for the kinematics solutions of 3-GPR manipulators with any combination of rational contact ratios.

Commentary by Dr. Valentin Fuster
2003;():1323-1329. doi:10.1115/DETC2003/DAC-48845.

The robust design of a novel mobile robot, comprising two driving wheels and an intermediate body carrying the payload, is the subject of this paper. We prove that, by virtue of the robot architecture, the kinetostatic model of the system is isotropic. Moreover, regarding the robot’s dynamic response, a robust design problem is formulated by minimizing the design bandwidth of the generalized inertia matrix of the robot over its architecture parameters. Furthermore, design conditions are given for the robot’s trajectory-tracking performance to be feasible. Finally, a numerical comparison of two design solutions, one feasible and one robust, is provided by means of simulation runs. We demonstrate that the robust design solution doubles the robot’s trajectory-tracking performance while reducing the oscillations of the intermediate body by 40% compared with the feasible solution.

Topics: Design , Mobile robots
Commentary by Dr. Valentin Fuster
2003;():1331-1341. doi:10.1115/DETC2003/DAC-48846.

This paper deals with the kinematic analysis of a wheeled mobile robot (WMR) moving on uneven terrain. It is known in the literature that a wheeled mobile robot with a fixed-length axle and wheels modeled as thin disks will undergo slip when it negotiates uneven terrain. To overcome slip, a variable-length axle (VLA) has been proposed in the literature. In this paper, we model the wheels as tori and propose the use of a passive joint allowing a lateral degree of freedom. Furthermore, we model the mobile robot, instantaneously, as a hybrid parallel mechanism, with the wheel-ground contact described by differential equations that take into account the geometry of the wheel, the ground, and the non-holonomic constraint of no slip. Simulation results show that a three-wheeled WMR can negotiate uneven terrain without slipping. Our proposed approach presents an alternative to the variable-length axle approach.

Commentary by Dr. Valentin Fuster
2003;():1343-1350. doi:10.1115/DETC2003/DAC-48847.

Traction drive systems offer unique advantages over geared systems. They will typically run quieter and they can be designed to eliminate all backlash. Furthermore, rolling elements are easy to manufacture and the rolling motion will produce very efficient power transmission. In this paper the authors describe a two-stage, self-actuating, traction drive system that has been fabricated to produce a speed ratio of 50:1. Given specific values for coefficients of friction, the geometry of each stage of the device must be designed to ensure self-actuation. In addition, dimensions of the drive rollers and output rings for the two stages must be selected to ensure that one stage does not cause the other stage to overrun.

Topics: Design , Traction
Commentary by Dr. Valentin Fuster
2003;():1351-1358. doi:10.1115/DETC2003/DAC-48851.

The goal of this research is to obtain the optimum design of a new interbody fusion implant for use in lumbar spine fixation. A new minimally invasive surgical technique for interbody fusion is currently in development. The procedure makes use of an interbody implant that is inserted between two vertebral bodies. The implant is packed with bone graft material that fuses the motion segment. The implant must be capable of retaining bone graft and supporting spinal loads while fusion occurs. Finite element-based optimization techniques are used to drive the design. The optimization process is performed in two stages: topology optimization and then shape optimization. The different load conditions analyzed include: flexion, extension, and lateral bending.

Commentary by Dr. Valentin Fuster
2003;():1359-1368. doi:10.1115/DETC2003/DAC-48852.

Most algorithmic engineering design optimisation approaches reported in the literature aim to find the best set of solutions within the quantitative (QT) search space of the given problem while ignoring related qualitative (QL) issues. These QL issues can be very important, and ignoring them in the optimisation search can have expensive consequences, especially for real-world problems. This paper presents a new integrated design optimisation approach for the QT and QL search spaces. The proposed solution approach is based on design-of-experiment methods and fuzzy logic principles for building the required QL models, and an evolutionary multi-objective optimisation technique for solving the design problem. The proposed technique was applied to a two-objective rod-rolling problem. The results obtained demonstrate that the proposed solution approach can be used to solve real-world problems while taking into account the related QL evaluation of the design problem.

Topics: Design , Optimization
Commentary by Dr. Valentin Fuster
