ASME Conference Presenter Attendance Policy and Archival Proceedings

2014;():V02AT00A001. doi:10.1115/DETC2014-NS2A.

This online compilation of papers from the ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2014) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.

40th Design Automation Conference: Application-Tailored Optimization Methods

2014;():V02AT03A001. doi:10.1115/DETC2014-34345.

In this paper, we propose a level-set-based topology optimization method for designing a reactor, which is used as part of the DC-DC converter in electric and hybrid vehicles. Since the reactor enables a high-power drive motor and its performance relies on its core component, it is valuable to establish a sound design method for the reactor core. Boundary-tracking level-set topology optimization is suitable for this purpose, because the shape and topology of the target structure are clearly represented by the zero boundary of the level-set function, and the state variables are accurately computed using the zero-boundary-tracking mesh. We formulate the design problem on the basis of electromagnetics and derive the design sensitivities. The derived sensitivities are linked with boundary-tracking level-set topology optimization, and as a result, a useful structural optimization method for the reactor core design problem is developed.

2014;():V02AT03A002. doi:10.1115/DETC2014-34622.

Level-set methods are domain classification techniques that have gained popularity in recent years for structural topology optimization. Level sets classify a domain into two or more categories (such as material and void) by examining the value of a scalar level-set function (LSF) defined over the entire design domain. In most level-set formulations, a large number of design variables, or degrees of freedom, is used to define the LSF, which implicitly defines the structure. The large number of design variables makes non-gradient optimization techniques all but ineffective. Kriging-interpolated level sets (KLS), on the other hand, are formulated with the objective of enabling non-gradient optimization by defining the design variables as the LSF values at a few select locations (knot points) and using a Kriging model to interpolate the LSF over the rest of the design domain. A concern when adopting KLS is that using too few knot points may limit the capability to represent complex shapes, while using too many knot points may cause difficulty for non-gradient optimization. This paper presents a study of the effect of the number and layout of knot points in KLS on the capability to represent complex topologies in single- and multi-component structures. Image-matching error metrics are employed to assess the degree of mismatch between target topologies and those best attainable via KLS. Results are presented in catalogue style in order to facilitate appropriate selection of knot points by designers wishing to apply KLS to topology optimization.
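
To make the KLS construction concrete, the following minimal sketch (assuming a Gaussian correlation model with zero-mean simple Kriging; the 4x4 knot grid, correlation length, and random knot values are illustrative, not the paper's settings) interpolates an LSF from knot values and classifies a grid into material and void:

```python
import numpy as np

def kls_classify(knots, lsf_at_knots, grid, corr_len=0.2):
    """Interpolate the LSF from knot values, then classify grid points."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / corr_len ** 2)
    R = corr(knots, knots) + 1e-10 * np.eye(len(knots))  # regularized
    weights = np.linalg.solve(R, lsf_at_knots)
    lsf = corr(grid, knots) @ weights                    # interpolated LSF
    return np.where(lsf >= 0.0, 1, 0)                    # 1 = material, 0 = void

# design variables: LSF values at a 4x4 knot grid over the unit square
knots = np.array([[x, y] for x in np.linspace(0, 1, 4)
                         for y in np.linspace(0, 1, 4)])
rng = np.random.default_rng(0)
design_vars = rng.uniform(-1.0, 1.0, len(knots))
grid = np.array([[x, y] for x in np.linspace(0, 1, 50)
                        for y in np.linspace(0, 1, 50)])
print("material fraction:", kls_classify(knots, design_vars, grid).mean())
```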

Topics: Topology
2014;():V02AT03A003. doi:10.1115/DETC2014-34624.

Optimum selection of cutting conditions in high-speed and ultra-precision machining processes often poses a challenging task for several reasons, such as the need for a costly experimental setup and the limit on the number of experiments that can be performed before tool degradation starts becoming a source of noise in the readings. Moreover, there are often several objectives to consider, some of which may be conflicting, while others may be somewhat correlated. Pareto-optimality analysis is needed for conflicting objectives; however, the existence of several objectives (a high-dimensional Pareto space) makes the generation and interpretation of Pareto solutions difficult. The approach adopted in this paper is a modified multi-objective efficient global optimization (m-EGO). In m-EGO, sample data points from experiments are used to construct Kriging meta-models, which act as predictors for the performance objectives. Evolutionary multi-objective optimization is then conducted to spread a population of new candidate experiments towards the zones of the search space that are predicted by the Kriging models to have favorable performance, as well as zones that are under-explored. The new experiments are then used to update the Kriging models, and the process is repeated until termination criteria are met. Handling a large number of objectives is improved via a special selection operator based on principal component analysis (PCA) within the evolutionary optimization. PCA is used to automatically detect correlations among objectives and perform the selection within a reduced space in order to achieve a better distribution of experimental sample points on the Pareto frontier. Case studies show favorable results in ultra-precision diamond turning of an aluminum alloy as well as high-speed drilling of woven composites.
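
The PCA-based objective reduction at the heart of the selection operator can be sketched roughly as follows (the data, standardization, and 95% variance threshold are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def pca_reduce_objectives(F, var_keep=0.95):
    """F: (n_points, n_objectives) objective values; returns reduced coords."""
    Z = (F - F.mean(0)) / (F.std(0) + 1e-12)   # standardize each objective
    C = np.cov(Z, rowvar=False)                # objective correlation structure
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    k = np.searchsorted(np.cumsum(eigval) / eigval.sum(), var_keep) + 1
    return Z @ eigvec[:, :k]                   # points in the reduced space

rng = np.random.default_rng(1)
f1 = rng.random(200)
f2 = 2.0 * f1 + 0.01 * rng.random(200)         # strongly correlated with f1
f3 = rng.random(200)
reduced = pca_reduce_objectives(np.column_stack([f1, f2, f3]))
print("objectives kept after PCA:", reduced.shape[1])   # 2, not 3
```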

2014;():V02AT03A004. doi:10.1115/DETC2014-35320.

When developing a first-generation product, an iterative approach often yields the shortest time-to-market. In order to optimize its performance, however, a fundamental understanding of the theory governing its operation becomes necessary. This paper details the optimization of the Tata Swach, a consumer water purifier produced for India. The primary objective of the work was to increase flow rate while considering other factors such as cost, manufacturability, and efficacy. A mathematical model of the flow characteristics through the filter was developed. Based on this model, a design tool was created to allow designers to predict flow behavior without prototyping, significantly reducing the necessity of iteration. Sensitivity analysis was used to identify simple ways to increase flow rate as well as potential weak points in the design. Finally, it was demonstrated that maximum flow rate can be increased by 50% by increasing the diameter of a flow-restricting feature while simultaneously increasing the length of the active purification zone. This can be accomplished without significantly affecting cost, manufacturability, and efficacy.

40th Design Automation Conference: Artificial Intelligence and Computational Synthesis

2014;():V02AT03A005. doi:10.1115/DETC2014-34065.

We describe a trainable classification system for hand-drawn, single-stroke 3D sketches, using a motion-detecting depth-sensing camera. Our system captures data from a user, who is free to sketch any desired shape in a 3D environment. The overall system is based on a set of previously defined and well-developed classifiers: the Rubine classifier, the $1 recognizer, and the image-based classifier. The novelty of this paper comes from 1) the classification of sketches drawn in a 3D environment; 2) extending the pixel-based image representation to a voxel-based scheme; and 3) combining the results from individual classifiers using a sensitivity matrix. To evaluate the performance of the system, user studies were performed. To validate the significance of the results obtained from the user studies, we performed a t-test. Our system outperforms the individual classifiers and achieves an average overall accuracy above 93%.

Topics: Shapes
2014;():V02AT03A006. doi:10.1115/DETC2014-34256.

A novel parameterization concept for structural truss topology optimization is presented in this article that enables the use of evolutionary algorithms in the design of large-scale structures. The representational power of Boolean networks is used here to parameterize truss topology. A genetic algorithm then operates on parameters that govern the generation of truss topologies through this random network, instead of operating directly on the design variables. A genetic algorithm implementation is also presented that is congruent with the local rule application of the random network. The primary advantage of using a Boolean random network representation is that a relatively large number of ground structure nodes can be used, enabling successful exploration of a large-scale design space. In the classical binary representation of ground structures, the number of optimization variables increases quadratically with the number of nodes, restricting the maximum number of nodes that can be considered using a ground structure approach. The Boolean random network representation proposed here allows for the exploration of the entire topology space in a systematic way using only a linear number of variables; the number of nodes in the design domain can therefore be increased significantly. Truss member geometry and size optimization is performed here in a nested manner, where an inner-loop size optimization problem is solved for every candidate topology using sequential linear programming with move limits. The Boolean random network and nested inner-loop optimization allow for the concurrent optimization of truss topology, geometry, and size. The effectiveness of this method is demonstrated using a planar truss design optimization benchmark problem.

2014;():V02AT03A007. doi:10.1115/DETC2014-34691.

Computational Design Synthesis (CDS) is used to enable the computer to generate valid and even creative solutions for an engineering task. Graph grammars are a CDS approach in which engineering knowledge is formalized using graphs to represent designs and rules that describe possible graph transformations, i.e., changes to designs. For most engineering tasks, two different kinds of rules are required: rules that change the topology and rules that change the parameters of a design. One of the main challenges in CDS using both topologic and parametric rules is deciding a priori which type of rule to apply at which stage of the synthesis process. The research presented in this paper describes different strategies for combining topologic and parametric rules during automated design synthesis. A graph grammar for the design of gearboxes is investigated, in which topologic rules change the structure, i.e., the number and connections of gears and shafts, whereas parametric rules change the layout and sizing, i.e., the dimensions and positions of gears and shafts, in the gearbox. For the generation of new designs, two simple multi-objective stochastic search algorithms are used and compared. Four different strategies are presented that determine in different ways which type of rule (topologic or parametric) to apply at which stage of the synthesis process. The presented strategies are compared considering the quantity of the generated designs, i.e., the number of topologically different designs, and their quality, i.e., their objective function values. Results show a significant influence of the chosen strategy only in the early stage of the synthesis process. The discussion examines the adaptability of the proposed strategies to other engineering tasks.

Topics: Design
2014;():V02AT03A008. doi:10.1115/DETC2014-34858.

Product alternatives suggested by a generative design system often need to be evaluated on qualitative criteria. This evaluation requires that several feasible solutions, all of which fulfill the technical constraints, can be proposed to the user of the system. Also, as concept development is an iterative process, it is important that these solutions are generated quickly; i.e., the system must have a low convergence time. A problem, however, is that stochastic constraint-handling techniques can have highly unpredictable convergence times, spanning several orders of magnitude, and might sometimes not converge at all. A possible way to avoid lengthy runs is to restart the search after a certain time, in the hope that a new starting point will lead to a lower overall convergence time, but selecting an optimal restart time is not trivial. In this paper, two strategies for such selection are investigated, and their performance is evaluated on two constraint-handling techniques for a product design problem. The results show that both restart strategies can greatly reduce the overall convergence time. Moreover, it is shown that one of the restart strategies can be applied to a wide range of constraint-handling techniques and problems without requiring any fine-tuning of problem-specific parameters.
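
A fixed-cutoff restart wrapper of the kind studied here can be sketched as follows (the toy random_search solver and the cutoff value are invented stand-ins for a real stochastic constraint-handling technique):

```python
import random

def random_search(feasible, budget, rng):
    """Toy stochastic search: sample until a feasible design is found."""
    for i in range(budget):
        if feasible(rng.random()):
            return i + 1          # evaluations used on success
    return None                   # did not converge within the budget

def search_with_restarts(feasible, cutoff, max_total, seed=0):
    """Restart the search whenever it fails to converge within `cutoff`."""
    rng, total = random.Random(seed), 0
    while total < max_total:
        budget = min(cutoff, max_total - total)
        used = random_search(feasible, budget, rng)
        total += used if used is not None else budget
        if used is not None:
            return total          # overall convergence time
    return None

# feasible region occupies 0.1% of the space, so long runs are common
print(search_with_restarts(lambda x: x < 0.001, cutoff=500, max_total=10**6))
```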

Topics: Design

40th Design Automation Conference: Data-Driven Design

2014;():V02AT03A009. doi:10.1115/DETC2014-34424.

The amount of user-generated content related to consumer products continues to grow as users increasingly take advantage of forums, product review sites, and social media platforms. This content is a promising source of insight into users’ needs and experiences. However, the challenge remains of extracting concise and useful insights from large quantities of unstructured data. We propose a visualization tool that allows designers to quickly and intuitively sift through large amounts of user-generated content and derive useful insights regarding users’ perceptions of product features. The tool leverages machine learning algorithms to automate labor-intensive portions of the process, and no manual labeling is required of the designer. Language processing techniques are arranged in a novel way to guide the designer in selecting the appropriate inputs, and multidimensional scaling enables presentation of the results in concise 2D plots. To demonstrate the efficacy of the tool, a case study is performed on action cameras, with product reviews from Amazon.com analyzed as the user-generated content. Results from the case study show that the tool is helpful in condensing large amounts of user-generated content into useful insights, such as the key differentiations that users perceive among similar products.
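
The final embedding step can be illustrated with classical multidimensional scaling over bag-of-words distances (the three toy reviews and the distance measure are placeholders for the tool's machine-learned inputs):

```python
import numpy as np

reviews = ["battery life is short", "great battery but heavy",
           "video quality is great"]
vocab = sorted({w for r in reviews for w in r.split()})
X = np.array([[r.split().count(w) for w in vocab] for r in reviews], float)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances

def classical_mds(D, dim=2):
    """Embed a distance matrix into `dim` coordinates for plotting."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    idx = np.argsort(eigval)[::-1][:dim]
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

print(classical_mds(D))   # 2D coordinates, one row per review
```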

2014;():V02AT03A010. doi:10.1115/DETC2014-34491.

Design-by-analogy is an effective approach to innovative concept generation but can be elusive, because few methods and tools exist to assist designers in systematically seeking and identifying analogies from general data sources, databases, or repositories, such as patent databases. A new method for extracting analogies from data sources has been developed to provide this capability. Building on past research, we utilize a functional vector space model to quantify analogous similarity between a design problem and the data source of potential analogies. We quantitatively evaluate the functional similarity between represented design problems and, in this case, patent descriptions of products. We develop a complete functional vocabulary to map the patent database to applicable functionally critical terms, using document parsing algorithms to reduce text descriptions of the data sources down to the key functions and applying Zipf’s law to the rank-ordered word counts to reduce the number of words within the documents. The reduction of a document (in this case a patent) to functionally analogous words enables matching to novel ideas that are functionally similar, which can be customized in various ways, and thereby provides relevant sources of design-by-analogy inspiration. Although our implementation of the technique focuses on functional descriptions of patents and the mapping of these functions to those of the design problem, resulting in a set of analogies, we believe that this technique is applicable to other analogy data sources as well. As a verification of the approach, an original design problem for an automated window washer illustrates the range of analogical solutions that can be extracted, extending from very near-field, literal solutions to far-field cross-domain analogies. Finally, a comparison with a current patent search tool is performed to draw a contrast with the status quo and evaluate the effectiveness of this work.
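
A minimal sketch of the functional vector space idea follows; the five-term function vocabulary and the texts are invented placeholders, not the paper's functional basis:

```python
import numpy as np

FUNCTION_TERMS = ["clean", "move", "spray", "rotate", "attach"]

def function_vector(text):
    """Count occurrences of the controlled function vocabulary."""
    words = text.lower().split()
    return np.array([words.count(t) for t in FUNCTION_TERMS], float)

def cosine(a, b):
    n = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / n) if n else 0.0

problem = "move along window and spray fluid to clean glass"
patents = {"robotic cleaner": "rotate brush to clean surface and move on rails",
           "paint gun": "spray coating and attach nozzle"}
for name, text in patents.items():
    sim = cosine(function_vector(problem), function_vector(text))
    print(name, round(sim, 3))   # rank patents by functional similarity
```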

Topics: Design
2014;():V02AT03A011. doi:10.1115/DETC2014-34753.

Predictive design analytics is a new paradigm intended to enable design engineers to extract knowledge from large-scale, multi-dimensional, unstructured, volatile data and to transform that knowledge and its trends into design decision making. Predictive, data-driven family design (PDFD) is proposed as one such predictive design analytics method to tackle several issues in family design. First, the number and specifications of product architectures are determined by the data (not by pre-defined market segments) in order to maximize expected profit. A trade-off between price and cost in terms of the quantity and specifications of architectures helps to set the target at the enterprise level; k-means clustering is used to find architectures that minimize the within-architecture sum of squared errors. Second, a price prediction method as a function of product performance and deviations between performance and customer requirements is suggested, with exponential smoothing based on innovations state space models. Regression coefficients are treated as customer preferences over product performance and analyzed as a time series, and prediction intervals are proposed to show market uncertainties. Third, multiple values for common parameters in family design can be identified using expectation-maximization clustering so that multiple-platform design can be explored. Last, large-scale data can be handled by the PDFD algorithm: a data set containing a total of 14 million instances is used in the case study. The design of a family of universal electric motors demonstrates the proposed approach and highlights its benefits and limitations.
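
The architecture-finding step can be sketched with a plain k-means implementation whose centroids are read as candidate architecture specifications (the two-dimensional synthetic requirement data and k = 2 are illustrative):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm; returns centroids, labels, and within-cluster SSE."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    sse = ((X - centers[labels]) ** 2).sum()   # within-architecture SSE
    return centers, labels, sse

rng = np.random.default_rng(2)
# two synthetic requirement clusters, e.g. (torque demand, efficiency target)
X = np.vstack([rng.normal([1.0, 0.8], 0.05, (100, 2)),
               rng.normal([2.5, 0.9], 0.05, (100, 2))])
centers, _, sse = kmeans(X, k=2)
print("architecture specs:\n", centers.round(2), "\nSSE:", round(float(sse), 3))
```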

Topics: Design
2014;():V02AT03A012. doi:10.1115/DETC2014-35440.

Motivated by continued interest within the design community to model design preferences, this paper investigates the question of predicting preferences with particular application to consumer purchase behavior: How can we obtain high prediction accuracy in a consumer preference model using market purchase data? To this end, we employ sparse coding and sparse restricted Boltzmann machines, recent methods from machine learning, to transform the original market data into a sparse and high-dimensional representation. We show that these ‘feature learning’ techniques, which are independent from the preference model itself (e.g., logit model), can complement existing efforts towards high-accuracy preference prediction. Using actual passenger car market data, we achieve significant improvement in prediction accuracy on a binary preference task by properly transforming the original consumer variables and passenger car variables to a sparse and high-dimensional representation.

Topics: Preferences

40th Design Automation Conference: Decision Making in Engineering Design

2014;():V02AT03A013. doi:10.1115/DETC2014-34432.

The Change Prediction Method is an approach that has been proposed in the literature as a way to assess the risk of change propagation. This approach requires experts to define the elements of the design structure matrix and provide both impact and likelihood values for each subsystem interaction. The combined risk values produced by the Change Prediction Method indicate where high probabilities of propagation may exist, but the results rely heavily on the supplied expert data. This study explores how potential variability in expert data impacts the rank order of returned risk values from the Change Prediction Method. Results are presented that indicate significant changes in rank order, highlighting both the importance of expert data accuracy and the insights that can be gained from the Change Prediction Method as a design tool.
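
A toy version of the combined-risk computation and the rank-order sensitivity question studied here might look as follows (direct risk only, ignoring the indirect propagation paths the full method accounts for; the 3x3 expert matrices and the +/-10% perturbation are invented):

```python
import numpy as np

likelihood = np.array([[0.0, 0.4, 0.1],
                       [0.2, 0.0, 0.5],
                       [0.3, 0.6, 0.0]])   # expert-supplied likelihoods
impact     = np.array([[0.0, 0.5, 0.2],
                       [0.3, 0.0, 0.4],
                       [0.6, 0.2, 0.0]])   # expert-supplied impacts

def risk_ranking(L, I):
    risk = L * I                               # element-wise combined risk
    order = np.argsort(risk, axis=None)[::-1]  # rank order of interactions
    return risk, order

_, base_order = risk_ranking(likelihood, impact)
rng = np.random.default_rng(3)
perturbed = likelihood * rng.uniform(0.9, 1.1, likelihood.shape)
_, new_order = risk_ranking(perturbed, impact)
print("rank order unchanged:", np.array_equal(base_order, new_order))
```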

Topics: Design, Probability, Risk
2014;():V02AT03A014. doi:10.1115/DETC2014-34443.

When most designers set out to develop a new product, they solicit feedback from potential consumers. These data are incorporated into the design process in an effort to more effectively meet customer requirements. Often these data are used to construct a model of consumer preference capable of evaluating candidate designs. Although the mechanics of these models have been extensively studied, there are still some open questions, particularly with respect to models of aesthetic preference. When constructing preference models, simplistic product representations are often favored over high-fidelity product models in order to save time and expense. This work investigates how the choice of product representation can affect model performance in visual conjoint analysis. Preference models for a single product, a table knife, are derived using three different representation schemes: simple sketches, solid models, and 3D-printed models. Each of these representations is used in a separate conjoint analysis survey. The results from this study show that consumer responses were inconsistent and potentially contradictory between different representations. Consequently, when using conjoint analysis for product innovation, obtaining a true understanding of consumer preference requires selecting representations based on how accurately they convey the product details in question.

2014;():V02AT03A015. doi:10.1115/DETC2014-34586.

When discussing Arrow’s Impossibility Theorem (AIT) in engineering design, we find that one condition, Independence of Irrelevant Alternatives (IIA), has been widely misunderstood. In this paper, two types of IIA are distinguished. One, due to Kenneth Arrow (IIA-A), concerns a rationality condition on a collective choice rule (CCR). The other, due to Amartya Sen (IIA-S), is a condition on a choice function (CF). Through the analysis of IIA-A, this paper revisits three decision methods (the Pugh matrix, the Borda count, and Quality Function Deployment) that have been criticized for their failures in some situations. It is argued that the violation of IIA-A does not immediately imply irrationality in engineering design, and more detailed analysis should be applied to examine the meaning of “irrelevant information”. IIA-S, by contrast, is concerned with the transitivity of the CF and is associated with contraction consistency (Property α) and expansion consistency (Property β). It is shown that IIA-A and IIA-S are technically distinct and should not be confused in rationality arguments. Other versions of IIA-A are also introduced to emphasize the significance of mathematical clarity in the discussion of AIT-related issues.
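
A small worked example of the IIA-A issue for the Borda count: deleting a losing ("irrelevant") alternative changes the winner. The preference profiles are invented for illustration:

```python
def borda_winner(profiles, alternatives):
    """Borda count restricted to `alternatives`; last place scores 0 points."""
    scores = {a: 0 for a in alternatives}
    for ranking in profiles:
        kept = [a for a in ranking if a in alternatives]
        for pts, alt in enumerate(reversed(kept)):
            scores[alt] += pts
    return max(scores, key=scores.get), scores

profiles = [["A", "B", "C"], ["A", "B", "C"], ["B", "C", "A"],
            ["B", "C", "A"], ["C", "A", "B"]]
print(borda_winner(profiles, {"A", "B", "C"}))  # B wins (B=6, A=5, C=4)
print(borda_winner(profiles, {"A", "B"}))       # dropping loser C: A wins
```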

2014;():V02AT03A016. doi:10.1115/DETC2014-34871.

This article discusses a design methodology for a Decision Support System (DSS) in the area of Data-Driven Management (DDM). We partition the DSS into an offline and an online system. Through rigorous testing, the offline system finds the best combination of Data Mining (DM) and Artificial Intelligence (AI) algorithms. Only the best algorithms are used in the online system to extract information from data and to make sense of this information by providing an objective second opinion on a decision result. To support the proposed design methodology, we construct a DSS that uses DM methods for market segmentation and AI methods for product positioning. As part of the offline system construction, we evaluate four intrinsic-dimension-estimation, three dimension-reduction, and four clustering algorithms. Performance is evaluated with statistical methods, the silhouette mean, and 10-fold stratified cross-validated classification accuracy. We find that every DSS problem requires a search for a suitable algorithm structure, because different algorithms for the same task have different merits and shortcomings, and it is impossible to know a priori which combination of algorithms gives the best results. Selecting the best algorithms is therefore an empirical exercise in which the possible combinations are tested. With this study, we deliver a blueprint for constructing a DSS for product positioning. The proposed design methodology can easily be adapted to serve a wide range of DDM problems.
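
The offline-system idea of scoring competing algorithms can be sketched with a hand-rolled silhouette mean (the tiny dataset and the two candidate labelings stand in for the DM algorithms under test):

```python
import numpy as np

def silhouette_mean(X, labels):
    """Mean silhouette score over all points; higher means better clusters."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        same[i] = False                       # exclude the point itself
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == lj].mean()
                for lj in np.unique(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(1, 0.1, (20, 2))])
good = np.array([0] * 20 + [1] * 20)          # matches the true structure
bad = np.tile([0, 1], 20)                     # ignores the structure
print("good:", round(silhouette_mean(X, good), 2),
      "bad:", round(silhouette_mean(X, bad), 2))
```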

2014;():V02AT03A017. doi:10.1115/DETC2014-35572.

Complex system design problems tend to be high-dimensional and nonlinear, and they often involve multiple objectives and mixed-integer variables. Heuristic optimization algorithms have the potential to address many (if not most) of the characteristics of such complex problems. Among them, the Particle Swarm Optimization (PSO) algorithm has gained significant popularity due to its maturity and fast convergence abilities. This paper seeks to translate the unique benefits of PSO from solving typical continuous single-objective optimization problems to solving multi-objective mixed-discrete problems, which is relatively new ground for PSO application. The previously developed Mixed-Discrete Particle Swarm Optimization (MDPSO) algorithm, which includes an exclusive diversity preservation technique to prevent premature particle clustering, has been shown to be a powerful single-objective solver for highly constrained MINLP problems. In this paper, we make fundamental advancements to the MDPSO algorithm, enabling it to solve challenging multi-objective problems with mixed-discrete design variables. In the velocity update equation, the explorative term is modified to point towards the non-dominated solution that is closest to the corresponding particle (at any iteration). The fractional domain in the diversity preservation technique, which was previously defined in terms of a single global leader, is now applied to multiple global leaders in the intermediate Pareto front. The multi-objective MDPSO (MO-MDPSO) algorithm is tested using a suite of diverse benchmark problems and a disc-brake design problem. To illustrate the advantages of the new MO-MDPSO algorithm, the results are compared with those given by the popular Elitist Non-dominated Sorting Genetic Algorithm-II (NSGA-II).
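
A rough sketch of the modified velocity update (assuming "closest" is measured in the design space, which the abstract does not specify; inertia and acceleration constants are textbook values, not the paper's):

```python
import numpy as np

def velocity_update(x, v, pbest, archive, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Social term pulls toward the archive member nearest to the particle."""
    rng = rng if rng is not None else np.random.default_rng()
    nearest = min(archive, key=lambda a: float(np.linalg.norm(a - x)))
    return (w * v + c1 * rng.random(x.size) * (pbest - x)
                  + c2 * rng.random(x.size) * (nearest - x))

x, v = np.array([0.5, 0.5]), np.zeros(2)
pbest = np.array([0.4, 0.6])
archive = [np.array([0.1, 0.9]), np.array([0.45, 0.55])]  # current front
print(velocity_update(x, v, pbest, archive, rng=np.random.default_rng(5)))
```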

40th Design Automation Conference: Design and Optimization of Sustainable Energy Systems

2014;():V02AT03A018. doi:10.1115/DETC2014-34227.

Computer models are very important to the planning, operation, and control of power systems. Although elements such as generators and transmission lines are relatively well understood, developing a comprehensive power system model is a daunting task because of the challenges associated with load modeling (loads change all the time, and utilities have very little control over them). Unfortunately, inaccurate load models have serious implications, such as unsafe operating conditions, power outages, under-utilization of system capacity, or inappropriate capital investment. This paper presents the use of a state-of-the-art Bayesian calibration framework for simultaneous load model selection and calibration. The approach aims at identifying the configuration and reducing the parameter uncertainty of the Western Electricity Coordinating Council’s (WECC) composite load model in the presence of measured field data. The success of the approach is illustrated with synthetic field data and a simplified model.

2014;():V02AT03A019. doi:10.1115/DETC2014-34471.

The global quest for energy sustainability has motivated the development of technology for efficiently transforming various natural resources into energy. Combining these alternative energy sources with existing power systems requires systematic assessment and planning. The present study investigates the conversion of an existing power system into one with a wind-integrated microgrid. The standard approach applies wind resource assessment to determine suitable wind farm locations with high potential energy and then develops specific dispatch strategies to meet the power demand of the wind-integrated system with low cost, high reliability, and low impact on the environment. However, uncertainty in the wind resource results in fluctuating power generation. The installation of additional energy storage devices is thus needed in the dispatch strategy to ensure a stable power supply. The present work proposes a design procedure for obtaining the optimal sizing of wind turbines and storage devices, considering wind resource assessment and dispatch strategy under uncertainty. Two wind models are developed from real-world wind data and applied in the proposed optimization framework. Based on comparisons of system reliability between the optimal results and real operating states, an appropriate wind model can be chosen to represent the wind characteristics of a particular region. Results show that a trend model of the wind data is insufficient for wind-integrated microgrid planning because it does not consider the large variation in the wind data. The wind model should include the uncertainties of the wind resource in the design of a wind-integrated microgrid system to ensure high reliability of the optimal results.

2014;():V02AT03A020. doi:10.1115/DETC2014-34598.

Modeling and unit-cost optimization of a water-heated humidification-dehumidification (HDH) desalination system were presented in previous work by the authors. That system controlled the saline water flow rate to prevent salts from precipitating at higher water temperatures. It was then realized that this scheme had a negative impact on condensation performance when the controlled flow rate was not sufficiently high. This work builds on the previous system by disconnecting the condenser from the saline water cycle and by introducing a solar air heater to further augment the humidification performance. In addition, improved models for the condenser and the humidifier were used to obtain more accurate productivity estimates. The Heuristic Gradient Projection (HGP) optimization procedure was also refactored to reach the global optimum with a reduced number of function evaluations compared to genetic algorithms (GAs). A case study assuming a desalination plant on the Red Sea near the city of Hurghada is presented. The unit cost of produced fresh water for the new optimum system is $0.5/m3, compared to $5.9/m3 for the HDH system from previous work, and is less than the reported minimum cost of reverse osmosis systems.

Topics: Solar energy
2014;():V02AT03A021. doi:10.1115/DETC2014-34731.

Reverse osmosis (RO) is one of the main commercial technologies for desalinating water whose salinity is too high for human consumption in order to produce fresh water. RO may hold promise for remote areas with scarce fresh water resources; however, its energy requirements come in the form of electric power, for which such areas have few options. Fortunately, scarce rainfall is often associated with abundant sunshine, which makes solar photovoltaic (PV) power an attractive option. Equipping a photovoltaic-powered reverse osmosis (PV-RO) desalination plant with battery storage has the advantage of steadier and longer hours of operation, thereby making better use of the investments in RO system components, but the additional cost of including batteries may end up increasing the overall cost of fresh water. It is therefore of paramount importance to consider the overall cost-effectiveness of the PV-RO system when designing the desalination plant. Recent work by the authors has generalized the steady-operation model of RO systems to hourly adjusted power dispatch via a proportional-derivative (PD) controller that depends on the state of charge (SOC) of the battery; however, the operating conditions, namely pressure and flow for a given power dispatch, were only empirically selected. This paper considers a multi-level optimization model for PV-RO systems with battery storage, with a “sub-loop” optimization of the feed pressure and flow given the power dispatch for a fixed RO system configuration, as well as a “top-level” optimization where the system configuration itself is adjusted by the design variables. The effect of the sub-loop optimization is assessed by comparing the obtained cost of fresh water with the previous empirically adjusted system for locations and weather conditions near the city of Hurghada on the Red Sea.
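
The SOC-dependent PD dispatch idea can be sketched as follows (gains, target SOC, battery size, and the crude solar profile are invented values, not the authors' tuned controller):

```python
def pd_dispatch(soc_trace, target_soc=0.6, kp=2.0, kd=0.5, base_kw=4.0):
    """Return an RO power setpoint (kW) from the battery SOC history."""
    err = soc_trace[-1] - target_soc
    derr = soc_trace[-1] - soc_trace[-2] if len(soc_trace) > 1 else 0.0
    return max(0.0, base_kw + kp * err + kd * derr)

soc_trace = [0.50]
for hour in range(24):
    solar_kw = max(0.0, 6.0 * (1 - abs(hour - 12) / 6))  # crude solar curve
    load_kw = pd_dispatch(soc_trace)                     # RO power setpoint
    soc = soc_trace[-1] + (solar_kw - load_kw) / 40.0    # 40 kWh battery
    soc_trace.append(min(1.0, max(0.0, soc)))
print([round(s, 2) for s in soc_trace])
```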

2014;():V02AT03A022. doi:10.1115/DETC2014-34775.

The variable and uncertain nature of wind generation presents a new concern to power system operators. One of the biggest concerns associated with integrating a large amount of wind power into the grid is the ability to handle large ramps in wind power output. Large ramps can significantly influence system economics and reliability, on which power system operators place primary emphasis. The Wind Forecasting Improvement Project (WFIP) was performed to improve wind power forecasts and determine the value of these improvements to grid operators. This paper evaluates the performance of improved short-term wind power ramp forecasting. The study is performed for the Electric Reliability Council of Texas (ERCOT) by comparing the experimental WFIP forecast to the current short-term wind power forecast (STWPF). Four types of significant wind power ramps are employed in the study; these are based on the power change magnitude, direction, and duration. The swinging door algorithm is adopted to extract ramp events from actual and forecasted wind power time series. The results show that the experimental short-term wind power forecasts improve the accuracy of the wind power ramp forecasting, especially during the summer.
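
A compact sketch of the swinging door segmentation follows (the tolerance eps and the toy power series are illustrative): a new segment starts whenever no line within a band of half-width eps around the current pivot can cover the incoming point.

```python
def swinging_door(series, eps):
    """Return indices that start each linear segment (candidate ramps)."""
    starts = [0]
    up, low = float("inf"), float("-inf")
    for i in range(1, len(series)):
        dt = i - starts[-1]
        up = min(up, (series[i] + eps - series[starts[-1]]) / dt)
        low = max(low, (series[i] - eps - series[starts[-1]]) / dt)
        if low > up:                  # the "doors" have crossed
            starts.append(i)          # open a new segment at this point
            up, low = float("inf"), float("-inf")
    return starts

power = [0, 1, 2, 3, 10, 18, 25, 25, 24, 12, 2, 1]   # toy wind power (MW)
print(swinging_door(power, eps=1.5))                 # segment start indices
```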

2014;():V02AT03A023. doi:10.1115/DETC2014-34970.

Traditionally viewed as mere energy consumers, buildings have in recent years adapted, capitalizing on smart grid technologies and distributed energy resources not only to use energy efficiently but also to output energy. This has led to the development of net-zero energy buildings, a concept which encapsulates the synergy of energy-efficient buildings, smart grids, and renewable energy utilization to reach a balanced energy budget over an annual cycle. This work looks to further expand on this idea, moving beyond individual buildings and considering net-zero at the community scale. We hypothesize that applying net-zero concepts to building communities, also known as building clusters, instead of individual buildings will result in cost-effective building systems that are, in turn, resilient to power disruption. To this end, this paper develops an intelligent energy optimization algorithm for demand-side energy management, taking into account a multitude of factors affecting cost, including comfort, energy price, the Heating, Ventilation, and Air Conditioning (HVAC) system, energy storage, weather, and on-site renewable resources. A bi-level operation decision framework is presented to study the energy tradeoffs within the building cluster, with individual building energy optimization on one level and overall net-zero energy optimization handled on the next level. The experimental results demonstrate that the proposed approach is capable of significantly shifting demand and, when viable, reducing the total energy demand within net-zero building clusters. Furthermore, the optimization framework is capable of deriving Pareto solutions for the cluster which provide valuable insight for determining suitable energy strategies.

Topics: Optimization
2014;():V02AT03A024. doi:10.1115/DETC2014-35032.

Large-scale desalination plants are complex systems with many inter-disciplinary interactions and several levels of sub-system hierarchy. Advanced complex-systems design tools have been shown to have a positive impact on design in the aerospace and automotive industries, but have generally not been used in the design of water systems. This work presents a multi-disciplinary design optimization approach to desalination system design that minimizes the total water production cost of a 30,000 m3/day capacity reverse osmosis plant situated in the Middle East, with a focus on comparing monolithic with distributed optimization architectures. A hierarchical multi-disciplinary model is constructed to capture the entire system’s functional components and subsystem interactions. Three multi-disciplinary design optimization (MDO) architectures are then compared to find the optimal plant design that minimizes total water cost: the monolithic multidisciplinary feasible (MDF) and individual discipline feasible (IDF) architectures, and the distributed analytical target cascading (ATC) architecture. The results demonstrate that the MDF architecture was the most efficient for finding the optimal design, while a distributed MDO approach such as analytical target cascading is also a suitable approach for the optimal design of desalination plants, although optimization performance may depend on initial conditions.

2014;():V02AT03A025. doi:10.1115/DETC2014-35038.

Photovoltaic reverse osmosis (PVRO) systems can provide a viable clean water source for many remote communities. To be cost-effective, PVRO systems need to be custom-tailored to the local water demand, solar insolation, and water characteristics. Designing a custom system composed of modular components is not simple due to the large number of design choices and the variations in sunlight and demand. This paper presents a modular design architecture which, when implemented on a low-cost PC, would enable users to configure systems from inventories of modular components. The method uses a hierarchy of filters, or design rules, which can be provided in the form of an expert system, to limit the design space. The architecture then configures a system from the reduced design space using a genetic algorithm to minimize the system lifetime cost subject to system constraints. The genetic algorithm uses a detailed cost model and a physics-based PVRO system model that determines the ability of the system to meet demand. Determining the ability to meet demand is challenging due to variations in water demand and solar radiation. Here, the community’s historical water demand, solar radiation history, and PVRO system physics are used in a Markov model to quantify the ability of a system to meet demand, expressed as the loss-of-water probability (LOWP). Case studies demonstrate the approach and the cost-reliability trade-off for community-scale PVRO systems. In addition, long-duration simulations are used to demonstrate that the Markov model appropriately captures the uncertainty.
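
A toy version of the Markov reliability idea: simulate sunny/cloudy days with a two-state chain, track stored water, and estimate the LOWP. Transition probabilities, production rates, and demand are invented numbers, not the paper's model:

```python
import random

def simulate_lowp(days=100_000, seed=6):
    """Estimate loss-of-water probability under a two-state weather chain."""
    rng = random.Random(seed)
    p_stay = {"sunny": 0.8, "cloudy": 0.6}   # Markov transition structure
    produce = {"sunny": 1.3, "cloudy": 0.4}  # m3/day from the PVRO unit
    demand, tank, capacity, shortfalls = 1.0, 2.0, 5.0, 0
    state = "sunny"
    for _ in range(days):
        if rng.random() > p_stay[state]:
            state = "cloudy" if state == "sunny" else "sunny"
        tank = min(capacity, tank + produce[state]) - demand
        if tank < 0:                          # demand not met today
            shortfalls, tank = shortfalls + 1, 0.0
    return shortfalls / days

print("estimated LOWP:", round(simulate_lowp(), 3))
```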

2014;():V02AT03A026. doi:10.1115/DETC2014-35176.

This paper provides justification for solar-powered electrodialysis desalination systems for rural Indian villages. It is estimated that 11% of India’s 800 million people living in rural areas do not have access to an improved water source. If the source’s quality with regard to biological, chemical, or physical contaminants is also considered, this percentage is even higher. User interviews conducted by the authors and reported in the literature reveal that users judge the quality of their water source based on its aesthetic quality (taste, odor, and temperature). Seventy-three percent of Indian villages rely on groundwater as their primary drinking supply. However, saline groundwater underlies approximately 60% of the land area in India. Desalination is necessary in order to improve the aesthetics of this water (by reducing salinity below the taste threshold) and remove contaminants that cause health risks.

Both technical and socioeconomic factors were considered to identify the critical design requirements for inland water desalination in India. An off-grid power system is among those requirements due to the lack of grid access or intermittent supply, problems faced by half of Indian villages. The same regions of India that have high groundwater salinity also have the advantage of high solar potential, making solar power a primary candidate. Within the salinity range of groundwater found in inland India, electrodialysis would substantially reduce the energy consumed in desalination compared to reverse osmosis, the standard technology used for village-level systems. This energy savings leads to a smaller solar array being required for electrodialysis systems, translating to reduced capital costs.

Topics: Solar energy
2014;():V02AT03A027. doi:10.1115/DETC2014-35215.

Wind turbine tower design looks primarily at the structural integrity and durability of the tower. Optimization techniques are sometimes employed to maximize the loading capability while reducing material use and cost. Still, the tower is a dynamic part of a complex wind energy conversion system. During system operation, the tower is excited and sways back and forth. This undesirable movement increases cyclical loading on the tower and drivetrain components. To minimize this motion the tower frequency must be offset from the natural frequency of other components. Hence, it is necessary to look at the relationships that exist between the tower and other wind turbine components, such as the rotor, nacelle, and foundation. In addition, tradeoffs between cost, structural performance, and environmental impact can be examined to guide the designer toward a truly sustainable alternative to fossil fuels. Ultimately, an optimal design technique can be implemented and used to automate tower design. This work will introduce the analytical model and decision-making architecture that can be used to incorporate greater considerations in future studies. In this paper, nine wind turbine tower designs with different materials and geometries are analyzed using Finite Element Analysis (FEA). The optimal tower design is selected using a multi-level variation of the Hypothetical Equivalents and Inequivalents Method (HEIM). Using this analysis, a steel tower with variable thickness has been chosen. The findings reaffirm that steel is a favorable choice for turbine tower construction as it performs well on environmental, performance, and cost objectives. The method proposed in this work can be expanded to examine additional design goals and present a higher fidelity model of the wind turbine tower system in future work.

40th Design Automation Conference: Design for Market Systems

2014;():V02AT03A028. doi:10.1115/DETC2014-34368.

The market is a complex system with many different stakeholders and interactions. A number of decisions within this system affect the design of new products, not only from design teams but also from consumers, producers, and policy-makers. Market systems studies have shown how profit-optimal producer decisions regarding product design and pricing can influence a number of different factors, including the quality, environmental impact, production costs, and ultimately consumer demand for the product. This study models the ways that policies and consumer demand combine in a market systems framework to influence optimal product design and, in particular, product quality and environmental sustainability. Implementing this model for the design of a mobile phone case shows how different environmental impact assessment methods, levels of taxation, and factors introduced into the consumer decision-making process influence producer profits and overall environmental impacts. This demonstrates how different types of policies might be evaluated for their effectiveness in achieving economic success for the producer and reduced environmental impacts for society; a “win-win” scenario was uncovered in the case of the mobile phone.

2014;():V02AT03A029. doi:10.1115/DETC2014-34493.

Consideration set formation using non-compensatory screening rules is a vital component of real purchasing decisions, with decades of experimental validation. Marketers have recently developed statistical methods that can estimate quantitative choice models that include consideration set formation via non-compensatory screening rules. But is capturing consideration within models of choice important for design? This paper reports on a simulation study of vehicle portfolio design in which households screen over vehicle body style, built to explore the importance of capturing consideration rules for optimal design. We generate synthetic market share data, fit a variety of discrete choice models to these data, and then optimize design decisions using the estimated models. Model predictive power, design “error”, and profitability relative to ideal profits are compared as the amount of available market data increases. We find that even when estimated compensatory models provide relatively good predictive accuracy, they can lead to sub-optimal design decisions when the population uses consideration behavior; that convergence of compensatory models to non-compensatory behavior is likely to require unrealistic amounts of data; and that modeling heterogeneity in non-compensatory screening is more valuable than heterogeneity in compensatory trade-offs. This supports the claim that designers should carefully identify consideration behaviors before optimizing product portfolios. We also find that higher model predictive power does not necessarily imply better design decisions; that is, different model forms can provide “descriptive” rather than “predictive” information that is useful for design.
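
A minimal simulation of consideration-set choice of the kind studied here (utilities are invented, and body-style screening is applied to a random 70% of households):

```python
import math
import random

def choose(products, screened_styles, betas, rng):
    """Screen non-compensatorily, then pick among survivors via logit."""
    consider = [p for p in products if p["style"] not in screened_styles]
    if not consider:
        return None
    utils = [betas["price"] * p["price"] + betas["mpg"] * p["mpg"]
             for p in consider]
    weights = [math.exp(u) for u in utils]
    return rng.choices(consider, weights=weights)[0]["name"]

products = [{"name": "sedan A", "style": "sedan", "price": 2.4, "mpg": 3.2},
            {"name": "SUV B",   "style": "suv",   "price": 3.1, "mpg": 2.4},
            {"name": "truck C", "style": "truck", "price": 3.0, "mpg": 2.0}]
rng = random.Random(9)
shares = {}
for _ in range(10_000):
    screened = {"truck"} if rng.random() < 0.7 else set()  # heterogeneity
    pick = choose(products, screened, {"price": -1.0, "mpg": 1.5}, rng)
    shares[pick] = shares.get(pick, 0) + 1
print(shares)   # synthetic market shares under screening behavior
```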

2014;():V02AT03A030. doi:10.1115/DETC2014-34643.

Mapua Institute of Technology has been constantly engaged in providing free, renewable energy to rural and underprivileged communities. Guided by the mission and vision of the School of Mechanical and Manufacturing Engineering and of the Office of Social Orientation and Community Involvement, the school has implemented several renewable energy activities. This paper showcases 8 different projects: 6 hydropower plant projects, 1 human kinetic energy harvesting demonstration facility, and 1 wind turbine project. The implemented projects are presented briefly, with emphasis on their different locations, local cultural settings, and the different experiences encountered. The paper also shares how students have grown from being participants to autonomous implementers of renewable energy projects for communities.

2014;():V02AT03A031. doi:10.1115/DETC2014-34699.

Product design decision makers are frequently challenged by the difficulty of ensuring compatibility of parts sourced from various suppliers, including the services that the product is designed to integrate. The crux of the difficulty is in analyzing the ability of sourced parts (and services) to interoperate under uncertainty, and the impact of such compatibility on the overall marketing objectives. Therefore, the decisions in a design for market system problem can be closely related to the considerations along both the upstream (e.g., suppliers) and downstream (e.g., service providers and customers) market systems. This paper fills a gap in the existing research by exploring a design decision method that integrates upstream and downstream market systems with interoperability considerations. The proposed method is based on a mathematical model and metric for interoperability presented for the first time in the literature and particularly in the context of engineering design. The design decision framework is demonstrated using three examples: a mechanical design tolerance problem, a power tool design problem, and a tablet computer design problem. The mechanical design problem demonstrates how the interoperability metric can be used as a new way of analyzing tolerances in mechanical systems. The power tool design example involves an integration of upstream and downstream market systems for design selection. The tablet computer design selection problem considers not only the upstream suppliers but also customers and digital service providers along the downstream market system.

Topics: Design
2014;():V02AT03A032. doi:10.1115/DETC2014-34745.

This paper explores opportunities for reductions in lifecycle greenhouse gas (GHG) emissions through adoption of electric drive vehicles (EDVs), including hybrid, plug-in hybrid, and battery electric vehicles. EDVs generally have lower GHG emission rates during operation than similar-class conventional vehicles (CVs). However, a key observation is that GHG reductions per mile are much larger during city driving conditions than on the highway. Estimated GHG emissions are examined for city and highway driving conditions for several CV and EDV models, based on testing results from the US Environmental Protection Agency (EPA), and compared with key findings from the 2009 National Household Travel Survey (NHTS 2009). Through an empirical analysis of actual driving patterns in the U.S., this study highlights potential missed opportunities to reduce transportation GHG emissions through the allocation of incentives and/or regulations. Key findings include the significant potential to reduce the GHG emissions of taxis and delivery vehicles, as well as driving-pattern-based incentives for individual vehicle owners.

2014;():V02AT03A033. doi:10.1115/DETC2014-34790.

Conjoint analysis from marketing has been successfully integrated with engineering analysis in design for market systems. The long questionnaires needed for conjoint analysis in relatively complex design decisions can become cumbersome for human respondents. This paper presents an adaptive questionnaire generation strategy that uses active learning and allows the incorporation of engineering knowledge in order to efficiently identify designs with a high probability of being optimal. The strategy is based on viewing optimal design as a group identification problem. A running example demonstrates that a good estimation of consumer preference is not always necessary for finding the optimal design and that conjoint analysis could be configured more effectively for the specific purpose of design optimization. Extending the proposed method beyond a homogeneous preference model and noiseless user responses is also discussed.

2014;():V02AT03A034. doi:10.1115/DETC2014-35115.

About 80% of farms in India are less than five acres in size and are cultivated by farmers who use bullocks for farming operations. Even the smallest tractors available in the Indian market are too expensive and too large, and they are not designed to meet the unique requirements of these farmers. To address these needs, we have developed a proof-of-concept lightweight (350 kg) tractor in collaboration with Mahindra and Mahindra Limited, an Indian tractor manufacturer. Given the challenges of accurately predicting traction in Indian soils by applying existing terramechanics models, an alternative design approach based on the Mohr-Coulomb soil-failure criterion is presented. Analysis of the weight, power, and drawbar of existing tractors on the market, a single-wheel traction test, and a drawbar test of a proof-of-concept small tractor prototype suggest that approximately 200 kg is the maximum drawbar force that can be achieved by a 350 kg tractor of conventional design. In order to attain the higher drawbar performance of 70% of tractor weight needed for specific agricultural operations, additional design changes are required. An approach for increasing traction by adding tires is investigated and discussed. Additional research on weight distribution, dynamic drawbar testing, and tread design is suggested as future work.
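
A back-of-envelope version of the Mohr-Coulomb drawbar estimate, H = cA + W tan(phi), with illustrative soil parameters rather than measured values; the result lands near the ~200 kg figure quoted above:

```python
import math

c = 2_000.0               # soil cohesion (Pa), illustrative loose soil
phi = math.radians(30.0)  # internal friction angle, illustrative
W = 350.0 * 9.81          # vertical load from a 350 kg tractor (N)
A = 0.12                  # assumed total tire-soil contact area (m^2)

thrust = c * A + W * math.tan(phi)                   # H = c*A + W*tan(phi)
print(f"max soil thrust ~ {thrust / 9.81:.0f} kgf")  # roughly 227 kgf
```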

Topics: Weight (Mass)
2014;():V02AT03A035. doi:10.1115/DETC2014-35270.

A major barrier to consumer adoption of electric vehicles (EVs) is ‘range anxiety,’ the concern that the vehicle will run out of power at an inopportune time. Range anxiety is caused by the currently low electric-only operational range and sparse public charging station infrastructure. Range anxiety may be significantly mitigated if EV manufacturers and charging station operators work in partnership, using a cooperative business model to balance EV performance and charging station coverage. This model is in contrast to a sequential decision-making model in which manufacturers bring new EVs to the market first and charging station operators decide on charging station deployment given EV specifications and market demand. This paper proposes an integrated decision-making framework to assess the profitability of a cooperative business model based on a multi-disciplinary optimization model that combines marketing, engineering, and operations. The model is demonstrated in a case study involving battery electric vehicle design and a direct-current fast-charging station location network in the State of Michigan. The expected benefits can motivate both government and private enterprise actions.

2014;():V02AT03A036. doi:10.1115/DETC2014-35307.

To be competitive in today’s market, firms need to offer a variety of products that appeal to a diverse set of customer needs. Product line optimization provides a simple method to design for this challenge. Using a heterogeneous customer preference model allows the optimization to better explore the diversity in the market. The optimization should also consider aesthetic, engineering, manufacturing, and marketing constraints to ensure the feasibility of the final solution. However, as more constraints are added, the difficulty of the optimization increases. There is an opportunity to reduce the difficulty of the optimization by allowing the heterogeneous customer preference model to handle a subset of these constraints, termed design prohibitions. Design prohibitions include component incompatibility and dependency. This paper investigates whether design prohibitions should be handled solely in the heterogeneous customer preference model, solely in the optimization formulation, or in both. The effects of including design prohibitions in the creation of a hierarchical Bayes mixed logit model and a genetic-algorithm-based product line optimization are explored using a bicycle case study.

Topics: Design, Optimization

40th Design Automation Conference: Design for Resilience and Failure Recovery

2014;():V02AT03A037. doi:10.1115/DETC2014-34313.

A new metamodeling approach is proposed to characterize the output (response) random process of a dynamic system with random variables, excited by input random processes. The metamodel is then used to efficiently estimate the time-dependent reliability. The input random processes are decomposed using principal components or wavelets, and a few simulations are used to estimate the distributions of the decomposition coefficients. A similar decomposition is performed on the output random process. A Kriging model is then built between the input and output decomposition coefficients and is used subsequently to quantify the output random process corresponding to a realization of the input random variables and random processes. In our approach, the system input is not deterministic but random; we therefore establish a surrogate model between the input and output random processes. The quantified output random process is finally used to estimate the time-dependent reliability or probability of failure using the total probability theorem. The proposed method is illustrated with a corroding beam example.
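
The decomposition step can be sketched as follows: sample paths of a toy random process are expanded in principal components, so each path is summarized by a few coefficients (the quantities the Kriging model would then link). The scaled-random-walk process and the choice of three modes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 100)
# 200 sample paths of a toy random process (a scaled random walk)
paths = np.array([np.cumsum(rng.normal(0.0, 0.1, t.size)) for _ in range(200)])

mean = paths.mean(0)
U, s, Vt = np.linalg.svd(paths - mean, full_matrices=False)
k = 3                                # retain a few dominant modes
coeffs = (paths - mean) @ Vt[:k].T   # per-path decomposition coefficients
recon = mean + coeffs @ Vt[:k]       # reconstruction from k coefficients
err = np.linalg.norm(paths - recon) / np.linalg.norm(paths)
print(f"{k} modes, relative reconstruction error: {err:.2%}")
```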

2014;():V02AT03A038. doi:10.1115/DETC2014-34550.

This paper presents a robust design framework for developing structural sensing systems based on piezoelectric materials for failure diagnostics and prognostics. First, a detectability measure is proposed to evaluate the performance of any given sensing system under various uncertainties. The sensing system design problem can thus be formulated as maximizing the detectability of the sensing system by optimally allocating piezoelectric materials within a target structural system. Second, the formulated problem can be conveniently solved using a reliability-based robust design framework to ensure design robustness while considering the uncertainties. Two engineering case studies are employed to demonstrate the effectiveness of the design framework in developing multifunctional material sensing systems.

Commentary by Dr. Valentin Fuster
2014;():V02AT03A039. doi:10.1115/DETC2014-34552.

Lifecycle health management plays an increasingly important role in realizing the resilience of aging complex engineered systems, since it detects, diagnoses, and predicts the system-wide effects of adverse events and thereby enables a proactive approach to dealing with system failures. To address the increasing demand for high-reliability, low-cost systems, this paper presents a new platform for operational-stage system health management, referred to as Evolving Design Model Synchronization (EDMS), which enables health management of aging engineered systems by efficiently synchronizing system design models with the degrading health condition of the actual physical system in operation. A Laplace approximation approach is employed for the design model updating; it can incorporate heterogeneous operating-stage information from multiple sources to update the system design model based on information theory, thereby improving the updating accuracy compared with the traditionally used Bayesian updating methodology. The design models, synchronized over time using sensory data acquired from the system in operation, can thus reflect system health degradation through continually updated design model parameters, which enables the application of failure prognosis for system health management. A case study is used to demonstrate the efficacy of the proposed approach for system health management.
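
The Laplace approximation itself is compact: find the posterior mode, then fit a Gaussian using the curvature there. Below is a minimal one-parameter sketch with a made-up degradation model and Gaussian prior and noise; the paper's multi-source, information-theoretic updating is far richer.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)

    # Hypothetical degrading parameter theta; sensory data are noisy
    # readings of a response model g(theta) = exp(-theta).
    theta_true = 0.8
    data = np.exp(-theta_true) + 0.05 * rng.normal(size=20)

    def neg_log_post(theta):
        theta = np.ravel(theta)[0]                       # accept scalar or 1-vector
        prior = 0.5 * ((theta - 0.5) / 0.5) ** 2         # Gaussian prior
        lik = 0.5 * np.sum(((data - np.exp(-theta)) / 0.05) ** 2)
        return prior + lik

    # Laplace approximation: MAP point plus curvature-based Gaussian.
    map_theta = minimize(neg_log_post, x0=np.array([0.5])).x[0]
    h = 1e-4                                             # finite-difference Hessian
    hess = (neg_log_post(map_theta + h) - 2 * neg_log_post(map_theta)
            + neg_log_post(map_theta - h)) / h**2
    print(f"updated theta ~ N({map_theta:.3f}, {1.0 / hess:.6f})")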

Commentary by Dr. Valentin Fuster
2014;():V02AT03A040. doi:10.1115/DETC2014-34558.

The concept of engineering resilience has received widespread attention from academia as well as industry because it offers a new way of thinking about how systems withstand disruptions and recover properly from them. Although the concept of resilience has been explored in diverse disciplines, only a few studies focus on how to quantitatively measure engineering resilience. This paper explores the gap between quantitative and qualitative assessment of engineering resilience in the domain of complex engineered systems design. A conceptual framework is first proposed for modeling engineering resilience, and a Bayesian network is then employed as a quantitative tool for the assessment and analysis of engineering resilience for complex systems. A case study of an electric motor supply chain is employed to demonstrate the proposed approach. The proposed resilience quantification and analysis approach using Bayesian networks would empower system designers to better grasp the weaknesses and strengths of their systems against disruptions induced by adverse failure events.
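
A Bayesian network of even three nodes already supports the two directions such an analysis exploits: forward prediction and diagnostic reasoning. The toy chain below (Disruption -> Supply -> Delivery) is hand-enumerated; the structure and probabilities are assumptions, not the paper's model.

    # Toy three-node chain for an electric-motor supply chain.
    p_disrupt = 0.1                      # P(disruption)
    p_supply = {True: 0.3, False: 0.95}  # P(supply OK | disruption?)
    p_deliver = {True: 0.9, False: 0.2}  # P(on-time delivery | supply OK?)

    # Forward prediction: marginal probability of on-time delivery.
    p_ok = 0.0
    for d in (True, False):
        pd = p_disrupt if d else 1 - p_disrupt
        for s in (True, False):
            ps = p_supply[d] if s else 1 - p_supply[d]
            p_ok += pd * ps * p_deliver[s]
    print("P(on-time delivery) =", round(p_ok, 4))

    # Diagnosis by Bayes' rule: how likely was a disruption, given failure?
    p_fail_given_d = sum((p_supply[True] if s else 1 - p_supply[True])
                         * (1 - p_deliver[s]) for s in (True, False))
    print("P(disruption | late) =", round(p_disrupt * p_fail_given_d / (1 - p_ok), 4))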

Topics: Design, Modeling, Resilience
Commentary by Dr. Valentin Fuster
2014;():V02AT03A041. doi:10.1115/DETC2014-34560.

Safe and reliable operation of lithium-ion batteries as major energy storage devices is of vital importance, as unexpected battery failures could result in enormous economic and societal losses. Accurate estimation of the state-of-charge (SoC) and state-of-health (SoH) of an operating battery system, a critical task for battery health management, depends greatly on the validity and generalizability of battery models. Due to the variability and uncertainties involved in battery design, manufacturing, and operation, developing a generally applicable physical battery model is a significant challenge. To eliminate the dependence of SoC and SoH estimation on battery physical models, this paper presents a generic self-cognizant dynamic system approach for lithium-ion battery health management, which integrates an artificial neural network (ANN) with a dual extended Kalman filter (DEKF) algorithm. The ANN is trained offline to model the battery terminal voltages used by the DEKF. With the trained ANN, the DEKF algorithm is then employed online for SoC and SoH estimation, where voltage outputs from the trained ANN model replace the battery physical model in the DEKF state-space equations. Experimental results demonstrate the effectiveness of the developed self-cognizant dynamic system approach for battery health management.
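
The sketch below shows the core coupling: an extended Kalman filter whose measurement function is the trained network. A fixed polynomial stands in for the ANN, the state equation is coulomb counting, and only the SoC filter of the dual pair is shown; the paper's DEKF additionally tracks capacity fade for SoH.

    import numpy as np

    rng = np.random.default_rng(4)

    def ann_voltage(soc):
        # Stand-in for the trained ANN's SoC-to-terminal-voltage map.
        return 3.0 + 1.2 * soc - 0.4 * soc**2

    capacity = 3600.0                      # amp-seconds (1 Ah), assumed known here
    dt, current = 1.0, 1.0                 # 1 A constant discharge

    soc_hat, P, Q, R = 0.9, 1e-2, 1e-7, 1e-3
    soc_true = 1.0
    for _ in range(600):
        soc_true -= current * dt / capacity
        z = ann_voltage(soc_true) + rng.normal(0.0, np.sqrt(R))  # measured voltage
        # Predict: coulomb counting.
        soc_hat -= current * dt / capacity
        P += Q
        # Update, with a finite-difference Jacobian of the ANN output.
        h = 1e-5
        H = (ann_voltage(soc_hat + h) - ann_voltage(soc_hat - h)) / (2 * h)
        K = P * H / (H * P * H + R)
        soc_hat += K * (z - ann_voltage(soc_hat))
        P *= 1 - K * H
    print(f"true SoC {soc_true:.3f}, EKF estimate {soc_hat:.3f}")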

Commentary by Dr. Valentin Fuster
2014;():V02AT03A042. doi:10.1115/DETC2014-34636.

Real-time monitoring systems have been developed for machine tools to investigate time-dependent cutting conditions, detect instantaneous events, and estimate the life of cutting tools and of the machine itself. An Energy-based Reliability Model (ERM) has been developed for real-time monitoring of cutting conditions. A standardized inspection process was defined, and the two most sensitive signals, vibration and temperature increment, are collected to monitor the accumulation of dissipated energy during machining. The ERM then computes the normalized accumulated dissipated energy in place of evaluating the surface quality of the workpiece at the end of each machining pass. This paper focuses on the implementation of the ERM in the turning process on a lathe. The experimental results showed that the dissipated energy grows linearly with the volume of material removed from the workpiece. The ERM built from experimental results under one condition was then used to estimate turning performance under different experimental conditions, and similar trends of dissipated energy versus volume removal were found. The ERM can therefore be used to estimate a reliable replacement time for cutting tools in machine tools.
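
The bookkeeping the ERM relies on can be sketched in a few lines: accumulate an energy proxy from the two monitored signals, confirm the linear trend against volume removed, and extrapolate to an end-of-life threshold. The signals, the energy proxy, and the threshold below are all synthetic assumptions.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic per-pass vibration RMS and temperature rise during turning.
    n_pass = 50
    volume_removed = np.cumsum(np.full(n_pass, 2.0))          # cm^3 per pass
    vib_rms = 0.5 + 0.01 * np.arange(n_pass) + 0.02 * rng.normal(size=n_pass)
    temp_rise = 5.0 + 0.05 * np.arange(n_pass) + 0.1 * rng.normal(size=n_pass)

    # Accumulated dissipated energy, normalized to the first pass
    # (an assumed proxy for the ERM's energy accounting).
    E = np.cumsum(vib_rms**2 + 0.1 * temp_rise)
    E_norm = E / E[0]

    # Linear in volume removed, per the reported trend; fit and extrapolate
    # to a hypothetical end-of-life energy threshold.
    slope, intercept = np.polyfit(volume_removed, E_norm, 1)
    E_limit = 80.0
    print(f"replace tool after ~{(E_limit - intercept) / slope:.0f} cm^3 removed")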

Commentary by Dr. Valentin Fuster
2014;():V02AT03A043. doi:10.1115/DETC2014-34837.

One of the most important components in a power generator is the stator winding, since an unexpected failure of a water-absorbed winding leads to plant shutdown and substantial loss. Typically, the stator winding is maintained with a time- or usage-based strategy, which can result in substantial waste of remaining life, high maintenance cost, and low plant availability. The field of prognostics and health management offers general diagnostic and prognostic techniques to precisely assess the health condition and robustly predict the remaining useful life of an engineered system, with the aim of addressing these deficiencies. This research develops a health reasoning system for power generator stator windings, based on physical and statistical analysis of water absorption and on capacitance measurements of the winding insulation. In particular, a new health measure, the Directional Mahalanobis Distance (DMD), is proposed to quantify the health condition. In addition, an empirical health grade system based on the proposed DMD measure is developed from the maintenance history. The health reasoning system is validated using eight years of field data from eight generators, each containing forty-two windings.
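
The classical Mahalanobis distance is straightforward to compute from healthy-winding capacitance data; the sketch below adds one plausible "directional" modification, ignoring deviations opposite to the water-absorption direction. The data are synthetic, and the exact DMD definition in the paper may differ.

    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic capacitance features for healthy windings (two frequencies).
    healthy = rng.multivariate_normal([100.0, 80.0], [[4.0, 1.5], [1.5, 2.0]], 200)
    mu = healthy.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(healthy.T))

    def directional_md(x):
        # Water absorption raises capacitance, so deviations below the
        # healthy mean are clipped to zero (assumed "directional" rule).
        d = np.maximum(x - mu, 0.0)
        return float(np.sqrt(d @ cov_inv @ d))

    print("healthy sample:", round(directional_md(np.array([101.0, 81.0])), 2))
    print("wet winding:   ", round(directional_md(np.array([112.0, 88.0])), 2))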

Commentary by Dr. Valentin Fuster
2014;():V02AT03A044. doi:10.1115/DETC2014-35005.

Design of engineering resilient systems is an emerging research field. The contributions of this paper are to i) define engineering resilience on the basis of resilience concepts from different fields; ii) propose engineering recoverability as a new component in the framework for designing engineering resilient systems; and iii) introduce a general mathematical formulation to quantify engineering resilience. A case study of a CNC machining system is used to demonstrate the value of designing engineering resilient systems.
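
One common way to make such a quantification concrete (not necessarily the paper's own formulation) is a performance-integral metric: the ratio of delivered to nominal performance across a window containing both the disruption and the recovery that the proposed recoverability component governs.

    import numpy as np

    # Performance profile over a 10-unit mission window with a disruption
    # at t = 3 and a linear recovery from t = 5 to t = 7 (all assumed).
    t = np.linspace(0.0, 10.0, 1001)
    perf = np.ones_like(t)
    perf[(t >= 3.0) & (t < 5.0)] = 0.4               # degraded operation
    ramp = (t >= 5.0) & (t < 7.0)
    perf[ramp] = 0.4 + 0.3 * (t[ramp] - 5.0)         # recovery back to nominal

    # Uniform grid, so the mean equals the normalized performance integral.
    print(f"resilience over the mission window: {perf.mean():.3f}")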

Topics: Design, Resilience
Commentary by Dr. Valentin Fuster

40th Design Automation Conference: Design for the Developing World

2014;():V02AT03A045. doi:10.1115/DETC2014-34150.

The question of how to effectively design products for consumers in the developing world has been widely debated. Several methodologies have been developed to address this issue, focusing on human-centered and community-centered methods, but few are rooted in market-centered approaches. Recent advances in market-centered design from lean startup methodologies hold promise for the development of new methods that allow effective product design for consumers in the developing world. This paper contributes a method by which consumer-level products can be designed to effectively supply the under-served markets of the developing world with innovative and sustainable solutions. Utilizing an iterative method based on three fundamental hypotheses, the Lean Design for Developing World Method (LDW) seeks to provide products that are economically viable, have strong market growth potential, and have a net positive impact on customers and their communities.

Commentary by Dr. Valentin Fuster
2014;():V02AT03A046. doi:10.1115/DETC2014-34687.

The development of energy services for the 40% of the world’s population currently living in energy poverty is a challenging design problem. There are competing and often conflicting objectives among stakeholders, from global to user viewpoints, and the confounding effects of real-world performance, rebound, and stacking of technologies make the determination of optimal strategies for off-grid village energy complicated. Yet there are holistic and lasting solutions that can adequately address the technical, social, economic, and environmental constraints and satisfy the goals of all stakeholders. These solutions can be better identified by systematically considering five major qualitative and quantitative outcomes: 1) energy access and efficiency, 2) climate benefits, 3) health impacts, 4) upfront and recurring economic and opportunity costs, and 5) quality of life for the user in terms of several metrics. Beginning with a comprehensive survey of energy uses in a village and of current and potential technological options to meet those needs, this article proposes a methodology to identify and quantify these five outcomes for various intervention scenarios. These evaluations can provide a better understanding of the constraints, trade-offs, sensitivity to various factors, and conditions under which certain designs are appropriate for the village energy system. Ultimately, a balance of all five objectives is most likely to result in equitable, user-driven, and sustainable solutions.

Commentary by Dr. Valentin Fuster
2014;():V02AT03A047. doi:10.1115/DETC2014-35357.

There are currently 1.4 billion people in the world living on less than $1.25 a day. Many engineers have designed products intended to alleviate the poverty faced by these individuals, but most of these products have failed to have the desired impact. This is largely because engineers do not clearly understand the needs of people in poverty, which is understandable, as needs are particularly hard to determine in this context: the engineer and the resource-poor individuals are usually from different cultures, the engineer typically has no personal experience of life in poverty, and the engineer has limited access to suitable market surrogates for testing and validation. This paper presents a method for determining the needs of resource-poor individuals in the developing world. The method is organized into four steps to be completed within three different stages of need finding. Engineers and designers can follow these steps to more accurately determine the needs of resource-poor individuals as they design a product. The paper also includes examples of the method being used to determine customer needs for products in India and Peru.

Commentary by Dr. Valentin Fuster
2014;():V02AT03A048. doi:10.1115/DETC2014-35457.

Many household electronic devices, such as flashlights, stereos, and radios, require AA, AAA, C, and D size batteries. In remote areas of the world that lack access to grid electricity, these are often disposable batteries. In parts of the globe, disposable batteries can account for over 50% of household energy expenditures, with 25 or more batteries disposed of per person per year, which amounts to more than 25,000 batteries annually for a village of 1,000 people. Solutions to this problem can address both economic and environmental concerns. Replacing disposable batteries with rechargeable batteries maintained by a local entrepreneur is one business-driven method to reduce environmental waste and household energy expenditures. This study evaluates technical options for providing rechargeable batteries to a decentralized population and introduces a prototype portable charging kit that addresses the techno-economic requirements of charging batteries, delivering batteries to consumers at a reasonable cost, providing a profit margin for local entrepreneurs, and allowing for portability during travel between villages or refugee camps. The unit includes a solar PV power source, a lead-acid battery for intermediate energy storage, a battery charger for single-cell batteries, a charge controller to manage power flow, and a protective suitcase to house the equipment.

Commentary by Dr. Valentin Fuster
2014;():V02AT03A049. doi:10.1115/DETC2014-35463.

Water-lifting technologies in rural areas of the developing world have enormous potential to stimulate agricultural and economic growth. The treadle pump, a human-powered low-cost pump designed for irrigation in developing countries, can help farmers maximize the financial return on small plots of land by ending their dependence on rain-fed irrigation. The treadle pump uses suction pistons to draw groundwater to the surface by way of a foot-powered treadle attached to each piston. Current treadle pump designs lift water from depths of up to 7 meters at flow rates of 1–5 liters per second. This work seeks to optimize the design of the Dekhi-style treadle pump, which has gained significant popularity due to its simplicity. A mathematical model of the working fluid and the treadle pump structure is developed in this study, and deterministic optimization methods are then employed to maximize the flow rate of the pumped groundwater, maximize the lift height, and minimize the volume of material used in manufacturing. Design variables for the optimization include the dimensions of the pump, the well depth, and the speeds of various parts of the system. The solutions are subject to constraints on the geometry of the system, the bending stress in the treadles, and ergonomic factors. Findings indicate that significant technical improvements can be made to the standard Dekhi design, such as increasing the size of the pump cylinders and hose while maintaining a standard total treadle length. These improvements could allow the Dekhi pump to be implemented in new regions and benefit additional rural farmers in the developing world.
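
A deterministic optimization of the kind described reduces, in miniature, to a constrained maximization such as the one below: maximize flow rate over cylinder diameter and treadle frequency subject to a human-power limit. The physical model and every coefficient are illustrative stand-ins, not the paper's pump model.

    import numpy as np
    from scipy.optimize import minimize

    RHO, G, LIFT = 1000.0, 9.81, 7.0      # water density, gravity, lift height (m)
    STROKE, P_HUMAN = 0.25, 75.0          # stroke length (m), sustainable power (W)

    def flow_rate(x):                     # swept volume times treadle frequency
        d, f = x
        return (np.pi / 4) * d**2 * STROKE * f

    def power_required(x):                # hydraulic power for the lift
        return RHO * G * LIFT * flow_rate(x)

    res = minimize(lambda x: -flow_rate(x),          # maximize flow
                   x0=np.array([0.08, 1.0]),
                   bounds=[(0.04, 0.20), (0.5, 2.0)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: P_HUMAN - power_required(x)}])
    d_opt, f_opt = res.x
    print(f"d = {d_opt * 100:.1f} cm, f = {f_opt:.2f} Hz, "
          f"Q = {flow_rate(res.x) * 1000:.2f} L/s")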

Commentary by Dr. Valentin Fuster

40th Design Automation Conference: Design of Complex Systems

2014;():V02AT03A050. doi:10.1115/DETC2014-34045.

Conceptual design of multidisciplinary systems begins with a description of requirements and proceeds to a solution at a high abstraction level. A systematic and rigorous approach is required to evaluate complex systems, and this can be achieved by mapping the interactions between disciplines. Research has shown that the use of geometry in the early stages acts as an enabler for high-fidelity analyses, since the required information can be extracted from the model. In this paper, Knowledge Based Engineering is used with the aim of managing the added complexity, as it supports design automation and reuse. The article describes a configuration tool that allows for quick generation of train geometry using High Level CAD Templates. The tool was created as part of a research project whose primary objective is the development of a robust framework for a Multidisciplinary Design Optimization process that can support the design of high-speed trains.

Commentary by Dr. Valentin Fuster
2014;():V02AT03A051. doi:10.1115/DETC2014-34407.

This paper presents a new methodology for modeling complex engineered systems using complex networks for failure analysis. Many existing network-based modeling approaches for complex engineered systems “abstract away” the functional details to focus on the topological configuration of the system and thus do not provide adequate insight into system behavior. To model failures more adequately, we present two types of network representations of a complex engineered system: a uni-partite architectural network and a weighted bi-partite behavioral network. Whereas the architectural network describes physical inter-connectivity, the behavioral network represents the interactions between functions and variables in mathematical models of the system and its constituent components. The levels of abstraction for nodes in the two network types afford the evaluation of failures involving morphology or behavior, respectively. The approach is demonstrated on a drivetrain model, and the architectural and behavioral networks are compared with respect to the types of faults each can describe. We conclude with considerations that should be employed when modeling complex engineered systems as networks for the purpose of failure analysis.
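
The behavioral network is the less familiar of the two representations; a weighted bi-partite graph with equation nodes on one side and variable nodes on the other can be built in a few lines. The drivetrain equations and weights below are invented placeholders.

    import networkx as nx

    B = nx.Graph()
    equations = ["motor_torque_eq", "shaft_dynamics_eq", "wheel_speed_eq"]
    variables = ["current", "torque", "shaft_speed", "wheel_speed"]
    B.add_nodes_from(equations, bipartite=0)
    B.add_nodes_from(variables, bipartite=1)
    B.add_weighted_edges_from([
        ("motor_torque_eq", "current", 1.0),
        ("motor_torque_eq", "torque", 1.0),
        ("shaft_dynamics_eq", "torque", 0.8),
        ("shaft_dynamics_eq", "shaft_speed", 1.0),
        ("wheel_speed_eq", "shaft_speed", 0.9),
        ("wheel_speed_eq", "wheel_speed", 1.0),
    ])

    # A fault in a variable touches every equation that uses it; degree is
    # a crude first screen for how widely a failure can propagate.
    for v in variables:
        print(v, "appears in", B.degree(v), "equations")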

Commentary by Dr. Valentin Fuster
2014;():V02AT03A052. doi:10.1115/DETC2014-34503.

Design decision-making involves trade-offs among many design variables and attributes, which can be difficult to model and capture in complex engineered systems. To choose the best design, the decision-maker is often required to analyze many different combinations of these variables and attributes and to process the information internally. Trade Space Exploration (TSE) tools, including interactive and multi-dimensional data visualization, can aid in this process and provide designers with a means to make better decisions, particularly during the design of complex engineered systems. In this paper, we investigate the use of TSE tools to support decision-makers applying a Value-Driven Design (VDD) approach to complex engineered systems. Because a VDD approach necessitates a rethinking of trade space exploration, we examine the different uses of TSE in a VDD context and map a traditional TSE process into a value-based trade environment to provide greater decision support to a design team during complex systems design. The research leverages existing TSE paradigms and multi-dimensional data visualization tools to identify optimal designs using a value function for the system; the feasibility of using these TSE tools to help formulate value functions is also explored. A satellite design example is used to demonstrate the differences between a VDD approach to designing complex engineered systems and a multi-objective approach that captures the Pareto frontier. Ongoing and future work is also discussed.
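
The contrast between the two views fits in a short script: a multi-objective pass keeps the whole Pareto frontier, while a value function collapses the trade space to a single preferred design. The satellite attributes and the value function below are assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical trade space: 500 designs, two design variables driving
    # a cost attribute ($M) and a normalized coverage attribute.
    x = rng.uniform(0.0, 1.0, size=(500, 2))
    cost = 50 + 120 * x[:, 0] + 40 * x[:, 1] ** 2
    coverage = 0.9 * x[:, 0] + 0.5 * np.sqrt(x[:, 1])

    # Multi-objective view: Pareto frontier (minimize cost, maximize coverage).
    pareto = [i for i in range(500)
              if not np.any((cost < cost[i]) & (coverage > coverage[i]))]

    # VDD view: one value function picks one design instead of a frontier.
    value = 100 * coverage - 0.3 * cost              # assumed exchange rate
    best = int(np.argmax(value))
    print(f"{len(pareto)} Pareto designs; value-optimal design #{best} "
          f"(cost {cost[best]:.0f}, coverage {coverage[best]:.2f})")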

Topics: Design
Commentary by Dr. Valentin Fuster
2014;():V02AT03A053. doi:10.1115/DETC2014-35259.

In engineering design, the volume and weight of systems consisting of valves and plumbing lines often need to be minimized. In current practice, this is done empirically, by trial and error, which is time-consuming and may not yield the optimal result. The problem is intrinsically difficult because of the challenge of formulating an optimization problem that is computationally tractable. In this research, we take a sequential approach to the design optimization: first optimizing the placement of valves under prescribed constraints to minimize the occupied volume, and then identifying the shortest paths of plumbing lines to connect the valves. In the first part, the constraints are described by analytical expressions, and two approaches to valve placement optimization are reported: a two-phase method and a simulated annealing-based method. In the second part, a three-dimensional routing algorithm is explored to connect the valves. Our case study indicates that the design can indeed be automated and that design optimization can be achieved at reasonable computational cost. The outcome of this research can benefit both existing manufacturing practice and future additive manufacturing.
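
For the placement phase, a simulated annealing loop over valve center coordinates is easy to sketch: minimize the bounding-box volume with a penalty for valve interference. The valve sizes, move schedule, and penalty weight below are illustrative; the paper's analytical constraints and routing phase are not reproduced.

    import numpy as np

    rng = np.random.default_rng(8)
    radii = np.array([4.0, 3.0, 3.0, 2.5, 2.0])      # valve envelopes (cm)

    def cost(centers):
        lo = (centers - radii[:, None]).min(axis=0)
        hi = (centers + radii[:, None]).max(axis=0)
        volume = np.prod(hi - lo)                    # bounding-box volume
        d = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        overlap = np.clip(radii[:, None] + radii[None, :] - d, 0.0, None)
        np.fill_diagonal(overlap, 0.0)               # ignore self-distance
        return volume + 1e4 * overlap.sum()          # penalize interference

    centers = rng.uniform(0.0, 20.0, size=(5, 3))
    T = 50.0
    for _ in range(20000):
        cand = centers.copy()
        cand[rng.integers(5)] += rng.normal(0.0, 0.5, size=3)
        dc = cost(cand) - cost(centers)
        if dc < 0 or rng.random() < np.exp(-dc / T):
            centers = cand
        T *= 0.9997                                  # geometric cooling
    print(f"bounding-box volume after annealing: {cost(centers):.0f} cm^3")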

Topics: Valves, Plumbing
Commentary by Dr. Valentin Fuster
