
ASME Conference Presenter Attendance Policy and Archival Proceedings

2018;():V02AT00A001. doi:10.1115/DETC2018-NS2A.

This online compilation of papers from the ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2018) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference by an author of the paper, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.


44th Design Automation Conference: Active System Design

2018;():V02AT03A001. doi:10.1115/DETC2018-85305.

This research presents a convergence analysis for an iterative framework for optimizing plant and controller parameters for active systems. The optimization strategy fuses expensive yet valuable experiments with less accurate yet cheaper simulations. The numerical model is improved at each iteration through a cumulative correction law, using an optimally designed set of experiments. The iterative framework reduces the feasible design space between iterations, ultimately yielding convergence to a small design space that contains the optimum. This paper presents the derivation of an asymptotic upper bound on the difference between the corrected numerical model and true system response. Furthermore, convergence of the numerical model to the true system response and convergence of the design space are demonstrated on an airborne wind energy (AWE) application.

2018;():V02AT03A002. doi:10.1115/DETC2018-85855.

Optimization of dynamic engineering systems generally requires problem formulations that account for the coupling between embodiment design and control system design simultaneously. Such formulations are commonly known as combined optimal design and control (co-design) problems, and their application to deterministic systems is well established in the literature through a variety of methods. However, an issue that has not been addressed in the co-design literature is the impact of the inherent uncertainties within a dynamic system on its integrated design solution. Accounting for these uncertainties transforms the standard, deterministic co-design problem into a stochastic one, thus requiring appropriate stochastic optimization approaches for its solution. This paper serves as the starting point for research on stochastic co-design problems by proposing and solving a novel problem formulation based on robust design optimization (RDO) principles. Specifically, a co-design method known as multidisciplinary dynamic system design optimization (MDSDO) is used as the basis for an RDO problem formulation and implementation. The robust objective and inequality constraints are computed, as usual, as functions of their first-order-approximated means and variances, whereas analysis-based equality constraints are evaluated deterministically at the means of the random decision variables. The proposed stochastic co-design problem formulation is then implemented for two case studies, with the results indicating a significant impact of the robust approach on the integrated design solutions and performance measures.
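
For readers unfamiliar with the robust formulation referred to above, the first-order (mean-value) approximation it mentions takes, for a generic response g of the random decision variables x with mean vector \mu_x and standard deviations \sigma_{x_i} (a standard construction, not reproduced from the paper):

\[
\mu_g \approx g(\boldsymbol{\mu}_x), \qquad
\sigma_g^2 \approx \sum_i \left(\left.\frac{\partial g}{\partial x_i}\right|_{\boldsymbol{\mu}_x}\right)^{\!2} \sigma_{x_i}^2 ,
\]

so that the robust co-design problem has the generic form

\[
\min_{\boldsymbol{\mu}_x}\; \mu_f + k\,\sigma_f
\quad \text{s.t.} \quad
\mu_{g_j} + k\,\sigma_{g_j} \le 0, \qquad h(\boldsymbol{\mu}_x) = 0 ,
\]

with the analysis-based equality constraints h evaluated deterministically at the means; the penalty factor k is illustrative.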

2018;():V02AT03A003. doi:10.1115/DETC2018-85935.

Conventional sequential methods are not guaranteed to yield optimal solutions for the design of physical systems and their corresponding control systems. By managing the interactions, however, combined physical and control system design (co-design) can produce superior results. Existing co-design methods are practical for moderate-scale systems, but they can be impractical or impossible to use for large-scale systems, which may limit our ability to find an optimal solution. This work addresses this issue by developing a novel decomposition-based version of a co-design algorithm to optimize such large-scale dynamic systems. The new formulation applies a decomposition-based optimization strategy known as Analytical Target Cascading (ATC) to a co-design method known as Multidisciplinary Dynamic System Design Optimization (MDSDO) for a large-scale dynamic system. In addition, a new consistency measure is established to manage time-dependent linking variables. Results substantiate the ability of the new formulation to identify the optimal dynamic system solution.
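
As a hedged illustration of a consistency measure for time-dependent linking variables (the paper's exact measure is not reproduced here), one common choice penalizes the discrepancy between a subproblem's copy of a linking trajectory, y_i(t), and its target from the parent problem, \hat{y}_i(t), over the co-design time horizon:

\[
c_i \;=\; \int_{t_0}^{t_f} \big\| y_i(t) - \hat{y}_i(t) \big\|_2^2 \, dt
\;\approx\; \sum_{k=0}^{N} w_k \,\big\| y_i(t_k) - \hat{y}_i(t_k) \big\|_2^2 ,
\]

where w_k are quadrature weights on the discretized time grid; ATC then drives each c_i toward zero through its penalty or augmented-Lagrangian terms.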

2018;():V02AT03A004. doi:10.1115/DETC2018-86098.

A methodology for the design and control of a variable-twist wind turbine blade is presented. The blade is modular, flexible, and additively manufactured (AM). AM capabilities have the potential to create a flexible blade with a low torsional-to-longitudinal-stiffness ratio, which enables new design and control capabilities that can be applied to the twist angle distribution. A variable twist distribution can increase aerodynamic efficiency during Region 2 operation. The suggested blade design includes a rigid spar and flexible AM segments that form the surrounding shells. The stiffness of each segment and the actuator placement define the twist distribution, and these values are used to find the optimum free shape for the blade. Given the optimum twist distributions, actuator placement, and free shape, the required amount of actuation can be determined. The proposed design process first determines the twist distribution that maximizes aerodynamic efficiency in Region 2. A mechanical design algorithm subsequently locates a series of actuators and defines the stiffness ratio between the blade segments. The free-shape twist distribution is selected in the next step; it is chosen to minimize the actuation energy required to shape the twist distribution as it changes with Region 2 wind speed. Wind profiles from 20 different sites, gathered over a three-year period, are used to determine the free shape. A control framework is then developed to set the twist distribution in relation to wind speed. A case study is performed to demonstrate the suggested procedure. The aerodynamic results show increases of up to 3.8% and 3.3% in efficiency at cut-in and rated speeds, respectively, and the cumulative energy produced over three years improved by up to 1.7%. The mechanical design suggests that the required twist distribution can be achieved with five actuators. Finally, the optimum free shape is selected based on the simulations for the studied sites.

2018;():V02AT03A005. doi:10.1115/DETC2018-86148.

High-performance cooling is often necessary for thermal management of high power density systems. Both human intuition and vast experience may not be adequate to identify optimal thermal management designs as systems increase in size and complexity. This paper presents a design framework supporting comprehensive exploration of a class of single phase fluid-based cooling architectures. The candidate cooling system architectures are represented using labeled rooted tree graphs. Dynamic models are automatically generated from these trees using a graph-based thermal modeling framework. Optimal performance is determined by solving an appropriate fluid flow control problem, handling temperature constraints in the presence of exogenous heat loads. Rigorous case studies are performed in simulation, with components having variable sets of heat loads and temperature constraints. Results include optimization of thermal endurance for an enumerated set of 4,051 architectures. In addition, cooling system architectures capable of steady-state operation under a given loading are identified.

2018;():V02AT03A006. doi:10.1115/DETC2018-86213.

Here we describe a problem class with combined architecture, plant, and control design for dynamic engineering systems. The design problem class is characterized by architectures comprised of linear physical elements and nested co-design optimization problems employing linear-quadratic dynamic optimization. The selected problem class leverages existing theory and tools and is particularly attractive due to the symbiosis between labeled graph representations of architectures, dynamic models constructed from linear physical elements, linear-quadratic dynamic optimization, and the nested co-design solution strategy. A vehicle suspension case study is investigated and a specifically constructed architecture, plant, and control design problem is described. The result was the automated generation and co-design problem evaluation of 4,374 unique suspension architectures. The results demonstrate that changes to the vehicle suspension architecture can result in improved performance, but at the cost of increased mechanical complexity. Furthermore, the case study highlights a number of challenges associated with finding solutions to the considered class of design problems.


44th Design Automation Conference: Artificial Intelligence and Computational Synthesis

2018;():V02AT03A007. doi:10.1115/DETC2018-85339.

Real-world designs usually consist of parts with hierarchical dependencies, i.e., the geometry of one component (a child shape) depends on another (a parent shape). We propose a method for synthesizing this type of design. It decomposes the problem of synthesizing the whole design into synthesizing each component separately while keeping the inter-component dependencies satisfied. This method constructs a two-level generative adversarial network to train two generative models, for parent and child shapes respectively. We then use the trained generative models to synthesize or explore parent and child shapes separately via a parent latent representation and infinite child latent representations, each conditioned on a parent shape. We evaluate and discuss the disentanglement and consistency of the latent representations obtained by this method. We show that shapes change consistently along any direction in the latent space. This property is desirable for design exploration over the latent space.

Topics: Design, Geometry, Shapes, Trains
2018;():V02AT03A008. doi:10.1115/DETC2018-85506.

Recent advances in deep learning enable machines to learn existing designs by themselves and to create new designs. Generative adversarial networks (GANs) are widely used to generate new images and data through unsupervised learning. However, applying GANs directly to product design has limitations: it requires a large amount of data, produces uneven output quality, and does not guarantee engineering performance. To solve these problems, this paper proposes a design automation process that combines GANs and topology optimization. The suggested process has been applied to the wheel design of automobiles and has shown that an aesthetically superior and technically meaningful design can be generated automatically without human intervention.

2018;():V02AT03A009. doi:10.1115/DETC2018-85529.

The presented research demonstrates the synthesis of two-dimensional kinematic mechanisms using feature-based reinforcement learning. As a running example, the classic challenge of designing a straight-line mechanism is adopted: a mechanism capable of tracing a straight line as part of its trajectory. This paper presents a basic framework, consisting of elements such as mechanism representations, kinematic simulations, and learning algorithms, as well as some of the resulting mechanisms and a comparison to prior art. A series of successful mechanisms has been synthesized for path generation of a straight line and a figure eight.

Topics: Kinematics
2018;():V02AT03A010. doi:10.1115/DETC2018-85648.

Early stages of the engineering design process are vital to shaping the final design; each subsequent step builds from the initial concept. Innovation-driven engineering problems require designers to focus heavily on early-stage design generation, with constant application and evaluation of design changes. Strategies to reduce the amount of time and effort designers spend in this phase could improve the efficiency of the design process as a whole. This paper seeks to create and demonstrate a two-tiered design grammar that encodes heuristic strategies to aid in the generation of early solution concepts. Specifically, this two-tiered grammar mimics the combination of heuristic-based strategic actions and parametric modifications employed by human designers. Rules in the higher-tier are abstract and potentially applicable to multiple design problems across a number of fields. These abstract rules are translated into a series of lower-tier rule applications in a spatial design grammar, which are inherently domain-specific. This grammar is implemented within the HSAT agent-based algorithm. Agents iteratively select actions from either the higher-tier or lower-tier. This algorithm is applied to the design of wave energy converters, devices which use the motion of ocean waves to generate electrical power. Comparisons are made between designs generated using only lower-tier rules and those generated using only higher-tier rules.

Topics: Design
2018;():V02AT03A011. doi:10.1115/DETC2018-85654.

In this article, an active learning strategy is introduced for reducing the evaluation cost associated with system architecture design problems and is demonstrated using a circuit synthesis problem. While established circuit synthesis methods, such as efficient enumeration strategies and genetic algorithms (GAs), are available, evaluation of candidate architectures often requires computationally expensive simulations, limiting the scale of solvable problems. Strategies are needed to explore architecture design spaces more efficiently, reducing the number of evaluations required to obtain good solutions. Active learning is a semi-supervised machine learning technique that constructs a predictive model. Here we use active learning to interactively query architecture data as a strategy to choose which candidate architectures to evaluate in a way that accelerates effective design search. Active learning is used to iteratively improve predictive model accuracy with strategically selected training samples. The predictive model used here is an ensemble method, known as random forest. Several query strategies are compared. A circuit synthesis problem is used to test the active learning strategy; two complete data sets for this case study are available, aiding analysis. While active learning has been used for structured outputs, such as sequence labeling tasks, the interface between active learning and engineering design, particularly circuit synthesis, has not been well studied. The results indicate that active learning is a promising strategy in reducing the evaluation cost for the circuit synthesis problem, and provide insight into possible next steps for this general solution approach.
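
A minimal sketch of an active-learning loop of the kind compared in this setting, assuming a pool of enumerated candidate architectures with precomputed feature vectors; the feature encoding, batch size, and the query rule shown (largest disagreement across the random forest's trees) are placeholders for the strategies the paper compares:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def active_learning_loop(X_pool, evaluate, n_init=20, n_rounds=10, batch=5, seed=0):
    """Iteratively pick candidate architectures whose predicted performance is
    most uncertain (largest spread across the forest's trees) and evaluate them."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    y = {i: evaluate(X_pool[i]) for i in labeled}          # expensive simulations

    for _ in range(n_rounds):
        model = RandomForestRegressor(n_estimators=200, random_state=seed)
        model.fit(X_pool[labeled], [y[i] for i in labeled])

        # Ensemble disagreement as an uncertainty proxy for each unlabeled candidate.
        unlabeled = [i for i in range(len(X_pool)) if i not in y]
        per_tree = np.stack([t.predict(X_pool[unlabeled]) for t in model.estimators_])
        uncertainty = per_tree.std(axis=0)

        # Query the most uncertain candidates and run the expensive evaluation.
        query = [unlabeled[j] for j in np.argsort(-uncertainty)[:batch]]
        for i in query:
            y[i] = evaluate(X_pool[i])
        labeled.extend(query)

    return model, y
```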

Topics: Circuits
2018;():V02AT03A012. doi:10.1115/DETC2018-85896.

Soft robots are intrinsically compliant, which makes them suitable for interaction with delicate objects and living beings. The vast design space and the complex dynamic behavior of the elastic body of the robots make designing them by hand challenging, often requiring a large number of iterations. It is thus advantageous to design soft robots using a computational design approach that integrates simulation feedback. Since locomotion is an essential component in many robotic tasks, this paper presents the computational design synthesis of soft, virtual, locomotion robots. Methods used in previous work give little insight into and control over the computational design synthesis process. The generated solutions are also highly irregular and very different to hand-designed solutions. Also, the problem requirements are solely modeled in the objective function. Here, designs are generated using a spatial grammar with a rule set that is deduced from known locomotion principles. Spatial grammars make it possible to define the type of morphologies that are generated. The aim is to generate gaits based on different locomotion principles, e.g., walking, hopping, and crawling. By combining a spatial grammar with simulated annealing, the solution space is searched for locomotive designs. The designs are simulated using a mass-spring model with stable self-collision so that all generated designs can be evaluated. The resulting virtual designs exhibit a large variety of expected and unexpected gaits. The grammar is analyzed to understand the generation process and assess the performance. The main contribution of this research is modeling some of the requirements in the spatial grammar rather than in the objective function. Thus, the process is guided towards a class of designs with extremities for locomotion, without having to define the class explicitly. Further, the simulation approach is new and results in a stable method that accounts for self-collision.

Topics: Robots, Design
2018;():V02AT03A013. doi:10.1115/DETC2018-86112.

Due to their volatile behavior, natural disasters are challenging problems, as they often cannot be accurately predicted. An efficient method to gather updated information on the status of a disaster, such as the location of any trapped survivors, is extremely important to properly conduct rescue operations. To accomplish this, an algorithm is presented to control a swarm of UAVs (Unmanned Aerial Vehicles) and optimize the value of the information gathered. For this application, the UAVs are autonomously navigated with a decentralized control method. With embedded sensor technology, this swarm collects information from the environment as it operates. By using the swarm's location history, areas of the environment that have gone the longest without exploration can be prioritized, ensuring a thorough search. Measures are also developed to prevent redundant or inefficient exploration, which would reduce the value of the gathered information. A case study of a flood scenario is examined and simulated. Through this approach, the value of the proposed swarm algorithm can be tested by tracking the number of survivors found as well as the rate at which they are discovered.
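
A minimal sketch of the staleness-based prioritization described above, assuming the environment is discretized into a grid with a recorded last-visit time per cell; the scoring rule and the travel_cost function are placeholders, not the paper's algorithm:

```python
import numpy as np

def next_waypoint(last_visit_time, current_time, uav_pos, travel_cost):
    """Pick the grid cell with the largest 'staleness' (time since last visit),
    discounted by the cost of traveling there from the UAV's current position."""
    staleness = current_time - last_visit_time      # 2D array over grid cells
    score = staleness - travel_cost(uav_pos)        # travel_cost returns a matching 2D array
    return np.unravel_index(np.argmax(score), score.shape)
```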

2018;():V02AT03A014. doi:10.1115/DETC2018-86161.

Robotic systems, working together as a team, are becoming valuable players in different real-world applications, from disaster response to warehouse fulfillment services. Centralized solutions for coordinating multi-robot teams often suffer from poor scalability and vulnerability to communication disruptions. This paper develops a decentralized multi-agent task allocation (Dec-MATA) algorithm for multi-robot applications. The task planning problem is posed as a maximum-weighted matching of a bipartite graph, the solution of which using the blossom algorithm allows each robot to autonomously identify the optimal sequence of tasks it should undertake. The graph weights are determined based on a soft clustering process, which also plays a problem decomposition role, seeking to reduce the complexity of the individual agents' task assignment problems. To evaluate the new Dec-MATA algorithm, a series of case studies of varying complexity are performed, with tasks being distributed randomly over an observable 2D environment. A centralized approach, based on a state-of-the-art MILP formulation of the multi-traveling salesman problem, is used for comparative analysis. While coming within 7–28% of the optimal cost obtained by the centralized algorithm, the Dec-MATA algorithm is found to be 1–3 orders of magnitude faster and, unlike the centralized algorithm, minimally sensitive to task-to-robot ratios.
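
A small sketch of the matching step described above, using networkx's blossom-based max_weight_matching as a stand-in for the paper's implementation; the edge-weight rule (inverse distance) is a placeholder for the soft-clustering-derived weights:

```python
import networkx as nx

def assign_tasks(robot_pos, task_pos):
    """Build a bipartite robot-task graph and return a maximum-weighted matching."""
    G = nx.Graph()
    for r, rp in enumerate(robot_pos):
        for t, tp in enumerate(task_pos):
            dist = ((rp[0] - tp[0]) ** 2 + (rp[1] - tp[1]) ** 2) ** 0.5
            # Placeholder weight: nearer tasks are more attractive.
            G.add_edge(("robot", r), ("task", t), weight=1.0 / (1.0 + dist))
    # Blossom-based maximum-weighted matching (each robot gets at most one task per round).
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return {a: b for a, b in matching}

# Example: two robots, three tasks scattered on a 2D plane.
print(assign_tasks([(0, 0), (5, 5)], [(1, 0), (4, 6), (9, 9)]))
```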

2018;():V02AT03A015. doi:10.1115/DETC2018-86333.

In this paper, we present a method that uses a physics-based virtual environment to evaluate the feasibility of designs generated by neural networks. Deep learning models rely on large data sets for training. These training data sets are typically validated by human designers who have a conceptual understanding of the problem being solved. However, the requirement for human-validated training data severely constrains the size and availability of training data for computer-generated models due to the manual process of creating or labeling such data sets. Furthermore, there may be misclassification errors that result from human labeling. To mitigate these challenges, we present a physics-based simulation environment that helps users discover correlations between the form of a generated design and the physical constraints that relate to its function. We hypothesize that training data that includes machine-validated designs from a physics-based virtual environment will increase the probability of generative models creating functionally feasible design concepts. A case study involving a generative model trained on over 70,000 human 2D boat sketches is used to test this hypothesis. Knowledge gained from testing the hypothesis will provide human designers with insights into the importance of training data in the resulting design solutions generated by deep neural networks.


44th Design Automation Conference: Data-Driven Design

2018;():V02AT03A016. doi:10.1115/DETC2018-85310.

An important task in structural design is to quantify the structural performance of an object under the external forces it may experience during its use. The problem proves to be computationally very challenging, as the external forces' contact locations and magnitudes may exhibit significant variations. We present an efficient analysis approach to determine the most critical force contact location in such problems with force location uncertainty. Given an input 3D model and regions on its boundary where arbitrary normal forces may make contact, our algorithm predicts the worst-case force configuration responsible for creating the highest stress within the object. Our approach uses a computationally tractable experimental design method to select a number of sample force locations based on geometry only, without inspecting the stress response, which would require computationally expensive finite-element analysis. Then, we construct a simple regression model on these samples and the corresponding maximum stresses. Combined with a simple ranking-based post-processing step, our method provides a practical solution to the worst-case structural analysis problem. The results indicate that our approach achieves significant improvements over existing work and brute-force approaches. We demonstrate that further speedup can be obtained when a small error tolerance in the maximum stress is allowed.
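
A schematic of the sample-regress-rank flow described above, with the geometry-only features, the surrogate model, and the expensive finite-element call all left as placeholders; the paper's experimental design and ranking steps are more involved:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def worst_case_force_location(candidates, geom_features, run_fea, n_train=12, top_k=3):
    """candidates: allowed contact locations on the boundary.
    geom_features(i): cheap, geometry-only descriptors of location i.
    run_fea(i): expensive FE analysis returning the maximum stress for a force at i."""
    # 1) Pick a small, geometry-based training set (placeholder: evenly spaced samples).
    train = candidates[:: max(1, len(candidates) // n_train)]
    X = np.array([geom_features(i) for i in train])
    y = np.array([run_fea(i) for i in train])

    # 2) Fit a simple regression surrogate of max stress vs. geometric features.
    surrogate = GaussianProcessRegressor().fit(X, y)

    # 3) Rank all candidate locations by predicted stress, then verify the top few with FE.
    preds = surrogate.predict(np.array([geom_features(i) for i in candidates]))
    shortlist = [candidates[j] for j in np.argsort(-preds)[:top_k]]
    stresses = {i: run_fea(i) for i in shortlist}
    return max(stresses, key=stresses.get), stresses
```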

2018;():V02AT03A017. doi:10.1115/DETC2018-85591.

This paper presents an explorative computational methodology to aid the analogical retrieval process in design-by-analogy practice. The computational methodology, driven by Non-negative Matrix Factorization (NMF), iteratively builds hierarchical repositories of design solutions within which clusters of design analogies can be explored by designers. In this work, the methodology has been applied to a large repository of mechanical-design-related patents, processed to contain only component-, behavior-, or material-based content, to demonstrate that unique and valuable attribute-based analogical inspiration can be discovered from different representations of patent data. For explorative purposes, the hierarchical repositories have been visualized with a three-dimensional hierarchical structure and a two-dimensional bar-graph structure, which can be used interchangeably for retrieving analogies. This paper demonstrates that the explorative computational methodology provides designers with enhanced control over design repositories, empowering them to retrieve analogical inspiration for design-by-analogy practice.
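
A minimal sketch of one NMF clustering pass of the kind described, assuming the patents have already been reduced to text fields; the vectorizer settings, cluster count, and recursion rule are placeholders rather than the paper's settings:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

def nmf_cluster(patent_texts, n_clusters=8):
    """Return a cluster id per patent plus the top terms describing each cluster."""
    vec = TfidfVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(patent_texts)                  # documents x terms
    model = NMF(n_components=n_clusters, init="nndsvd", random_state=0)
    W = model.fit_transform(X)                           # documents x clusters
    H = model.components_                                # clusters x terms
    labels = W.argmax(axis=1)
    terms = np.array(vec.get_feature_names_out())
    top_terms = [terms[np.argsort(-H[k])[:10]].tolist() for k in range(n_clusters)]
    return labels, top_terms

# Recursing nmf_cluster on each cluster's documents yields a hierarchical repository.
```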

Topics: Databases, Patents
2018;():V02AT03A018. doi:10.1115/DETC2018-85599.

Planning and strategizing are essential parts of the design process and are based on the designer's skill. Further, planning is an abstract skill that can be transferred between similar problems. However, planning and strategy transfer within design have not been effectively modeled within computational agents. This paper presents an approach to represent this strategizing behavior using a probabilistic model. The model is employed to select the operations that computational agents should perform while solving configuration design tasks. This work also demonstrates that the probabilistic model can be used to transfer strategies from human data to computational agents in a way that is general and useful. The study shows a successful transfer of design strategy from humans to computational agents, opening up the possibility of deriving high-performing behavior from designers and using it to guide computational design agents. Finally, a quintessential behavior of transfer learning is illustrated by agents transferring design strategies across different problems, which improves agent performance significantly. The work presented in this study leverages a computational framework built by embedding cognitive characteristics into agents, which has been shown to mimic human problem-solving in configuration design problems.

Topics: Design
2018;():V02AT03A019. doi:10.1115/DETC2018-86084.

Effective short-term load forecasting (STLF) plays an important role in demand-side management and power system operations. In this paper, STLF models with three aggregation strategies are developed: information aggregation (IA), model aggregation (MA), and hierarchy aggregation (HA). The IA, MA, and HA strategies aggregate inputs, models, and forecasts, respectively, at different stages of the forecasting process. To verify the effectiveness of the three aggregation strategies, a set of 10 models based on 4 machine learning algorithms, i.e., artificial neural network, support vector machine, gradient boosting machine, and random forest, are developed in each aggregation group to predict 1-hour-ahead load. Case studies based on two years of university campus data with 13 individual buildings showed that: (a) STLF with the three aggregation strategies improves forecasting accuracy compared with benchmarks without aggregation; (b) STLF-IA consistently outperforms STLF based on weather data and STLF based on individual load data; (c) MA reduces the occurrence of unsatisfactory single-algorithm STLF models, thereby enhancing STLF robustness; and (d) STLF-HA produces the most accurate forecasts in distinctive load pattern scenarios due to calendar effects.
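
A minimal sketch of the model-aggregation (MA) idea, combining 1-hour-ahead forecasts from the four algorithm families named above; the equal-weight averaging, hyperparameters, and feature set are illustrative only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def aggregate_forecast(X_train, y_train, X_next):
    """Fit one model per algorithm and average their 1-hour-ahead load forecasts."""
    models = [
        MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
        SVR(),
        GradientBoostingRegressor(random_state=0),
        RandomForestRegressor(random_state=0),
    ]
    preds = [m.fit(X_train, y_train).predict(X_next) for m in models]
    return np.mean(preds, axis=0)   # simple equal-weight model aggregation
```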

2018;():V02AT03A020. doi:10.1115/DETC2018-86163.

Bias correction is important in model calibration to obtain unbiased calibration parameter estimates and make accurate predictions. However, calibration often relies on insufficient samples, so bias correction mostly depends on extrapolation. For example, bias correction with twelve samples in a nine-dimensional box generated by Latin Hypercube Sampling (LHS) has an interpolation domain covering less than 0.1% of the box. Since bias correction is coupled with calibration parameter estimation, calibration with extrapolative bias correction can lead to a large error in the calibrated parameters. This paper proposes the idea of calibration with minimum-bumpiness correction. The bumpiness of the bias correction is a good measure for assessing the potential risk of a large error in the correction. By minimizing bumpiness, the risk of extrapolation can be reduced while accurate parameter estimates can still be achieved. It was found that this calibration method gave more accurate results than Bayesian calibration for an analytical example. It was also found that there are common denominators between the proposed method and Bayesian calibration with bias correction.

Topics: Calibration

44th Design Automation Conference: Decision Making in Engineering Design

2018;():V02AT03A021. doi:10.1115/DETC2018-85460.

Designers make process-level decisions to (i) select designs for performance evaluation, (ii) select information source, and (iii) decide whether to stop design exploration. These decisions are influenced by problem-related factors, such as costs and uncertainty in information sources, and budget constraints for design evaluations. The objective of this paper is to analyze individuals’ strategies for making process-level decisions under the availability of noisy information sources of different cost and uncertainty, and limited budget. Our approach involves a) conducting a behavioral experiment with an engineering optimization task to collect data on subjects’ decision strategies, b) eliciting their decision strategies using a survey, and c) performing a descriptive analysis to compare elicited strategies and observations from the data. We observe that subjects use specific criteria such as fixed values of attributes, highest prediction of performance, highest uncertainty in performance, and attribute thresholds when making decisions of interest. When subjects have higher budget, they are less likely to evaluate points having highest prediction of performance, and more likely to evaluate points having highest uncertainty in performance. Further, subjects conduct expensive evaluations even when their decisions have not sufficiently converged to the region of maximum performance in the design space and improvements from additional cheap evaluations are large. The implications of the results in identifying deviations from optimal strategies and structuring decisions for further model development are discussed.

2018;():V02AT03A022. doi:10.1115/DETC2018-85536.

The development of complex product dynamic simulation models and the integration of design automation systems require knowledge from multiple heterogeneous data sources and tools. Because of the heterogeneity of model data, the integration of tools and data is a time-consuming and error-prone task. The main objective of this study is to provide a unified model of dynamic simulation for engineering design, which serves as a knowledge base to support the development of a dynamic simulation model. The integration of knowledge is realized through (i) definition of the structure and interface during the design phase of the dynamic simulation model, and (ii) definition of a model-driven integrated environment configuration process during the runtime phase. In order to achieve interoperability among the different simulation models in a collaborative design environment, we build a "Demand-Resources-Service-Knowledge-Process (DKRSP)" ontology that formally represents the semantics of dynamic simulation models. Based on the ontology, a knowledge base is created for the management of dynamic simulation knowledge. The efficacy of the ontology and the knowledge base is demonstrated using a transmission design example.

2018;():V02AT03A023. doi:10.1115/DETC2018-86156.

Though little research has been done on over-design as a product development strategy, an over-design approach can help products avoid premature obsolescence. This paper compares over-design to redesign as approaches to address the emergence of future requirements. Net present value (NPV) analyses of several real-world applications are examined from the perspectives of manufacturers and customers. This analysis is used to determine the conditions under which an over-design approach provides a greater benefit than a redesign approach. Over-design is found to have a higher net present value than redesign when future requirements occur soon after the initial release, discount rates are low, initial research and development cost or price is high, and the incremental costs of the future requirements are low.
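
As a small worked illustration of the comparison (the cash flows below are invented for illustration and are not the paper's case-study numbers), the standard NPV discounting can be written as:

```python
def npv(cash_flows, rate):
    """Standard net present value: sum of CF_t / (1 + r)^t for t = 0, 1, 2, ..."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

rate = 0.05
# Hypothetical numbers: over-design pays more up front; redesign pays again in year 3
# when the new requirement emerges.
overdesign = [-120, 40, 40, 40, 40]
redesign   = [-100, 40, 40, -30 + 40, 40]
print(npv(overdesign, rate), npv(redesign, rate))
```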

Topics: Design

44th Design Automation Conference: Design and Optimization of Sustainable Energy Systems

2018;():V02AT03A024. doi:10.1115/DETC2018-85127.

In this study, we characterize machine learning regression techniques for their ability to predict storm-related transmission outages based on local weather and transmission outage data. To test the machine learning regression techniques, we use data from the central Oregon Coast, which is particularly vulnerable to storm-related transmission outages, as a case study. We test multiple regression methods (linear and polynomial models of varying degree) as well as support vector regression methods using linear, polynomial, and radial-basis-function kernels. Results indicate relatively poor prediction capability by these methods, but this is attributed to the lack of outage data (characteristic of low-probability, high-risk events) and a cluster of data points representing momentary outages. More long-term outage data could lead to better characterization of the models, enabling others to quantify the frequency of storm-related transmission outages based on local weather data. Only by understanding the frequency of these occurrences can a cost-benefit analysis for potential transmission upgrades or generation sources be completed.

2018;():V02AT03A025. doi:10.1115/DETC2018-85256.

This paper presents a simulation-based analysis of a model of a mid-sized vehicle while exploring powertrains of interest. In addition to a baseline conventional vehicle (CV), the explored powertrain architectures include the hybrid electric vehicle (HEV), plug-in hybrid electric vehicle (PHEV), and battery-only electric vehicle (BEV). The modeling also considers several different all-electric driving ranges (AER) for the PHEVs and BEVs. The fuel economy/energy-efficiency assessment is conducted with the open-source software FASTSim by analyzing a large set of real-world driving trips from the California Household Travel Survey (CHTS-2013), which contains a record of more than 65 thousand trips with one-second-interval recording of the vehicle speed. Gas and/or electric energy usage from the analyzed trips is then used to generate greenhouse gas (GHG) statistical distributions (in units of gm-CO2/mile) for a modeled vehicle powertrain. Gas and/or electric energy usage is also utilized in the calculation of the running cost, and ultimately the net average cost (in units of $/mile), for the modeled powertrains. A Pareto trade-off analysis (cost vs. GHG) is then conducted for four sub-population segments of CHTS vehicle samples in a baseline scenario as well as four future-looking scenarios in which the carbon intensity of electric power generation gets lower, gas gets more expensive, and batteries get less expensive. While noting limitations of the conducted analysis, key findings suggest that: i) a mix of PHEVs and BEVs with various AER properly matched to driver needs would be better than one single powertrain design for all drivers, and ii) electrified powertrains do not become cost-competitive in their own right (without incentives or subsidies) until some of the future battery technology goals are attained.

2018;():V02AT03A026. doi:10.1115/DETC2018-85683.

Recent years have witnessed a tremendous growth of interest in multi-robot systems, which can execute more complex tasks than a single robot. To improve the operational life of multi-robot systems and address the challenges of long-duration missions, solar-powered multi-robot systems have been demonstrated to be an effective solution. To ensure efficient operation of a solar-powered multi-robot system, we propose a multi-criteria mixed integer programming model for multi-robot mission planning that minimizes three objectives: traveling distance, traveling time, and net energy consumption. Our proposed model is an extension of the multiple vehicle routing problem considering time windows, flexible speed, and energy sharing, where a set of flexible speeds is used to explore the influence of robot velocity on energy consumption and solar energy harvesting. Three sets of case studies are designed to investigate the tradeoffs among the three objectives. The results demonstrate that heterogeneous multi-robot systems 1) can utilize solar energy more efficiently and 2) require a multi-criteria model to balance the three objectives.

Topics: Robots, Solar power
2018;():V02AT03A027. doi:10.1115/DETC2018-86031.

Electricity generation is a major source of air pollution, contributing to nearly one-third of the total greenhouse gas emissions in the United States. As with most goods, production must keep up with the projected consumer demand, and the industry is subject to government regulations at the federal, state, and local levels. This study models the New Jersey electric grid as a market system, using agent-based modeling to represent individual consumers and power companies making utility-maximizing decisions. Each consumer agent is prescribed a unique value function that includes factors such as income, energy intensity, and environmental sensitivity, and they are able to make decisions about how much energy they use and whether they opt into a renewable energy program. Power producers are modeled to keep up with demand and minimize their cost per unit of electricity produced, and they include options to prefer either on-demand or renewable energy sources. Using this model, different scenarios are examined with respect to producer strategy and government policy. The results provide a proof-of-concept for the modeling approach, and they reveal interesting trends about how the markets are expected to react under different scenarios.

2018;():V02AT03A028. doi:10.1115/DETC2018-86094.

This work leverages the current state of the art in reinforcement learning for continuous control, the Deep Deterministic Policy Gradient (DDPG) algorithm, towards the optimal 24-hour dispatch of shared energy assets within building clusters. The modeled DDPG agent interacts with a battery environment designed to emulate a shared battery system. The aim here is not only to learn an efficient charge/discharge policy, but also to address the continuous-domain question of how much energy should be charged or discharged. Experimentally, we examine the impact of the learned dispatch strategy on minimizing demand peaks within the building cluster. Our results show that, across the variety of building cluster combinations studied, the algorithm is able to learn and exploit energy arbitrage, tailoring it into battery dispatch strategies for peak demand shifting.
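
As a minimal, hedged sketch of the kind of environment such an agent interacts with (the actual building-cluster simulator, reward shaping, and DDPG implementation are not reproduced here), a toy hourly battery environment with a peak-penalizing reward might look like:

```python
import numpy as np

class SharedBatteryEnv:
    """Toy shared-battery environment for peak-shaving dispatch.
    State: (hour of day, state of charge, cluster demand); action: signed power in kW
    (positive = charge, negative = discharge). All parameters are illustrative."""

    def __init__(self, demand_profile, capacity_kwh=500.0, max_power_kw=100.0):
        self.demand = np.asarray(demand_profile, dtype=float)   # 24 hourly values
        self.capacity, self.max_power = capacity_kwh, max_power_kw
        self.reset()

    def reset(self):
        self.hour, self.soc = 0, 0.5 * self.capacity
        return np.array([self.hour, self.soc, self.demand[0]])

    def step(self, action_kw):
        power = float(np.clip(action_kw, -self.max_power, self.max_power))
        # Keep the state of charge within the battery's physical limits
        # (hourly steps, so kW and kWh coincide numerically here).
        power = float(np.clip(power, -self.soc, self.capacity - self.soc))
        self.soc += power
        net_load = self.demand[self.hour] + power      # charging adds to grid demand
        reward = -net_load ** 2                        # quadratic penalty flattens peaks
        self.hour += 1
        done = self.hour >= len(self.demand)
        obs = np.array([self.hour % 24, self.soc, self.demand[self.hour % 24]])
        return obs, reward, done, {}
```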


44th Design Automation Conference: Design for Additive Manufacturing

2018;():V02AT03A029. doi:10.1115/DETC2018-85270.

Advances in additive manufacturing processes have made it possible to build mechanical metamaterials with bulk properties that exceed those of naturally occurring materials. One class of these metamaterials is structural lattices that can achieve high stiffness to weight ratios. Recent work on geometric projection approaches has introduced the possibility of optimizing these architected lattice designs in a drastically reduced parameter space. The reduced number of design variables enables application of a new class of methods for exploring the design space. This work investigates the use of Bayesian optimization, a technique for global optimization of expensive non-convex objective functions through surrogate modeling. We utilize formulations for implementing probabilistic constraints in Bayesian optimization to aid convergence in this highly constrained engineering problem, and demonstrate results with a variety of stiff lightweight lattice designs.
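
A compact sketch of Bayesian optimization with a probabilistic constraint in the spirit described above: expected improvement on the objective is weighted by the Gaussian-process-estimated probability that a constraint is satisfied. The kernels, constraint form, and candidate set are placeholder choices, not the geometric-projection lattice formulation itself.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def next_design(X, y_obj, y_con, candidates):
    """Pick the candidate maximizing EI(objective) * P(constraint <= 0)."""
    gp_obj = GaussianProcessRegressor(normalize_y=True).fit(X, y_obj)
    gp_con = GaussianProcessRegressor(normalize_y=True).fit(X, y_con)

    mu, sd = gp_obj.predict(candidates, return_std=True)
    best = np.min(y_obj)                                # minimization
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement

    mu_c, sd_c = gp_con.predict(candidates, return_std=True)
    p_feasible = norm.cdf((0.0 - mu_c) / np.maximum(sd_c, 1e-12))

    return candidates[np.argmax(ei * p_feasible)]
```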

2018;():V02AT03A030. doi:10.1115/DETC2018-85272.

One of the challenges in designing for additive manufacturing (AM) is accounting for the differences between as-designed and as-built geometries and material properties. From a designer’s perspective, these differences can lead to degradation of part performance, which is especially difficult to accommodate in small-lot or one-of-a-kind production. In this context, each part is unique, and therefore, extensive iteration is costly. Designers need a means of exploring the design space while simultaneously considering the reliability of additively manufacturing particular candidate designs. In this work, a design exploration approach, based on Bayesian network classifiers (BNC), is extended to incorporate manufacturability explicitly into the design exploration process.

The example application is the design of negative stiffness (NS) metamaterials, in which small volume fractions of negative stiffness (NS) inclusions are embedded within a host material. The resulting metamaterial or composite exhibits macroscopic mechanical stiffness and loss properties that exceed those of the base matrix material. The inclusions are fabricated with microstereolithography with features on the scale of tens of microns, but variability is observed in material properties and dimensions from specimen to specimen.

In this work, the manufacturing variability of critical features of a NS inclusion fabricated via microstereolithography are characterized experimentally and modelled mathematically. Specifically, the variation in the geometry of the NS inclusions and the Young’s modulus of the photopolymer are measured and modeled by both nonparametric and parametric joint probability distributions. Finally, the quantified manufacturing variability is incorporated into the BNC approach as a manufacturability classifier to identify candidate designs that achieve performance targets reliably, even when manufacturing variability is taken into account.

Topics: Design
2018;():V02AT03A031. doi:10.1115/DETC2018-85391.

The topology optimization (TO) of structures to be produced using additive manufacturing (AM) is explored using a data-driven constraint function that predicts the minimum producible size of small features in different shapes and orientations. This shape- and orientation-dependent manufacturing constraint, derived from experimental data, is implemented within a TO framework using a modified version of the Moving Morphable Components (MMC) approach. Because the analytic constraint function is fully differentiable, gradient-based optimization can be used. The MMC approach is extended in this work to include a “bootstrapping” step, which provides initial component layouts to the MMC algorithm based on intermediate Solid Isotropic Material with Penalization (SIMP) topology optimization results. This “bootstrapping” approach improves convergence compared to reference MMC implementations. Results from two compliance design optimization example problems demonstrate the successful integration of the manufacturability constraint in the MMC approach, and the optimal designs produced show minor changes in topology and shape compared to designs produced using fixed-radius filters in the traditional SIMP approach. The use of this data-driven manufacturability constraint makes it possible to take better advantage of the achievable complexity in additive manufacturing processes, while resulting in typical penalties to the design objective function of around only 2% when compared to the unconstrained case.

2018;():V02AT03A032. doi:10.1115/DETC2018-85618.

Functionally graded materials (FGMs) are heterogeneous materials engineered to vary material composition across the volume of an object. Controlled mixture and deposition of each material through a manufactured part can ultimately allow for specific material properties defined in different regions of a structure. While such structures are traditionally difficult to manufacture, additive manufacturing processes, such as directed energy deposition, material jetting, and material extrusion, have recently increased the manufacturability of FGMs. However, the existing digital design workflow lacks the ability to accurately mix and assign multiple materials to a given volume, especially in the case of toolpath dependent deposition processes like filament-based material extrusion. In this paper, we will address this limitation by using a voxel-based representation approach, where material values are assigned across a pixel grid on each geometry slice before converting to toolpath information for manufacturing. This approach allows for creation of structures with increased material complexity decoupled from the external geometry of the design space, an approach not yet demonstrated in the existing literature. By using a dual-feed, single melt-pool extrusion nozzle system, this research demonstrates the ability to accurately recreate mathematically derived gradients while establishing a digital workflow capable of integrating with the material extrusion AM process.

2018;():V02AT03A033. doi:10.1115/DETC2018-85639.

The efficient production planning of additively manufactured (AM) parts is a key point for industry-scale adoption of AM. This study develops an AM-based production plan for the case of manufacturing a significant number of parts with different shapes and sizes on multiple machines, with the ultimate purpose of reducing the cycle time. The proposed AM-based production planning includes three main steps: (1) determination of build orientation; (2) 2D packing of parts within the limited workspace of AM machines; and (3) scheduling parts on multiple AM machines. For deciding the build orientation, two main policies are considered: (1) a laying policy, which focuses on reducing the height of parts; and (2) a standing policy, which aims at minimizing the projection area on the tray to reduce the number of jobs. A heuristic algorithm is suggested to solve the 2D packing and scheduling problems. A numerical example is conducted to identify which policy is preferable in terms of cycle time. As a result, the standing policy becomes preferable as the number of parts increases; in the case of testing 3,000 parts, the cycle time of the standing policy is about 6% shorter than that of the laying policy.

2018;():V02AT03A034. doi:10.1115/DETC2018-85645.

Prosthetic limbs and assistive devices require customization to effectively meet the needs of users. Despite the expense and hassle involved in procuring a prosthetic, 56% of people with limb loss end up abandoning their devices [1]. Acceptance of these devices is contingent on the comfort of the user, which depends heavily on the size, weight, and overall aesthetic of the device. As seen in numerous applications, parametric modeling can be utilized to produce medical devices that are specific to the patient’s needs. However, current 3D printed upper limb prosthetics use uniform scaling to fit the prostheses to different users.

In this paper, we propose a parametric modeling method for designing prosthetic fingers. We show that a prosthetic finger designed using parametric modeling has a range of motion (ROM), i.e., the path of the fingertip, that closely aligns with the digit's natural path. We also show that the ROM produced by a uniformly scaled prosthetic poorly matches the natural ROM of the finger. To test this, finger width and length measurements were collected from 50 adults between the ages of 18 and 30. It was determined that there is negligible correlation between the length and width of the index (D2) digit among the participants.

Using both the highest and the lowest length-to-width ratios found among the participants, a prosthetic finger was designed using a parametric model and fabricated using additive manufacturing. The mechanical design of the prosthetic finger utilized a crossed four-bar linkage mechanism, and its ROM was determined using Freudenstein's equations. By simulating the different paths of the fingers, we demonstrate that parametrically modeled fingers outperform uniformly scaled fingers at matching a natural digit's path.
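
For reference, the standard form of Freudenstein's equation for a planar four-bar linkage with ground link d, input link a, coupler b, and output link c, relating the input angle \theta_2 to the output angle \theta_4 (the paper's specific link parameterization is not reproduced here):

\[
K_1 \cos\theta_4 \;-\; K_2 \cos\theta_2 \;+\; K_3 \;=\; \cos(\theta_2 - \theta_4),
\qquad
K_1 = \frac{d}{a}, \quad K_2 = \frac{d}{c}, \quad K_3 = \frac{a^2 - b^2 + c^2 + d^2}{2ac} .
\]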

2018;():V02AT03A035. doi:10.1115/DETC2018-85819.

With growing interest in metal additive manufacturing, one area of interest for design for additive manufacturing is the ability to understand how part geometry, combined with the manufacturing process, will affect part performance. In addition, many researchers are pursuing design for additive manufacturing with the goal of generating designs for stiff and lightweight applications as opposed to tailored compliance. A compliant mechanism has unique advantages over traditional mechanisms, but complex 3D compliant mechanisms have previously been limited by manufacturability. Recent advances in additive manufacturing enable fabrication of more complex, 3D metal compliant mechanisms, an area of research that is relatively unexplored. In this paper, a design for additive manufacturing workflow is proposed that incorporates feedback to a designer on both structural performance and manufacturability. Specifically, a cellular contact-aided compliant mechanism for energy absorption is used as a test problem. Insights gained from finite element simulations of the energy absorbed, as well as the thermal history from an AM build simulation, are used to further refine the design. Using the proposed workflow, several trends on the performance and manufacturability of the test problem are determined and used to redesign the compliant unit cell. When compared to a preliminary unit cell design, the redesigned unit cell showed a decrease in energy absorption capacity of only 7.8% while reducing thermal distortion by 20%. The workflow presented provides a systematic approach to inform a designer about methods to redesign an AM part.

2018;():V02AT03A036. doi:10.1115/DETC2018-85848.

Design for additive manufacturing (DFAM) provides design freedom for creating complex geometries and guides designers to ensure manufacturability of parts fabricated using additive manufacturing (AM) processes. However, there is a lack of formalized DFAM knowledge that provides information on how to design parts and how to plan AM processes to achieve target goals, e.g., reducing build time. Therefore, this study presents a DFAM ontology using the Web Ontology Language (OWL) to formalize DFAM knowledge and support queries for retrieving that knowledge. The DFAM ontology has three high-level classes to represent design rules: feature, parameter, and AM capability. Furthermore, the manufacturing feature concept is defined to link part design to AM process parameters. Since manufacturing features contain information on feature constraints of AM processes, the DFAM ontology supports manufacturability analysis of design features by reasoning with the Semantic Query-enhanced Web Rule Language (SQWRL). The SQWRL rules in this study also help retrieve design recommendations for improving manufacturability. A case study is performed to illustrate the usefulness of the DFAM ontology and SQWRL rule application. This study contributes to developing a knowledge base that is reusable and upgradable and to manufacturability analysis that provides feedback about part designs to designers.

2018;():V02AT03A037. doi:10.1115/DETC2018-85850.

Total hip arthroplasty (THA) is an increasingly common procedure that replaces all or part of the hip joint. The average age of patients is decreasing, which in turn increases the need for more durable implants. Revisions in hip implants are frequently caused by three primary issues: femoral loading, poor fixation, and stress shielding. First, as the age of hip implant patients decreases, the hip implants are seeing increased loading, beyond what they were traditionally designed for. Second, traditional implants may have roughened surfaces but are not fully porous which would allow bone to grow in and through the implant. Third, traditional implants are too stiff, causing more load to be carried by the implant and shielding the bone from stress. Ultimately this stress shielding leads to bone resorption and implant loosening.

Additive manufacturing (AM) presents a unique opportunity for enhanced performance by allowing for personalized medicine and increased functionality through geometrically complex parts. Much research has been devoted to how AM can be used to improve surgical implants through lattice structures. To date, the authors have found no studies that have performed a complete 3D lattice structure optimization in patient specific anatomy. This paper discusses the general design of an AM hip implant that is personalized for patient specific anatomy and proposes a workflow for optimizing a lattice structure within the implant.

Using this design workflow, several lattice-structured AM hip implants of various unit cell types are optimized. A solid hip implant is compared against the optimized hip implants. It appears the AM hip implant with a tetra lattice outperforms the other implants by reducing stiffness and allowing for greater bone ingrowth. Ultimately, it was found that AM software still has many limitations when attempting complex optimizations with multiple materials in patient-specific anatomy. Though software limitations prevented a full 3D optimization in patient-specific anatomy, the challenges associated with such an approach and the limitations of the current software are discussed.

2018;():V02AT03A038. doi:10.1115/DETC2018-85908.

Additive manufacturing (AM) has unique capabilities when compared to traditional manufacturing, such as shape, hierarchical, functional, and material complexity, a fact that has fascinated those in research, industry, and the media for the last decade. Consequently, designers would like to know how they can incorporate AM's special capabilities into their designs, but are often at a loss as to how to do so. Design for Additive Manufacturing (DfAM) methods are currently in development, but the vast majority of existing methods are not tailored to the needs and knowledge of designers in the early stages of the design process. The authors have previously derived 29 design heuristics for AM. In this paper, the efficacy of these heuristics is tested in the context of a redesign scenario with novice designers. The preliminary results show that the heuristics positively influence the designs generated by the novice designers. Analysis of the participants' use of specific heuristics, as well as future research to validate the impact of the design heuristics for additive manufacturing with expert designers and in original design scenarios, is planned.

2018;():V02AT03A039. doi:10.1115/DETC2018-85921.

Due to the limitations of currently available artificial spinal discs stemming from poor anatomical fit and unnatural motion, patient-specific elastomeric artificial spinal discs are conceived as a promising solution to improve clinical results. Multi-material additive manufacturing (AM) has the potential to facilitate the production of an elastomeric composite artificial disc with complex personalized geometry and controlled material distribution. Motivated by the potential combined advantages of personalized artificial spinal discs and multi-material AM, a biomimetic multi-material elastomeric artificial disc design with several matrix sections and a crisscross fiber network is proposed in this study. To determine the optimized material distribution of each component for natural motion restoration, a computational method is proposed. The method consists of automatic generation of a patient-specific disc finite element (FE) model followed by material property optimization. Biologically inspired heuristics are incorporated into the optimization process to reduce the number of design variables and facilitate convergence. The general applicability of the method is verified by designing both lumbar and cervical artificial discs with varying geometries, natural rotational motion ranges, and rotational stiffness requirements. The results show that the proposed method is capable of producing a patient-specific artificial spinal disc design with customized geometry and optimized material distribution to achieve natural spinal rotational motions. Future work will focus on extending the method to include implant strength and shock absorption behavior in the optimization, as well as on identifying a suitable AM process for manufacturing.

Commentary by Dr. Valentin Fuster
2018;():V02AT03A040. doi:10.1115/DETC2018-85928.

Porous materials and structures have wide applications in industry, since the sizes, shapes, and positions of their pores can be adjusted to meet various demands. However, the precise control and performance-oriented design of porous structures remain urgent and challenging problems, especially now that the underlying manufacturing technology has matured through 3D printing. In this study, the control and design of anisotropic porous structures are studied; such structures offer more degrees of freedom than isotropic ones and can achieve more complex mechanical goals. The proposed approach introduces the Superformula to represent the structural cells, maps the design problem to an optimization problem using PGD, and solves that problem using the Method of Moving Asymptotes (MMA) to obtain a structure with the desired performance. The approach is also evaluated with respect to the expansion of the design space, the capture of physical orientation, and related aspects.
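
As a point of reference, the boundary of a 2D cell described by the (Gielis) Superformula can be sampled with a few lines of Python; the parameter values below are purely illustrative and are not taken from the paper.

    import numpy as np

    def superformula(phi, m, n1, n2, n3, a=1.0, b=1.0):
        """Gielis superformula: radius as a function of polar angle phi."""
        term = (np.abs(np.cos(m * phi / 4.0) / a) ** n2
                + np.abs(np.sin(m * phi / 4.0) / b) ** n3)
        return term ** (-1.0 / n1)

    # Sample one anisotropic 2D cell boundary (hypothetical parameters).
    phi = np.linspace(0.0, 2.0 * np.pi, 400)
    r = superformula(phi, m=6, n1=0.5, n2=1.7, n3=1.7)
    x, y = r * np.cos(phi), r * np.sin(phi)

Varying m, n1, n2, and n3 changes the symmetry and "squareness" of the cell, which is what gives the representation its extra degrees of freedom for anisotropic design.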

Topics: Anisotropy, Design
Commentary by Dr. Valentin Fuster
2018;():V02AT03A041. doi:10.1115/DETC2018-85953.

Additive Manufacturing (AM) is a novel process that enables the manufacturing of complex geometries through layer-by-layer deposition of material. AM processes provide a stark contrast to traditional, subtractive manufacturing processes, which has resulted in the emergence of design for additive manufacturing (DfAM) to capitalize on AM’s capabilities. In order to support the increasing use of AM in engineering, it is important to shift from the traditional design-for-manufacturing-and-assembly mindset towards integrating DfAM. To facilitate this, DfAM must be included in the engineering design curriculum in a manner that has the highest impact. While previous research has systematically organized DfAM concepts into process capability-based (opportunistic) and limitation-based (restrictive) considerations, limited research has been conducted on the impact of teaching DfAM on students’ design processes. This study investigates this interaction by comparing two DfAM educational interventions conducted at different points in the academic semester. The two versions are compared by evaluating the students’ perceived utility, change in self-efficacy, and use of DfAM concepts in design. The results show that introducing DfAM early in the semester, when students have little prior experience with AM, resulted in the largest gains in students’ perceived utility of learning DfAM concepts and in their DfAM self-efficacy. Further, we see that this increase relates to greater application of opportunistic DfAM concepts in student design ideas in a DfAM challenge. However, no difference was seen in the application of restrictive DfAM concepts between the two interventions. These results can be used to guide the design and implementation of DfAM education.

Commentary by Dr. Valentin Fuster
2018;():V02AT03A042. doi:10.1115/DETC2018-86191.

This paper investigates the application of the Superformula for structural synthesis. The focus is set on the lightweight design of parts that can be realized using discrete lattice structures. While the design domain is obtained using the Superformula, a tetrahedral meshing technique is applied to this domain to generate the topology of the lattice structure. The motivation for this investigation stems from the ability of the Superformula to easily represent complex biological shapes, which opens the possibility of directly linking structural synthesis to biomimetic design. Numerous results have been reported on the development of a wide range of design methods and tools that first study and then utilize solutions and principles from nature to solve technical problems. However, none of these methods and tools quantitatively utilizes these principles in the form of nature-inspired shapes that can be controlled parametrically. The motivation for this work is also in part due to the mathematical formulation of the Superformula as a generalization of the superellipse, which, in contrast to conventional surface modeling, offers a very compact and easy-to-handle set of rich shape variants with promising applications in structural synthesis. The structural synthesis approach is organized as a volume minimization using Simulated Annealing (SA) to search over the topology and shape of the lattice structure. The fitness of each candidate solution generated by SA is determined based on the outcome of lattice member sizing, for which an interior-point method is applied. The approach is validated with a case study involving inline skate wheel spokes.
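
The outer SA search described above can be summarized by a generic annealing loop; the sketch below is only a minimal illustration, where objective() and perturb() stand in for the paper's lattice meshing, member sizing, and shape/topology moves (both are hypothetical placeholders).

    import math, random

    def simulated_annealing(x0, objective, perturb, t0=1.0, cooling=0.95, iters=500):
        """Generic SA loop; objective() would wrap lattice generation + member sizing."""
        x, fx = x0, objective(x0)
        best, fbest = x, fx
        t = t0
        for _ in range(iters):
            x_new = perturb(x)
            f_new = objective(x_new)
            # Accept improvements, or worse candidates with Boltzmann probability.
            if f_new < fx or random.random() < math.exp((fx - f_new) / t):
                x, fx = x_new, f_new
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling
        return best, fbest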

Commentary by Dr. Valentin Fuster
2018;():V02AT03A043. doi:10.1115/DETC2018-86238.

The use of additive manufacturing (AM) for fabricating industrial-grade components has increased significantly in recent years. Numerous industrial entities are looking to leverage new AM techniques to fabricate components that were previously manufactured using conventional techniques such as subtractive manufacturing or casting. Therefore, it is becoming increasingly important to be able to rigorously evaluate the technical and economic feasibility of additively manufacturing a component relative to conventional alternatives. To support this evaluation, this paper presents a framework that investigates fabrication feasibility for AM from three perspectives: geometric evaluation, build orientation and support generation, and required resources (i.e., cost and time). The core functionality of the framework is built on a voxelized model representation, a discrete, binary format of 3D continuous objects. The AM fabrication feasibility analysis is applied to 34 parts representing a wide range of manifolds and valves commonly found in the aerospace industry and currently produced with conventional manufacturing techniques. The results obtained illustrate the capability and generalizability of the framework to analyze intricate geometries and provide a preliminary assessment of the feasibility of the AM process.
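
To make the voxelized representation concrete, the sketch below rasterizes a simple implicit solid (a sphere, standing in for a CAD part) into a binary occupancy grid and computes basic descriptors of the kind a geometric feasibility screen might use; the grid size, pitch, and metrics are illustrative assumptions, not the paper's pipeline.

    import numpy as np

    pitch = 1.0                         # voxel edge length (arbitrary units)
    grid = np.indices((40, 40, 40)).T.reshape(-1, 3) * pitch
    center, radius = np.array([20.0, 20.0, 20.0]), 15.0
    occupancy = np.linalg.norm(grid - center, axis=1) <= radius
    voxels = occupancy.reshape(40, 40, 40)

    # Simple geometric descriptors a feasibility screen might start from.
    solid_volume = voxels.sum() * pitch**3
    bbox_volume = np.prod(voxels.shape) * pitch**3
    print(f"fill ratio: {solid_volume / bbox_volume:.2f}")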

Commentary by Dr. Valentin Fuster
2018;():V02AT03A044. doi:10.1115/DETC2018-86266.

This paper extends a recently developed adjoint framework for efficient overhang filtering to projection-based methods for overhang constraints. The developed approach offers a fast and efficient computational methodology, enables a minimum allowable self-supporting angle and a minimum allowable feature size to be enforced within the formulation of the optimization problem, and yields components that can be manufactured without support structures. The adjoint-based sensitivity formulation is derived to eliminate the computational cost associated with direct differentiation in the formulation of overhang constraints, which becomes prohibitive in large-scale 2D and 3D problems. The developed formulation is tested on structural problems, and numerical examples are provided to demonstrate the efficiency of the proposed methodology.

Commentary by Dr. Valentin Fuster
2018;():V02AT03A045. doi:10.1115/DETC2018-86344.

Geometric tolerances for new products are sometimes assigned without specific knowledge of the cost or feasibility of manufacturing them to the assigned tolerances, which can significantly drive up production costs and lead to delays and design revisions. We present an interactive tool that quickly estimates the manufacturability of assigned tolerances for additive manufacturing and a compact visualization to present this information to the designer. The designer can use the system to explore feasible build orientations and then adjust specified tolerance limits if all tolerances are not simultaneously achievable at a single orientation. After the designer is satisfied that the range of feasible orientations has been fully explored, a physical programming approach is used to identify a single orientation that best satisfies the designer’s preferences. The calculation and visualization of the results are performed in real time, enabling quick iteration. A test case is presented to illustrate the use of the tool.

Commentary by Dr. Valentin Fuster

44th Design Automation Conference: Design for Market Systems

2018;():V02AT03A046. doi:10.1115/DETC2018-85657.

Experimentation and validation tests conducted by or for technology startups are often costly, time-consuming, and, above all, not well organized. A review of the literature shows that existing tools and methods are either oriented towards lean iterative tests or strongly focused on technology improvement. There is therefore a gap to bridge by providing tangible decision-making support that involves both market and technology aspects. This paper introduces a new quantitative methodology called RITHM (Roadmapping Investments in TecHnology and Marketing), a structured process that enables startups to systematically experiment and reach, with relatively small effort, an adequate maturity level for the most promising markets. The objective of this methodology is to model and optimize tests in the front end of innovation to progressively reduce uncertainties and risks before the launch of the product. A case study of a shape-shifting technology is presented in this paper to illustrate the application of RITHM.

Commentary by Dr. Valentin Fuster
2018;():V02AT03A047. doi:10.1115/DETC2018-85992.

Understanding and integrating a user’s decision-making process into design and implementation strategies for clean energy technologies may lead to higher product adoption rates and ultimately increased impact, particularly for products that require a change in habit or behavior. To evaluate the key attributes that shape a user’s decision to adopt a new clean technology, this study applies the Theory of Planned Behavior, a method for quantifying the main psychological attributes that make up a user’s intention toward health and environmental behaviors. The theory was applied to the study of biomass cookstoves. Surveys in two rural communities in Honduras and Uganda were conducted to evaluate households’ intentions regarding adoption of improved biomass cookstoves. A multiple ordered logistic regression model produced the most statistically significant results for the collected case-study data. Baseline results showed users had a significantly positive mindset toward replacing their traditional practices. In Honduras, users valued smoke reduction more than other attributes: on average, the odds of using the clean technology were 2.1 times greater for a household with a slightly more positive attitude toward reducing smoke emissions than for one that valued smoke reduction less. In Uganda, reduced firewood consumption was the most important attribute: on average, the odds of adopting the clean technology to save fuel were 1.9 times greater for households that valued fuelwood savings than for those that did not value them as much. After two months of using a cookstove in Honduras, households’ perception of the feasibility of replacing traditional stoves (perceived behavioral control) slightly decreased, suggesting that as users became more familiar with the clean technology they perceived fewer hindrances to changing their traditional habits. Information such as this could be used in the design of technologies that require user behavior change to be effective.
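
Odds ratios such as the 2.1 and 1.9 reported above are obtained by exponentiating the fitted coefficient of the attitude variable. The sketch below is a simplified binary-logit illustration on synthetic data (the paper uses an ordered logistic model, and the variable names and numbers here are hypothetical).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    attitude = rng.normal(size=n)            # e.g., attitude toward smoke reduction
    # Synthetic adoption outcome with a true log-odds slope of about 0.7.
    p = 1.0 / (1.0 + np.exp(-(0.2 + 0.7 * attitude)))
    adopted = rng.binomial(1, p)

    model = LogisticRegression().fit(attitude.reshape(-1, 1), adopted)
    odds_ratio = np.exp(model.coef_[0][0])   # odds ratio per unit increase in attitude
    print(f"odds ratio: {odds_ratio:.2f}")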

Commentary by Dr. Valentin Fuster
2018;():V02AT03A048. doi:10.1115/DETC2018-86018.

In a multi-reservoir system, ensuring adequate water availability across reservoirs while managing conflicting goals under uncertainty is critical to making the social-ecological system sustainable. The priorities of multiple user groups and the availability of the water resource may vary with time, weather, and other factors. Uncertainties such as variation in precipitation add complexity, which intensifies the discrepancies between water supply and water demand for each user group. To reduce such discrepancies, the conflicting goals must be satisficed while accounting for typical uncertainties.

We observe that models are incomplete and inaccurate, which challenges the assumption that a single optimal solution will be robust to uncertainties. We therefore explore satisficing solutions that are relatively insensitive to uncertainties by incorporating different design preferences, identifying sensitive segments, and improving the design accordingly. This work is an example of exploring the solution space to enhance sustainability in multidisciplinary systems when goals conflict, preferences evolve, and uncertainties add complexity.

Topics: Dams, Water, Uncertainty
Commentary by Dr. Valentin Fuster
2018;():V02AT03A049. doi:10.1115/DETC2018-86170.

Though academic research on identifying and considering the social impact of products is emerging, the actual use of these processes in industry is not well documented in the literature. The gap between academic research and industry adoption of these theories and methodologies can have real consequences. This paper explores current practices that design engineers in industry use to consider the social impact of products during the customer use stage. Thirty people from nineteen different companies were interviewed to discover what disconnects exist between academia and industry when considering a product’s social impact. Although social impact assessment (SIA) and social life cycle assessment (SLCA) are two of the most common evaluative processes discussed in the literature, not a single company interviewed used either of these processes despite affirming that they do consider social impact in product design. Respondents instead described predictive processes that tended to be developed within the company and were often related to government regulations.

Topics: Product design
Commentary by Dr. Valentin Fuster
2018;():V02AT03A050. doi:10.1115/DETC2018-86245.

Customer preferences are found to evolve over time and correlate with geographical location. Studying the spatiotemporal heterogeneity of customer preferences is crucial to engineering design, as it provides a dynamic perspective for a thorough understanding of preference trends. However, existing analytical models for demand modeling do not take the spatiotemporal heterogeneity of customer preferences into consideration. To fill this research gap, a spatial panel modeling approach is developed in this study to investigate the spatiotemporal heterogeneity of customer preferences by introducing engineering attributes explicitly as model inputs in support of demand forecasting in engineering design. In addition, a step-by-step procedure is proposed to aid the implementation of the approach. To demonstrate this approach, a case study is conducted on small SUVs in China’s automotive market. Our results show that small SUVs with lower prices, higher power, and lower fuel consumption tend to have a positive impact on sales in each region. In understanding the spatial patterns of China’s small SUV market, we found that each province has a unique spatially specific effect influencing small SUV demand, which suggests that even if the design attributes of a product are changed to the same extent, the resulting effects on product demand might differ across regions. In understanding the underlying socio-economic factors that drive the regional differences, it is found that Gross Domestic Product (GDP) per capita, length of paved roads per capita, and household consumption expenditure have a significantly positive influence on small SUV sales. These results demonstrate the potential of our approach in handling spatial variation among customers for product design and marketing strategy development. The main contribution of this research is an analytical approach that integrates spatiotemporal heterogeneity into demand modeling to support engineering design.
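
A heavily simplified version of this idea is a panel regression with region fixed effects, where engineering attributes enter as regressors; the sketch below uses synthetic data, hypothetical column names, and province dummies in place of the paper's full spatial panel model (no spatial lag terms).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic panel: log sales of a small SUV by province (illustrative only).
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "province": np.repeat([f"P{i}" for i in range(5)], 8),
        "price":    rng.normal(15, 2, 40),
        "power":    rng.normal(110, 10, 40),
        "fuel":     rng.normal(7, 0.5, 40),
    })
    df["log_sales"] = (10 - 0.3 * df.price + 0.02 * df.power
                       - 0.4 * df.fuel + rng.normal(0, 0.2, 40))

    # Province dummies play the role of region-specific effects.
    X = pd.get_dummies(df[["price", "power", "fuel", "province"]],
                       drop_first=True, dtype=float)
    fit = sm.OLS(df["log_sales"], sm.add_constant(X)).fit()
    print(fit.params[["price", "power", "fuel"]])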

Commentary by Dr. Valentin Fuster

44th Design Automation Conference: Design for Resilience and Failure Recovery

2018;():V02AT03A051. doi:10.1115/DETC2018-85207.

Structural control systems, including passive, semi-active, and active damping systems, are used to increase structural resilience to multi-hazard excitations. While semi-active and active damping systems have been investigated for the mitigation of multi-hazard excitations, their requirement for real-time controllers and available power limits their usefulness. This work proposes the use of a newly developed passive variable friction device for the mitigation of multi-hazard events. This passive variable friction device, when installed in a structure, is capable of mitigating different hazards from wind and ground motions. In wind events, the device ensures serviceability, while during earthquake events, the device reduces the building’s inter-story drift to maintain strength-based motion requirements. Results show that the passive variable friction device performs better than a traditional friction damper during a seismic event while not compromising performance during wind events.

Commentary by Dr. Valentin Fuster
2018;():V02AT03A052. doi:10.1115/DETC2018-85318.

Complex engineered systems are often associated with risk due to high failure consequences, high complexity, and large investments. As a result, it is desirable for complex engineered systems to be resilient such that they can avoid or quickly recover from faults. Ideally, this should be done at the early design stage, where designers are most able to explore a large space of concepts. Previous work has shown that functional models can be used to predict fault propagation behavior and motivate design work. However, little has been done to formally optimize a design based on these predictions, partially because the effects of these models have not been quantified into an objective function to optimize. This work introduces a scoring function that integrates with a fault scenario-based simulation to enable the risk-neutral optimization of functional model resilience. The scoring function accomplishes this by resolving the tradeoffs between the design costs, operating costs, and modeled fault response of a given design in a way that may be parameterized in terms of designer-specified resilient features. The scoring function is adapted and applied to the optimization of controlling functions which recover flows in a monopropellant orbiter. In this case study, an evolutionary algorithm is used to find the optimal logic for these functions, showing an improvement over a typical a priori guess by exploring a large range of solutions and demonstrating the value of the approach.
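
A generic, risk-neutral reading of such a scoring function sums design cost, operating cost, and the expected cost of simulated fault scenarios; the sketch below is only an illustration of that structure, not the paper's exact formulation, and all numbers are hypothetical.

    def resilience_score(design_cost, operating_cost, fault_scenarios):
        """Risk-neutral score: lower is better.
        fault_scenarios is a list of (probability, consequence_cost) pairs
        coming from a fault-propagation simulation (assumed interface)."""
        expected_fault_cost = sum(p * c for p, c in fault_scenarios)
        return design_cost + operating_cost + expected_fault_cost

    # Hypothetical comparison of two candidate recovery logics.
    baseline  = resilience_score(1.0e6, 2.0e5, [(0.02, 5.0e6), (0.001, 5.0e7)])
    resilient = resilience_score(1.2e6, 2.1e5, [(0.02, 1.0e6), (0.001, 8.0e6)])
    print(f"baseline: {baseline:.3g}, resilient: {resilient:.3g}")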

Topics: Resilience
Commentary by Dr. Valentin Fuster
2018;():V02AT03A053. doi:10.1115/DETC2018-85373.

Testing of components at higher-than-nominal stress levels provides an effective way of reducing the required testing effort for system reliability assessment. For various reasons, not all components are directly testable in practice. The missing information about untestable components poses significant challenges to the accurate evaluation of system reliability. This paper proposes a sequential accelerated life testing (SALT) design framework for the reliability assessment of systems with untestable components. In the proposed framework, system-level tests are employed in conjunction with component-level tests to effectively reduce the uncertainty in the system reliability evaluation. To minimize the number of system-level tests, which are much more expensive than component-level tests, the accelerated life testing design is performed sequentially. In each design cycle, testing resources are allocated to component-level or system-level tests according to the uncertainty analysis from the system reliability evaluation. The component-level and system-level testing information obtained from the optimized testing plans is then aggregated to obtain the overall system reliability estimate using Bayesian methods. The aggregation of component-level and system-level testing information allows for an effective uncertainty reduction in the system reliability evaluation. Results of two numerical examples demonstrate the effectiveness of the proposed method.
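
As a minimal illustration of how pass/fail test evidence can be aggregated in a Bayesian way, the sketch below uses a conjugate Beta-Binomial update; this is a simple stand-in for the paper's aggregation scheme, and the prior, the mapping of component tests to system-equivalent trials, and all counts are assumptions.

    from scipy import stats

    # Prior belief about system reliability (Beta distribution).
    alpha, beta = 1.0, 1.0

    # Pass/fail evidence: component-derived results mapped to system level
    # (assumed), plus a small number of expensive system-level tests.
    tests = [(28, 30),   # 28 successes in 30 component-equivalent trials
             (4, 5)]     # 4 successes in 5 system-level trials

    for successes, trials in tests:
        alpha += successes
        beta += trials - successes

    posterior = stats.beta(alpha, beta)
    print(f"posterior mean reliability: {posterior.mean():.3f}")
    print(f"90% credible interval: {posterior.interval(0.90)}")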

Commentary by Dr. Valentin Fuster
2018;():V02AT03A054. doi:10.1115/DETC2018-85452.

The concept of resilience has emerged from various domains to address how systems, people, and organizations can handle uncertainty. This paper presents a method to improve the resilience of an engineering system by maximizing the system’s economic lifecycle value, as measured by Net Present Value, under uncertainty. The method is applied to a Waste-to-Energy system based in Singapore, and the impact of combining robust and flexible design strategies to improve resilience is discussed. Robust strategies involve optimizing the initial capacity of the system, while Bayesian Networks are implemented to choose the flexible expansion strategy that should be deployed given the current observations of demand uncertainties. The Bayesian Network shows promise and should be considered further where decisions are more complex. Resilience is further assessed by varying the volatility of the stochastic demand in the simulation. Increasing volatility generally made the system perform worse, since not all demand could be converted to revenue due to capacity constraints. Flexibility shows increased value compared to a fixed design. However, when the system is allowed to upgrade too often, the cost of implementation negates the revenue increase. The better design is to have a high initial capacity, such that there is less restriction on demand, combined with two or three expansions.
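
The lifecycle-value calculation behind this kind of study is a Monte Carlo Net Present Value under stochastic demand; the sketch below shows the standard NPV discounting with capacity-capped revenue, where the capacity, capex, margin, and demand-path parameters are illustrative assumptions rather than the paper's data.

    import numpy as np

    rng = np.random.default_rng(42)
    r, years, n_sims = 0.08, 20, 10_000
    capacity = 1000.0                       # tonnes/day, illustrative
    capex = 50e6

    # Lognormal-style demand paths; volatility is the knob varied in the
    # paper's resilience assessment (numbers here are hypothetical).
    vol, growth, d0 = 0.15, 0.03, 800.0
    shocks = rng.normal(growth - 0.5 * vol**2, vol, size=(n_sims, years))
    demand = d0 * np.exp(np.cumsum(shocks, axis=1))

    served = np.minimum(demand, capacity)   # capacity caps convertible revenue
    cashflow = served * 365 * 30.0          # assumed $/tonne margin
    discount = (1.0 + r) ** -np.arange(1, years + 1)
    npv = -capex + (cashflow * discount).sum(axis=1)
    print(f"expected NPV: {npv.mean()/1e6:.1f} M$, "
          f"P5: {np.percentile(npv, 5)/1e6:.1f} M$")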

Commentary by Dr. Valentin Fuster
2018;():V02AT03A055. doi:10.1115/DETC2018-85708.

Managing potential disruptive events during the operating phase of an engineered system, and thereby improving the system’s failure resilience, is an important yet challenging task in engineering design. The resilience of an engineered system can be improved by enhancing the failure restoration capability of the system with appropriate system control strategies. Therefore, control-guided failure restoration is an essential step in engineering design for resilience. Considering the different characteristics of disruptive events and their impacts on the performance of a system, effective control strategies for failure restoration must be selected accordingly. However, the challenge is to develop generally applicable guiding principles for selecting effective control strategies and thus implementing control-guided failure restoration. In this paper, three commonly used control strategies for dynamic system control are compared, with a focus on their effectiveness in restoring system performance after the system has undergone different major disruptive events. A case study of an electricity transmission system is used to demonstrate the dynamic system modeling and the comparison of the three control strategies for disruption management.

Commentary by Dr. Valentin Fuster
2018;():V02AT03A056. doi:10.1115/DETC2018-86294.

This paper presents a time-dependent reliability estimation method for engineered systems based on machine learning and simulation. Due to the stochastic nature of environmental loads and internal excitations, the physics of failure of a mechanical system is complex, and it is challenging to include uncertainties in the physical modeling of failure over the engineered system’s life cycle. In this paper, an efficient time-dependent reliability assessment framework for mechanical systems is proposed using a machine learning algorithm that accounts for stochastic dynamic loads in the mechanical system. First, the stochastic external loads of the mechanical system are analyzed and the finite element model is established. Second, the physics of the failure mode of the mechanical system at a given time instant is analyzed, and the distribution of the response realization under each load condition is calculated. The distribution of fatigue life can then be obtained based on high-cycle fatigue theory. To reduce the computational cost, a machine learning algorithm integrating uniform design and Gaussian process regression is used for the physical modeling of failure. The probabilistic fatigue life of a gear transmission system under different load conditions can be calculated, and the time-varying reliability of the mechanical system is then evaluated. Finally, numerical examples and the fatigue reliability estimation of a gear transmission system are presented to demonstrate the effectiveness of the proposed method.
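
The Gaussian process regression surrogate step can be illustrated with scikit-learn; in the sketch below the training points stand in for load conditions from a uniform design and the fatigue-life values are synthetic, so the kernel choice, load variable, and numbers are assumptions for illustration only.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Training data: load conditions (e.g., torque level [N*m]) vs. fatigue life
    # that would normally come from FE + high-cycle fatigue analysis (faked here).
    X = np.linspace(200, 800, 12).reshape(-1, 1)
    y = 1e7 * (X.ravel() / 200.0) ** -3 + np.random.default_rng(0).normal(0, 50, 12)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0) + WhiteKernel(),
                                  normalize_y=True).fit(X, y)

    # Surrogate prediction (mean and uncertainty) at unseen load conditions.
    X_new = np.array([[350.0], [650.0]])
    mean, std = gp.predict(X_new, return_std=True)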

Topics: Reliability
Commentary by Dr. Valentin Fuster
2018;():V02AT03A057. doi:10.1115/DETC2018-86314.

The Design for Assembly (DFA) time estimation method developed by G. Boothroyd and P. Dewhurst allows the assembly time of an artifact to be estimated based on an analysis of component features, using handling and insertion tables, for an assembler who is assumed to assemble the artifact one part at a time. Using the tables, each component is assigned an assembly time based on the time required for the assembler to manipulate it (handling time) and the time required for it to interface with the rest of the components (insertion time). Using this assembly time and the ideal assembly time (i.e., the absolute time it takes to assemble the artifact, assuming each component takes the ideal time of three seconds to handle and insert), the method allows the efficiency of a design’s assembly process to be calculated. Another tool occasionally used in Design for Manufacturing (DFM) is Failure Modes and Effects Analysis (FMEA). FMEA is used to evaluate and document failure modes and their impact on system performance. Each failure mode is ranked based on its severity, occurrence, and detectability scores, and corrective actions can then be taken to control risk items. FMEA scores of components can indicate which manufacturing operations matter most and how much effort should be put into each specific component. In this paper, the authors attempt to answer the following two research questions (RQs) to determine the relationships between FMEA scores and DFA assembly time, and to investigate whether a part failure’s severity, occurrence, and detectability can be estimated when handling time and insertion time are known.

RQ (1): Can DFA metrics (handling time and insertion time) be utilized to estimate Failure Mode and Effects scores (severity, occurrence, and detectability)?

RQ (2): How does each response metric relate to predictor metrics (positive, negative, or no relationship)?

This is accomplished by performing Boothroyd and Dewhurst’s DFA time estimation and FMEA on a select set of simple products. Since DFA metrics are based on a combination of the designer’s subjectivity and the part’s geometric specifications, whereas FMEA scores are based only on the designer’s subjectivity, this paper attempts to estimate part failure severity, occurrence, and detectability less subjectively by using the handling time and insertion time. This will also allow earlier and faster acquisition of potential part failure information for use in design and manufacturing processes.
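
The two quantities being related are straightforward to compute: the Boothroyd-Dewhurst design efficiency (ideal time of 3 s per part divided into the estimated assembly time) and the conventional FMEA risk priority number (severity x occurrence x detectability). The sketch below applies both to a hypothetical three-part assembly; all part data are illustrative.

    # (handling time [s], insertion time [s], severity, occurrence, detectability)
    parts = [
        (1.95, 2.5, 3, 2, 4),
        (2.25, 6.5, 7, 3, 5),
        (1.50, 1.5, 2, 2, 2),
    ]

    total_time = sum(h + i for h, i, *_ in parts)
    n_min = len(parts)                       # theoretical minimum part count (assumed)
    efficiency = 3.0 * n_min / total_time    # 3 s = ideal handle-and-insert time per part

    rpn = [s * o * d for *_, s, o, d in parts]   # FMEA risk priority numbers
    print(f"assembly efficiency: {efficiency:.2f}, RPNs: {rpn}")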

Commentary by Dr. Valentin Fuster
2018;():V02AT03A058. doi:10.1115/DETC2018-86347.

Over the past two decades, the safety and reliability of lithium-ion (Li-ion) rechargeable batteries have been receiving a considerable amount of attention from both industry and academia. To guarantee safe and reliable operation of a Li-ion battery pack and build failure resilience into the pack, battery management systems (BMSs) should possess the capability to monitor, in real time, the state of health (SOH) of the individual cells in the pack. This paper presents a deep learning method, namely deep convolutional neural networks, for cell-level SOH assessment based on the capacity, voltage, and current measurements during a charge cycle. The unique features of deep convolutional neural networks include local connectivity and shared weights, which enable the model to estimate battery capacity accurately using the measurements during charge. To our knowledge, this is the first attempt to apply deep learning to online SOH assessment of Li-ion batteries. Ten years of daily cycling data from implantable Li-ion cells are used to verify the performance of the proposed method. Compared with traditional machine learning methods such as the relevance vector machine and shallow neural networks, the proposed method is demonstrated to produce higher accuracy and robustness in capacity estimation.
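
A minimal 1D convolutional regressor of this kind is easy to sketch in PyTorch; the layer sizes, channel choices, and sequence length below are illustrative assumptions and not the paper's architecture.

    import torch
    import torch.nn as nn

    class CapacityCNN(nn.Module):
        """Minimal 1D CNN mapping charge-cycle measurements to an estimated
        capacity (an SOH proxy). Architecture is illustrative only."""
        def __init__(self, in_channels=3, seq_len=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(32, 1)

        def forward(self, x):                # x: (batch, channels, samples)
            return self.head(self.features(x).squeeze(-1))

    # One batch of charge cycles: voltage, current, capacity resampled to 128 points.
    x = torch.randn(8, 3, 128)
    capacity_estimate = CapacityCNN()(x)     # shape (8, 1)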

Commentary by Dr. Valentin Fuster

44th Design Automation Conference: Design of Complex Systems

2018;():V02AT03A059. doi:10.1115/DETC2018-85137.

In the design process of a complex engineered system, studying the behavior of the system prior to manufacturing plays a key role in reducing design cost and enhancing the efficiency of the system over its life cycle. To study the behavior of the system in the early design phase, the characteristics of the system must be modeled and its behavior simulated. The challenge is that in the early design stage there is little or no information about the real system’s behavior, so there are not enough data with which to validate the model simulation and ensure that the model represents the real system’s behavior appropriately. In this paper, we address this issue and propose methods for validating models developed in the early design stage. First, we propose a method based on FMEA and show how to quantify expert knowledge to validate the model simulation in the early design stage. Then, we propose a non-parametric technique to test whether the observed behavior of one or more existing subsystems and the model simulation are the same. In addition, a local sensitivity analysis search tool is developed that helps designers focus on sensitive parts of the system in later design stages, particularly when mapping the conceptual model to a component model. We apply the proposed methods to validate the output of a failure simulation developed in the early stage of designing a monopropellant propulsion system.
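
The abstract does not name its non-parametric test; one common choice for comparing an observed subsystem response with simulated output is the two-sample Kolmogorov-Smirnov test, sketched below on synthetic data as an illustration only.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    # Observed behavior of an existing subsystem vs. the corresponding model
    # output (both synthetic here; e.g., time-to-failure samples in hours).
    observed  = rng.weibull(1.5, 200) * 1000.0
    simulated = rng.weibull(1.5, 200) * 1000.0

    stat, p_value = ks_2samp(observed, simulated)
    # A small p-value would indicate the two distributions differ.
    print(f"KS statistic: {stat:.3f}, p-value: {p_value:.3f}")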

Commentary by Dr. Valentin Fuster
2018;():V02AT03A060. doi:10.1115/DETC2018-85941.

Systems engineering processes coordinate the efforts of many individuals to design a complex system. However, the goals of the involved individuals do not necessarily align with the system-level goals. Everyone, including managers, systems engineers, subsystem engineers, component designers, and contractors, is self-interested. It is not currently understood how this discrepancy between organizational and personal goals affects the outcome of complex systems engineering processes. To answer this question, we need a systems engineering theory that accounts for human behavior. Such a theory can ideally be expressed as a dynamic hierarchical network game of incomplete information. The nodes of this network represent individual agents and the edges the transfer of information and incentives. All agents decide independently on how much effort they should devote to a delegated task by maximizing their expected utility; the expectation is over their beliefs about the actions of all other individuals and the moves of nature. An essential component of such a model is the quality function, defined as the map between an agent’s effort and the quality of their job outcome. In the economics literature, the quality function is assumed to be a linear function of effort with additive Gaussian noise. This simplistic assumption ignores two critical factors relevant to systems engineering: (1) the complexity of the design task, and (2) the problem-solving skills of the agent. Systems engineers establish their beliefs about these two factors through years of job experience. In this paper, we encode these beliefs in clear mathematical statements about the form of the quality function. Our approach proceeds in two steps: (1) we construct a generative stochastic model of the delegated task, and (2) we develop a reduced-order representation suitable for use in a more extensive game-theoretic model of a systems engineering process. Focusing on the early design stages of a systems engineering process, we model the design task as a function maximization problem and, thus, we associate the systems engineer’s beliefs about the complexity of the task with their beliefs about the complexity of the function being maximized. Furthermore, we associate an agent’s problem-solving skills with the strategy they use to solve the underlying function maximization problem. We identify two agent types: “naïve” (follows a random search strategy) and “skillful” (follows a Bayesian global optimization strategy). Through an extensive simulation study, we show that the assumption of a linear quality function is only valid for small effort levels. In general, the quality function is an increasing, concave function with derivative and curvature that depend on the problem complexity and the agent’s skills.
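
A toy calculation hints at why the "naive" (random-search) agent's quality function is increasing and concave: on a uniform [0, 1] objective, the expected best value after n random evaluations is n / (n + 1). The sketch below checks this by simulation; it is an illustrative analogy, not the paper's generative model.

    import numpy as np

    # Expected best objective found by uniform random search with 'effort' samples:
    # E[max of n U(0,1) draws] = n / (n + 1), increasing and concave in n.
    rng = np.random.default_rng(0)
    for effort in (1, 5, 20, 100):
        best = rng.random((20_000, effort)).max(axis=1).mean()
        print(f"effort={effort:4d}  simulated quality={best:.3f}  "
              f"analytic={effort / (effort + 1):.3f}")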

Commentary by Dr. Valentin Fuster
2018;():V02AT03A061. doi:10.1115/DETC2018-86013.

In this paper, a deep reinforcement learning approach is implemented to achieve autonomous collision avoidance. A transfer reinforcement learning (TRL) approach is proposed by introducing two concepts: transfer belief, how much confidence the agent puts in the expert’s experience, and transfer period, how long the agent’s decisions are influenced by the expert’s experience. Various case studies have been conducted on transfer from a simple task (a single static obstacle) to a complex task (multiple dynamic obstacles). It is found that if the two tasks have low similarity, it is better to decrease the initial transfer belief and keep a relatively longer transfer period, in order to reduce negative transfer and boost learning. The student agent’s learning variance grows significantly if the transfer period is too short.
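
The abstract does not specify how transfer belief and transfer period enter the action selection; one plausible reading, sketched below purely as an assumption, is to follow the expert with a probability that starts at the transfer belief and decays to zero over the transfer period.

    import random

    def choose_action(state, expert_policy, student_policy,
                      step, transfer_belief=0.8, transfer_period=5_000):
        """Hypothetical mechanism (not the paper's stated one): take the expert's
        action with probability transfer_belief, decayed linearly to zero over
        transfer_period environment steps; otherwise act with the student policy."""
        p_expert = transfer_belief * max(0.0, 1.0 - step / transfer_period)
        if random.random() < p_expert:
            return expert_policy(state)
        return student_policy(state)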

Commentary by Dr. Valentin Fuster
2018;():V02AT03A062. doi:10.1115/DETC2018-86049.

Long-lived systems will experience many successive changes during their lifecycle as they are adapted to meet new system requirements. Existing change propagation tools predict how changes to a system’s design at a fixed point in its life are likely to spread, but they have not been extended to consider a series of successive modifications in which the change propagation probabilities are updated. This change in propagation probabilities in response to successive changes is introduced as Dynamic Change Propagation (DCP). This paper integrates research from change propagation, network theory, and excess to achieve the following objectives: 1) describe how a DCP model predicts system change propagation trajectories, 2) use a new synthetic test case generator to correlate network parameters such as degree distribution with DCP, and 3) determine the correlations between a measure of DCP and a selection of existing change propagation metrics. Results indicate that DCP is limited by reducing the number of dependencies between components (affirming the usefulness of adding modularity to a system) and by including high-degree component “hubs” between components.
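
For readers unfamiliar with change propagation models, the sketch below runs a generic Monte Carlo spread of a change over a component network described by a probability-weighted adjacency (DSM-style) matrix; it is a baseline static propagation model for illustration, not the paper's dynamic update rule, and the matrix values are hypothetical.

    import numpy as np

    def propagate(adjacency, start, rng, max_steps=50):
        """Monte Carlo spread of one change through a component network.
        adjacency[i, j] = probability that a change in i propagates to j."""
        changed = {start}
        frontier = [start]
        for _ in range(max_steps):
            new = []
            for i in frontier:
                for j in range(adjacency.shape[1]):
                    if j not in changed and rng.random() < adjacency[i, j]:
                        changed.add(j)
                        new.append(j)
            if not new:
                break
            frontier = new
        return changed

    rng = np.random.default_rng(0)
    dsm = rng.random((8, 8)) * 0.15          # hypothetical, weakly coupled system
    np.fill_diagonal(dsm, 0.0)
    sizes = [len(propagate(dsm, 0, rng)) for _ in range(1000)]
    print(f"mean number of affected components: {np.mean(sizes):.2f}")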

Commentary by Dr. Valentin Fuster
