ASME Conference Presenter Attendance Policy and Archival Proceedings

2015;():V02AT00A001. doi:10.1115/DETC2015-NS2A.

This online compilation of papers from the ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2015) represents the archival version of the Conference Proceedings. According to ASME’s conference presenter attendance policy, if a paper is not presented at the Conference, the paper will not be published in the official archival Proceedings, which are registered with the Library of Congress and are submitted for abstracting and indexing. The paper also will not be published in The ASME Digital Collection and may not be cited as a published paper.

41st Design Automation Conference: Application-Tailored Optimization Methods

2015;():V02AT03A001. doi:10.1115/DETC2015-46311.

This paper considers multiobjective optimization under uncertainty (MOOUC) for the selection of optimal cutting conditions in advanced abrasive machining processes. The processes considered are water-jet machining, abrasive water-jet machining, and ultrasonic machining. Decisions regarding the cutting conditions can involve optimization for multiple competing goals, such as surface finish, machining time, and power consumption. In practice, there are also variations in the ability to attain the performance goals, due to limitations in machine accuracy or variations in material properties of the workpiece and/or abrasive particles. The approach adopted in this work relies on a Strength Pareto Evolutionary Algorithm (SPEA2) framework, with specially tailored dominance operators to account for probabilistic aspects of the considered multiobjective problem. Deterministic benchmark problems in the literature for the considered machining processes are extended to include performance uncertainty and then used to test the performance of the proposed approach. Results of the study show that accounting for process variations through a simple penalty term may be detrimental to the multiobjective optimization. On the other hand, the proposed fuzzy-tournament dominance operator appears to produce favorable results.
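For illustration, the following sketch implements one plausible probabilistic tournament of the kind the paper tailors into SPEA2, assuming independent normal uncertainty on each (minimized) objective; the operator and all numbers are illustrative assumptions, not the authors' exact formulation.

import numpy as np
from scipy.stats import norm

def prob_less(mu_a, sd_a, mu_b, sd_b):
    # P(A < B) for independent normal objectives (minimization).
    return norm.cdf((mu_b - mu_a) / np.sqrt(sd_a**2 + sd_b**2))

def dominance_degree(a, b):
    # Aggregate degree to which solution a dominates b: product of the
    # per-objective probabilities that a is better.
    (mu_a, sd_a), (mu_b, sd_b) = a, b
    return float(np.prod(prob_less(mu_a, sd_a, mu_b, sd_b)))

def fuzzy_tournament(a, b, rng):
    # Binary tournament: keep the solution with the higher dominance degree.
    d_ab, d_ba = dominance_degree(a, b), dominance_degree(b, a)
    if np.isclose(d_ab, d_ba):
        return a if rng.random() < 0.5 else b
    return a if d_ab > d_ba else b

rng = np.random.default_rng(0)
a = (np.array([1.0, 2.0]), np.array([0.1, 0.3]))  # (means, stds) per objective
b = (np.array([1.2, 1.9]), np.array([0.2, 0.2]))
print(fuzzy_tournament(a, b, rng))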

2015;():V02AT03A002. doi:10.1115/DETC2015-46472.

This paper presents a theoretical study on optimizing the mixing ratios of hydrocarbon blends to be used as refrigerants in existing refrigeration equipment. The primary objective is to maximize the coefficient of performance. The gas blending optimization problem is posed in a multi-objective framework, where the optimization seeks to generate Pareto optimal solutions that span the trade-off frontier between coefficient of performance and deviation from a desired volumetric refrigeration capacity, while adhering to a maximum compression ratio. Design variables in the optimization are the mass fractions of hydrocarbon gases in the blend. A domain reduction scheme is introduced, which allows exhaustive search to be conducted efficiently with up to three hydrocarbon gases in the blend. While exhaustive search guarantees that the obtained solutions are global optima, the computational resources it requires scale poorly as the number of design variables increases. Two alternative approaches, multi-start SQP and NSGA-II, are also tested for solving the optimization problem. Numerical simulation case studies for replacement of R12, R22, and R134a with hydrocarbon blends of isobutane, propane, and propylene show agreement among the solution methods that good compromises are achievable, but a small loss in coefficient of performance is inevitable.
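A minimal sketch of the multi-start SQP route, under stand-in property models: cop and cap_dev below are hypothetical placeholders for the cycle simulations, the compression-ratio constraint is omitted for brevity, and the mass fractions are bounded to [0, 1] and constrained to sum to one while a weight sweep traces the trade-off frontier.

import numpy as np
from scipy.optimize import minimize

def cop(x):
    # Hypothetical stand-in for a cycle-simulation coefficient of performance.
    return 3.0 + 0.8*x[0] + 0.5*x[1] + 0.3*x[2] - 1.2*x[0]*x[2]

def cap_dev(x, target=1.0):
    # Hypothetical squared deviation from the desired volumetric capacity.
    return (0.9*x[0] + 1.1*x[1] + 1.3*x[2] - target)**2

def scalarized(x, w):
    # Weighted-sum scalarization; COP is negated because we minimize.
    return -w*cop(x) + (1.0 - w)*cap_dev(x)

cons = [{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}]  # fractions sum to 1
bounds = [(0.0, 1.0)] * 3
rng = np.random.default_rng(1)

for w in np.linspace(0.05, 0.95, 10):       # sweep weights to trace the front
    best = None
    for _ in range(20):                     # multi-start to escape local optima
        x0 = rng.dirichlet(np.ones(3))      # random feasible starting blend
        res = minimize(scalarized, x0, args=(w,), method="SLSQP",
                       bounds=bounds, constraints=cons)
        if res.success and (best is None or res.fun < best.fun):
            best = res
    if best is not None:
        print(np.round(best.x, 3), round(cop(best.x), 3), round(cap_dev(best.x), 5))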

2015;():V02AT03A003. doi:10.1115/DETC2015-47532.

A recent and promising technique to overcome the challenges of conventional drilling is vibration-assisted drilling (VAD), whereby a controlled harmonic motion is superimposed over the principal drilling feed motion in order to create an intermittent cutting state. Two additional variables beyond the feed and the speed are introduced, namely the frequency and the amplitude of the imposed vibrations. Optimum selection of cutting conditions in VAD operations on composite materials is a challenging task for several reasons: the increase in the number of controllable variables, the need for costly experimentation, and the limit on the number of experiments that can be performed before tool degradation compromises the reliability of measurements. Additionally, there are often several objectives to consider, some of which may be conflicting, while others may be somewhat correlated. Pareto-optimality analysis is needed for conflicting objectives; however, the existence of several objectives (a high-dimensional Pareto space) makes the generation and interpretation of Pareto solutions difficult. An attractive approach to the optimization task is thus to employ Kriging meta-models in a multi-objective efficient global optimization (m-EGO) framework for incremental experimentation toward optimal settings of the cutting parameters. The additional challenge posed by constraints on machine capabilities is accounted for through domain transformation of the design variables prior to the construction of the Kriging models. Study results using baseline exhaustive experimental data show an opportunity to employ m-EGO for the generation of well-distributed Pareto frontiers with fewer experiments.
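The sketch below shows the single-objective core of such a Kriging-based EGO loop (the paper's m-EGO generalizes it to multiple objectives): fit a Gaussian process to the experiments run so far, pick the next experiment by expected improvement, and repeat. The objective function here is a hypothetical stand-in for one drilling response.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_experiment(x):            # stand-in for a VAD drilling trial
    return np.sin(3*x) + 0.5*x**2

def expected_improvement(mu, sd, y_best):
    sd = np.maximum(sd, 1e-12)
    z = (y_best - mu) / sd              # minimization convention
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(5, 1))     # small initial design of experiments
y = expensive_experiment(X).ravel()

for it in range(15):                    # incremental experimentation loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    cand = np.linspace(-2, 2, 401).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_experiment(x_next))

print("best setting:", X[np.argmin(y)].ravel(), "response:", y.min())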

41st Design Automation Conference: Artificial Intelligence and Computational Synthesis

2015;():V02AT03A004. doi:10.1115/DETC2015-46078.

Metal Organic Responsive Frameworks (MORFs) are a proposed new class of smart materials consisting of a Metal Organic Framework (MOF) with photoisomerizing beams (also known as linkers) that fold in response to light. Within a device, these new light-responsive materials could provide capabilities such as photo-actuation, photo-tunable rigidity, and photo-tunable porosity. However, conventional MOF architectures are too rigid to allow isomerization of photoactive sub-molecules. We propose a new computational approach for designing MOF linkers to have the required mechanical properties to allow the photoisomer to fold, borrowing concepts from de novo molecular design and graph synthesis. Here we show how this approach can be used to design compliant linkers with the necessary flexibility to be actuated by photoisomerization, and thereby to design MORFs with desired functionality.

Topics: Design
2015;():V02AT03A005. doi:10.1115/DETC2015-46236.

In many design and manufacturing applications, data inconsistency or noise is common. These data can be used to create opportunities and/or support critical decisions, for example, in welding quality prediction for material selection and quality monitoring. Typical approaches to such data issues are to remove or alter the offending data before constructing any model or conducting any analysis. However, these approaches are limited, especially when each data point carries important information about the nature of the given problem. In the literature, bootstrap aggregating (bagging) has been shown to improve prediction accuracy in the presence of noisy data. To achieve such an improvement, a bagging model has to be carefully constructed: the base learning algorithm, the number of base learners, and the parameters of the base learners are crucial design choices. Evolutionary algorithms such as the genetic algorithm and particle swarm optimization have shown promising results in determining good parameters for learning algorithms such as multilayer perceptron neural networks and support vector regression. However, the computational cost of an evolutionary algorithm is usually high, as it requires a large number of candidate solution evaluations, and this cost increases further when bagging is involved rather than a single learning algorithm. To reduce this cost, a metamodeling approach is introduced into particle swarm optimization. The metamodeling approach reduces the number of fitness function evaluations in the particle swarm optimization process and therefore the overall computational cost. In this paper, we propose a prediction modeling framework that constructs a bagging model to improve prediction accuracy on noisy data. The proposed framework is tested on an artificially generated noisy dataset. The quality of the final solutions obtained by the proposed framework is comparable to particle swarm optimization without metamodeling, while the largest observed improvement in computational time is about 42 percent.
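A minimal sketch of the surrogate-assisted PSO idea under simplified assumptions: particles tune two bagging hyperparameters, and a cheap Kriging metamodel of the fitness pre-screens each swarm so that only the most promising particles pay for a real model fit. This illustrates the mechanism, not the paper's exact framework.

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(0, 0.5, 300)   # noisy data
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def true_fitness(p):                    # expensive: fit and score a bagging model
    n_est = int(round(p[0])); max_samp = float(np.clip(p[1], 0.1, 1.0))
    m = BaggingRegressor(n_estimators=n_est, max_samples=max_samp, random_state=0)
    m.fit(Xtr, ytr)
    return mean_squared_error(yva, m.predict(Xva))

lo, hi = np.array([5.0, 0.1]), np.array([80.0, 1.0])
n_part = 12
pos = rng.uniform(lo, hi, size=(n_part, 2))
vel = np.zeros_like(pos)
fit = np.array([true_fitness(p) for p in pos])            # initial true evals
pbest, pbest_f = pos.copy(), fit.copy()
gbest = pos[np.argmin(fit)].copy()
hist_X, hist_y = list(pos), list(fit)                     # surrogate archive

for it in range(8):
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    surr = GaussianProcessRegressor(normalize_y=True)
    surr.fit(np.array(hist_X), np.array(hist_y))
    pred = surr.predict(pos)
    for i in np.argsort(pred)[:4]:      # only the top 4 get true evaluations
        f = true_fitness(pos[i])
        hist_X.append(pos[i].copy()); hist_y.append(f)
        if f < pbest_f[i]:
            pbest_f[i], pbest[i] = f, pos[i].copy()
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best params:", int(round(gbest[0])), round(float(gbest[1]), 2),
      "val MSE:", round(float(pbest_f.min()), 4))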

2015;():V02AT03A006. doi:10.1115/DETC2015-46373.

Synthesizing principle solutions (PSs) from various disciplines is a common practice in multi-disciplinary conceptual design (MDCD), which generates combinations of PSs to meet the desired functional requirements. Different from structure- and function-based synthesis methods, a hybrid PS synthesis (HPSS) method integrating functional and structural knowledge is proposed in this paper, which not only achieves the automated synthesis of multi-disciplinary PSs, but also resolves undesired physical conflicts during the synthesis process. It comprises a unified representation approach for modeling functional and structural knowledge of multi-disciplinary PSs, an adapted agent-based approach for chaining the specified functional flows of PSs, and an extension conflict-resolution approach for handling partial design conflicts among selected PSs. An industrial case study on the design of an emergency cutting off (ECO) device is given to validate the applicability of HPSS. It indicates that HPSS can not only produce a multi-disciplinary design result for the ECO device, but also resolve the design conflict (i.e., vibration impact) to optimize the device's functional structure.

2015;():V02AT03A007. doi:10.1115/DETC2015-46761.

Design grammars have been successfully applied in numerous engineering disciplines, e.g. in electrical engineering, architecture and mechanical engineering. A successful application of design grammars in Computational Design Synthesis (CDS) requires a) a meaningful representation of designs and the design task at hand, b) a careful formulation of grammar rules to synthesize new designs, c) problem-specific design evaluations, and d) the selection of an appropriate algorithm to guide the synthesis process. Managing these different aspects of CDS requires a detailed understanding not only of each individual part, but also of the interdependencies between them. In this paper, a new method is presented to analyze the exploration of design spaces in CDS. The method analyzes the designs generated during the synthesis process and visualizes how the design space is explored with respect to a) design characteristics and b) objectives. The selected algorithm as well as the grammar rules can be analyzed with this approach to support the human designer in understanding and applying a CDS method successfully. A case study demonstrates how the method is used to analyze the synthesis of bicycle frames, comparing two algorithms for this task. Results demonstrate how the method increases the understanding of the different components in CDS. The presented research can be useful both for novices to CDS, helping them gain a deeper understanding of the interplay between grammar rules and guidance of the synthesis process, and for experts aiming to further improve their CDS application by tuning the parameter settings of their search algorithms or by further refining their design grammar. Additionally, the presented method constitutes a novel approach to interactively visualize design space exploration considering not only design objectives, but also the characteristics and interdependencies of different designs.

Topics: Design
2015;():V02AT03A008. doi:10.1115/DETC2015-47687.

This work hypothesizes that enhancing next-generation products' distinctiveness through function-form synthesis results in feasible design concepts for designers. A data-mining-driven methodology that searches for novel function and form candidates suitable for inclusion in next-generation product design is introduced. The methodology employs a topic modeling algorithm to search for functional relationships between the current product design and designs from related/unrelated domains. Combining the current product design with candidate products' form and function acquired from related/unrelated domains generates next-generation design concepts. These resulting design concepts are not only distinct from their parent designs but are also likely to be implemented in the real world, because they contain novel functions and form features. A hybrid marine model, differentiated from both the current design and candidate products in related/unrelated domains, is introduced in the case study to demonstrate the proposed methodology's potential to develop concepts for novel product domains. By comparing form and function similarity values between the generated design concepts, an existing hybrid marine model (Wing In Ground effect ship: WIG), and the source products, this research verifies the feasibility of these design concepts.

41st Design Automation Conference: Data-Driven Design

2015;():V02AT03A009. doi:10.1115/DETC2015-46836.

We investigate the cost and benefit of crowdsourcing solutions to an NP-complete powertrain design and control problem. Specifically, we cast this optimization problem as an online competition, and received 2391 game plays by 124 anonymous players during the first week after launch. We compare the performance of human players against that of the Efficient Global Optimization (EGO) algorithm. We show that while only a small portion of human players can outperform the algorithm in the long term, players tend to formulate good heuristics early on, from which good solutions can be extracted and used to constrain the solution space. Incorporating this constraint into the search enhances the efficiency of the algorithm, even for problem settings different from the game. These findings indicate that human computation is promising for solving comprehensible yet computationally hard optimal design and control problems.

2015;():V02AT03A010. doi:10.1115/DETC2015-46875.

As awareness of environmental issues increases, pressure from the public and policy makers has pushed OEMs to consider remanufacturing as a key product design option. To make remanufacturing operations more profitable, forecasting product returns is critical, given the uncertainty in their quantity and timing. This paper proposes a predictive model selection algorithm that deals with this uncertainty by identifying better predictive models. Unlike other major approaches in the literature (the distributed lag model, or DLM), the predictive model selection algorithm focuses on predictive power over new or future returns. The proposed algorithm extends the set of candidate models that should be considered: the autoregressive integrated moving average model, or ARIMA (previous returns for future returns), the DLM (previous sales for future returns), and a mixed model (both previous sales and returns for future returns). A prediction performance measure on holdout samples is used to find the better model among them. A case study of reusable bottles shows that one of the candidate models, ARIMA, can predict better than the DLM depending on the relationship between returns and sales. The univariate model has been largely unexplored owing to the criticism that it cannot utilize previous sales. Another candidate, the mixed model, provides a chance to find a better predictive model by combining the ARIMA and DLM. The case study also shows that the DLM within the predictive model selection algorithm can provide good predictive performance when there is a relatively strong and static relationship between returns and sales.
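A minimal sketch of the selection idea on synthetic data: fit an ARIMA on past returns and a distributed-lag regression of returns on lagged sales, then keep whichever predicts a holdout window better (the paper's candidate set also includes a mixed model, omitted here).

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
T, lag = 120, 6
sales = 100 + 10*np.sin(np.arange(T)/6) + rng.normal(0, 3, T)
returns = 0.6*np.roll(sales, lag) + rng.normal(0, 4, T)   # returns lag sales
returns[:lag] = returns[lag]                              # pad the warm-up

h = 12                                                    # holdout horizon
train, test = returns[:-h], returns[-h:]

# Candidate 1: ARIMA on previous returns only
arima_fc = ARIMA(train, order=(2, 0, 1)).fit().forecast(h)

# Candidate 2: distributed-lag regression on previous sales
Xdlm = np.column_stack([np.roll(sales, k) for k in range(lag, lag + 3)])
dlm = LinearRegression().fit(Xdlm[lag+2:-h], returns[lag+2:-h])
dlm_fc = dlm.predict(Xdlm[-h:])

for name, fc in [("ARIMA", arima_fc), ("DLM", dlm_fc)]:
    rmse = float(np.sqrt(mean_squared_error(test, fc)))
    print(name, "holdout RMSE:", round(rmse, 2))          # keep the smaller one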

2015;():V02AT03A011. doi:10.1115/DETC2015-47059.

Different from typical mechanical products, tickets for movies and performing arts can be considered a special type of consumer product. Compared to the widely studied single-output problem of box-office receipts prediction in the movie industry, estimating the market share and price for performing arts remains challenging due to high-dimensional datasets with a limited number of samples. This paper describes a data-driven decision support system to help arts managers make strategic decisions, especially on session determination and price setting, considering price discrimination and prediction of the corresponding sales volume. Eight different attributes from the database, with multiple labels in each attribute, are used to accurately and comprehensively represent and classify the characteristics of performing arts in each genre. A web-based influence factor is also defined to quantify the popularity and publicity of performing arts. For this multi-input, multi-output problem, support vector regression (SVR) is employed, and its optimal parameters are determined using a genetic algorithm (GA) and particle swarm optimization (PSO), respectively. The price utility axiom, together with the law of demand, is applied to maximize receipts. Compared to artificial neural networks (ANN), the two optimization-based SVR methods perform much better in terms of effectiveness and reliability.

2015;():V02AT03A012. doi:10.1115/DETC2015-47383.

This paper presents a novel application of gamification for collecting high-level design descriptions of objects. High-level design descriptions entail not only superficial characteristics of an object, but also function, behavior, and requirement information about the object. Such information is difficult to obtain with traditional data mining techniques. For acquisition of high-level design information, we investigated a multiplayer game, "Who is the Pretender?", in an offline context. Through a user study, we demonstrate that the game offers a more fun, enjoyable, and engaging experience for providing descriptions of objects than simply asking people to list them. We also show that the game elicits more high-level, problem-oriented requirement descriptions and fewer low-level, solution-oriented structure descriptions, owing to the unique game mechanics that encourage players to describe objects at an abstract level. Finally, we present how crowdsourcing can be used to generate game content that facilitates the gameplay. Our work contributes towards acquiring high-level design knowledge that is essential for developing knowledge-based CAD systems.

Topics: Design
2015;():V02AT03A013. doi:10.1115/DETC2015-47541.

This paper presents a method to automatically extract function knowledge from natural language text. Our method uses syntactic rules to extract subject-verb-object triplets from parsed text. We then leverage the Functional Basis taxonomy, WordNet, and word2vec to classify the triplets as artifact-function-energy flow knowledge. For evaluation, we compare the function definitions associated with the 30 most frequent artifacts compiled in a human-constructed knowledge base, Oregon State University's Design Repository (DR), to those extracted using our method from 4953 Wikipedia pages classified under the category "Machines". Our method found function definitions for 66% of the test artifacts. For those artifacts found, our method identified 50% of the function definitions compiled in DR. In addition, 75% of the most frequent function definitions found by our method were also defined in DR. The results demonstrate the promising potential of our method for automatic extraction of function knowledge.
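The triplet-extraction step might look like the following spaCy sketch, using simplified dependency rules (the classification of triplets via the Functional Basis, WordNet, and word2vec is not shown; requires the en_core_web_sm model to be installed).

import spacy

nlp = spacy.load("en_core_web_sm")

def svo_triplets(text):
    # Collect (subject, verb, object) lemmas from each parsed sentence.
    triplets = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subs = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                objs = [c for c in tok.children if c.dep_ in ("dobj", "attr")]
                for s in subs:
                    for o in objs:
                        triplets.append((s.lemma_, tok.lemma_, o.lemma_))
    return triplets

print(svo_triplets("A pump moves water. The turbine converts kinetic energy."))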

41st Design Automation Conference: Decision Making in Engineering Design

2015;():V02AT03A014. doi:10.1115/DETC2015-46349.

Topology optimization is a systematic method of generating designs that maximize specific objectives. While it offers significant benefits over traditional shape optimization, topology optimization can be computationally demanding and laborious. Even a simple 3D compliance optimization can take several hours. Further, the optimized topology must typically be manually interpreted and translated into a CAD-friendly and manufacturing-friendly design.

This poses a predicament: given an initial design, should one optimize its topology? In this paper, we propose a simple metric for predicting the benefits of topology optimization. The metric is derived by exploiting the concept of topological sensitivity, and is computed via a finite element swapping method. The efficacy of the metric is illustrated through numerical examples.

2015;():V02AT03A015. doi:10.1115/DETC2015-46457.

George Box, a British mathematician and professor of statistics, wrote that "essentially, all models are wrong, but some are useful." In keeping with Box's observation, we suggest that in the model-based realization of complex systems, the decision maker must be able to work constructively with decision models of varying fidelity, completeness, and accuracy in order to make defendable decisions under uncertainty. The models, and the search algorithms that use them, will never be perfect, and the inherent inaccuracy and incompleteness of analysis models and solvers manifest as uncertainties in the projected outcomes. Therefore, a significant and desirable step in any model-based application is to find stable and robust solutions in which variation of the (input) variables and parameters within manageable tolerances has minimal effect on delivering favorable system performance. In this paper we present a method for visualizing and exploring the solution space using the compromise Decision Support Problem (cDSP) as a decision model to aid a decision maker in finding these stable and robust solutions.

The efficacy of the method is illustrated using the design of a shell and tube heat exchanger as an example. The method is generalizable to other decision constructs. Our emphasis is on the method rather than the results per se.

2015;():V02AT03A016. doi:10.1115/DETC2015-46493.

This paper details research into Axis Symmetric Architecture (ASA) Proton Exchange Membrane (PEM) fuel cells, which possess a non-prismatic cylindrical architecture as compared to traditional flat plate designs. The paper elucidates the advantages of ASA designs over flat plate designs, including improved fuel flow characteristics, reduced sealing area, reduced weight, and increased power densities (power/weight ratios). Finite element analysis shows improvements to flow characteristics on both the cathode and anode sides, along with a study of the flow channel cross-sections. The ASA design facilitates natural convective flow to promote improved reactant availability, and the prototypes created show the ease of manufacture and assembly. ASA designs, unlike traditional fuel cells, do not require bulky clamping plates and extensive fastening mechanisms, and hence lead to prototypes with reduced size, weight, and cost.

2015;():V02AT03A017. doi:10.1115/DETC2015-46495.

Research on decision making in engineering design has focused primarily on how to make decisions using normative models given certain information. However, there exists a research gap on how diverse information stimuli are combined by designers in decision making. In this paper, we address the following question: how do designers weigh different information stimuli to make decisions in engineering design contexts? The answer to this question can provide insights into the diverse cognitive models for decision making used by different individuals. We investigate the information gathering behavior of individuals using eye gaze data from a simulated engineering design task. The task involves optimizing an unknown function using an interface that provides two types of information stimuli: a graph and a list area, corresponding to the graphical stimulus and the numerical stimulus, respectively. The study was carried out using a set of student subjects. The results suggest that individuals weigh different forms of information stimuli differently. It is observed that the graphical information stimulus assists the participants in optimizing the function with higher accuracy. This study contributes to our understanding of how diverse information stimuli are utilized by design engineers to make decisions. The improved understanding of cognitive decision making models would also aid in improved design of decision support tools.

2015;():V02AT03A018. doi:10.1115/DETC2015-46547.

Complex systems often have long life cycles with requirements that are likely to change over time. Therefore, it is important to be able to adapt the system accordingly. This is often accomplished by infusing new technologies into the host system to update or improve overall system performance. However, technology infusion often disrupts the host system, whether as a system redesign or as a change in the system's inherent attributes. In this study, we analyzed the impact of technology infusion on system attributes, specifically complexity and modularity. Two different systems that were infused with new technologies were analyzed for changes in complexity and modularity.

2015;():V02AT03A019. doi:10.1115/DETC2015-46864.

As electronic waste (e-waste) becomes one of the fastest growing environmental concerns, remanufacturing is considered a promising solution. However, the profitability of take-back systems is hampered by several factors, including the lack of information on the quantity and timing of used products to be returned to a remanufacturing facility. Product design features, consumers' awareness of recycling opportunities, socio-demographic information, peer pressure, and the tendency of customers to keep used items in storage are among the factors contributing to uncertainty in the waste stream. Predicting customer choice decisions on returning used products, including both the time at which the customer will stop using the product and the end-of-use decision (e.g. storage, resell, throw away, or return to the waste stream), could help manufacturers better estimate the return trend. The objective of this paper is to develop an Agent Based Simulation (ABS) model integrated with the Discrete Choice Analysis (DCA) technique to predict consumer decisions on End-of-Use (EOU) products. The proposed simulation tool aims at investigating the impact of design features, interaction among individual consumers, and socio-demographic characteristics of end users on the number of returns. A numerical example of a cellphone take-back system is provided to show the application of the model.
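A minimal sketch of the ABS/DCA coupling: each simulated consumer draws an end-of-use option from a multinomial logit whose utilities depend on product age and a peer-pressure term. The utility coefficients below are illustrative assumptions, not estimates from the paper's data.

import numpy as np

rng = np.random.default_rng(5)
OPTIONS = ["store", "resell", "throw_away", "return"]
N, YEARS = 1000, 8
returned_frac = 0.0

def utilities(age, peer_return_frac):
    # Illustrative utilities for each end-of-use option.
    return np.array([
        1.0 - 0.10*age,                           # store: attractive early on
        0.5 - 0.20*age,                           # resell: value decays with age
        -0.5 + 0.10*age,                          # throw away
        -0.2 + 0.15*age + 1.5*peer_return_frac,   # return: peer effect
    ])

counts = {o: 0 for o in OPTIONS}
for year in range(1, YEARS + 1):
    for _ in range(N // YEARS):        # cohort stops using its product this year
        u = utilities(year, returned_frac)
        p = np.exp(u) / np.exp(u).sum()           # logit choice probabilities
        counts[rng.choice(OPTIONS, p=p)] += 1
    returned_frac = counts["return"] / max(1, sum(counts.values()))

print(counts)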

Topics: Simulation, Design, Modeling
2015;():V02AT03A020. doi:10.1115/DETC2015-46909.

Design is a sequential decision process that increases the detail of modeling and analysis while simultaneously decreasing the space of alternatives considered. In a decision theoretic framework, low-fidelity models help decision-makers identify regions of interest in the tradespace and cull others prior to constructing more computationally expensive models of higher fidelity. The method presented herein treats design as a sequence of finite decision epochs through a search space defined by the extent of the set of designs under consideration and the level of analytic fidelity to which each design is subjected. Previous work has shown that multi-fidelity modeling can aid in rapid optimization of the design space when high-fidelity models are coupled with low-fidelity models. This paper offers two contributions to the design community: (1) a model of design as a sequential decision process of refinement using progressively more accurate and expensive models, and (2) a connected approach for how conceptual models couple with detailed models. Formal definitions of the process are provided, and a simple one-dimensional example is presented to demonstrate the use of sequential multi-fidelity modeling in determining an optimal modeling selection policy.

Topics: Design
2015;():V02AT03A021. doi:10.1115/DETC2015-47519.

This paper presents a mathematical model investigating the physics behind pressure-compensating (PC) drip irrigation emitters. A network of PC emitters, commonly known as drip irrigation, is an efficient way to deliver water to crops while increasing yield. Irrigation can provide a means for farmers to grow more sensitive and profitable crops, and it can help billions of small-holder farmers lift themselves out of poverty. Making drip irrigation accessible and economically viable is important for farmers in the developing world, as most face the challenges of water scarcity, declining water tables, and lack of access to an electrical grid. One of the main reasons for the low adoption rate of drip irrigation in the developing world is the relatively high cost of the pumping power. It is possible to reduce this cost by reducing the required activation pressure of the emitters while maintaining the PC behavior. The work presented here provides a guide to how design changes in the emitter could allow for a reduction in the activation pressure from 1 bar to approximately 0.1 bar. This decrease in the activation pressure of each emitter in turn decreases the system driving pressure, which reduces the energy needed for pumping and makes a solar-powered system affordable for small-acreage farmers.

This paper develops a mathematical model to describe the PC behavior in a commercially available emitter. It is a 2D model that explains the relationship between the pressure, structural deformation, and fluid flow within a PC emitter. A parametric study has been performed to understand the effects of geometric and material parameters on the activation pressure and PC behavior. This knowledge will help guide the designs and prototypes of optimized emitters that achieve a lower activation pressure while retaining the PC behavior.

Topics: Pressure
2015;():V02AT03A022. doi:10.1115/DETC2015-47598.

When considering the redesign of an existing product, designers must consider possible engineering and marketing ramifications. Ideal changes capture a large portion of the market and have a low risk of change propagation, resulting in reduced cost to the manufacturer. Engineering change tools such as the Change Prediction Method and market research models such as Hierarchical Bayes Mixed Logit allow designers to estimate the cost of the redesign process and market shares of preference. Variability in the inputs of the Change Prediction Method (impact and likelihood values) results in a range of redesign cost values. Assumptions regarding model form and the randomness used in model fitting also lead to variations when estimating market performance. When the variability associated with these techniques is considered, focus should shift from a point estimate to a region estimate. This paper explores the region estimate produced for proposed redesigns when considering rework cost and market share of preference.
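For reference, a minimal sketch of the Change Prediction Method's combined-risk computation on a toy three-component product: direct likelihoods are propagated along depth-limited paths, parallel paths are combined as independent events, and risk is taken as combined likelihood times direct impact (a common simplification; all matrix values are illustrative).

import numpy as np

def combined_likelihood(l, src, dst, max_depth=3, visited=None):
    # Probability that a change in src propagates to dst via any path of up to
    # max_depth links; parallel paths are treated as independent events.
    if visited is None:
        visited = {src}
    if max_depth == 0:
        return 0.0
    p_not = 1.0 - l[dst, src]                       # the direct link
    for mid in range(l.shape[0]):
        if mid not in visited and mid != dst and l[mid, src] > 0:
            p_via = l[mid, src] * combined_likelihood(
                l, mid, dst, max_depth - 1, visited | {mid})
            p_not *= 1.0 - p_via
    return 1.0 - p_not

# l[j, i]: direct likelihood that a change in component i changes component j
l = np.array([[0.0, 0.3, 0.0],
              [0.5, 0.0, 0.2],
              [0.4, 0.6, 0.0]])
impact = np.array([[0.0, 0.4, 0.0],
                   [0.6, 0.0, 0.3],
                   [0.5, 0.7, 0.0]])

risk = np.zeros_like(l)
for i in range(l.shape[0]):
    for j in range(l.shape[0]):
        if i != j:
            risk[j, i] = combined_likelihood(l, i, j) * impact[j, i]
print(np.round(risk, 3))                            # combined change risk matrix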

Topics: Chaos, Fittings, Risk, Preferences
2015;():V02AT03A023. doi:10.1115/DETC2015-47667.

Complex system design requires managing competing objectives between many subsystems. Previous field research has demonstrated that subsystem designers may use biased information passing as a negotiation tactic and thereby reach sub-optimal system-level results due to local optimization behavior. One strategy to combat the focus on local optimization is an incentive structure that promotes system-level optimization. This paper presents a new subsystem incentive structure based on Multi-disciplinary Optimization (MDO) techniques for improving robustness of the design process to such biased information passing strategies. Results from simulations of different utility functions for a test suite of multi-objective problems quantify the system robustness to biased information passing strategies. Results show that incentivizing subsystems with this new weighted structure may decrease the error resulting from biased information passing.

Topics: Robustness

41st Design Automation Conference: Design and Optimization of Sustainable Energy Systems

2015;():V02AT03A024. doi:10.1115/DETC2015-46471.

Small-scale residential solar photovoltaic (PV) systems are becoming increasingly common. In some cases, governments or individual homeowners promote PV technology because of concerns about climate change and a desire to reduce global greenhouse gas emissions (GHGs). While solar PV directly emits no GHGs during use, the panels are associated with a significant amount of embedded GHG emissions, resulting from the manufacturing of the panels, for instance. A review of relevant literature reveals that the life cycle GHG emissions of solar PV panels are significantly influenced by contextual factors, such as the location of the panels during use. The purpose of this paper is to illustrate the many ways context could affect the GHG emissions associated with solar PV systems and to demonstrate — via calculations from a simple analytical model — the potential magnitude of the GHG emissions differences associated with using PV panels in different contexts.
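A small worked calculation makes the contextual point concrete: the same panel's life cycle emissions per kWh scale inversely with how much it generates over its life, so location alone can change the figure severalfold. All numbers below are illustrative placeholders, not the paper's data.

# Life cycle GHG intensity of the same panel in two contexts (illustrative).
EMBEDDED_KGCO2_PER_KW = 1800.0     # manufacturing + transport, per kW installed
LIFETIME_YEARS = 25
PERFORMANCE_RATIO = 0.75           # system losses applied to the ideal yield

for site, kwh_per_kw_yr in [("cloudy site", 900.0), ("sunny site", 1800.0)]:
    lifetime_kwh = kwh_per_kw_yr * PERFORMANCE_RATIO * LIFETIME_YEARS
    g_per_kwh = 1000.0 * EMBEDDED_KGCO2_PER_KW / lifetime_kwh
    print(f"{site}: {g_per_kwh:.0f} gCO2e/kWh")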

2015;():V02AT03A025. doi:10.1115/DETC2015-46553.

Ocean wave energy is a novel means of electricity generation that is projected to potentially serve as a primary energy source in coastal areas. However, for wave energy converters (WECs) to be applicable at a scale that allows grid implementation, these devices will need to be placed in close relative proximity to one another. From what has been learned in the U.S. wind industry, the placement of these devices will require optimization that considers both cost and power. However, current research regarding optimized WEC layouts considers only the power produced. This work explores the development of a genetic algorithm (GA) that creates optimized WEC layouts where the objective function considers both the economics involved in the array's development and the power generated. The WEC optimization algorithm enables the user either to constrain the number of WECs to be included in the array, or to allow the algorithm to determine this number. To calculate the objective function, potential arrays are evaluated using cost information from Sandia National Labs' Reference Model Project, and power development is calculated such that WEC interaction effects are considered. Results are presented for multiple test scenarios and compared to previous literature, and the implications of a priori system optimization for offshore renewables are discussed.

2015;():V02AT03A026. doi:10.1115/DETC2015-46785.

Solar-powered water desalination is one of the promising approaches for addressing fresh water scarcity in the Middle-East, North Africa, and areas of similar climate around the world. Humidification-dehumidification (HDH) is a scalable, commercially-viable technology that primarily utilizes thermal energy in order to extract fresh water from a high salinity water source. Because of inherent variability and uncertainty in solar energy availability due to daily and seasonal cycles, solar-powered HDH desalination systems may benefit from installing thermal energy storage (TES). TES can allow higher utilization of the installed system components and thus reduce the overall lifecycle cost of fresh water production. This work presents a configuration for a HDH desalination system augmented by TES. The system is optimized using Genetic Algorithms (GA) for minimum total annual cost (TAC) per unit volume of produced potable water while satisfying a preset potable water demand. The optimum results for the same location and cost function are compared with results from a previous system which does not have TES. The comparison shows a considerable reduction in potable water production cost when TES is utilized in addition to the benefit of smaller variation in water production across the day.

2015;():V02AT03A027. doi:10.1115/DETC2015-46849.

Solar power ramp events (SPREs) significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetration in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance the state of the art in SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring under clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
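A minimal sketch of the first stage, the swinging door segmentation (the dynamic-programming merge into significant ramps and the clear-sky filtering are omitted):

def swinging_door_segments(t, y, eps):
    # Split the series (t, y) into piecewise-linear segments such that every
    # point lies within +/- eps of its segment's line (classic SDA compression).
    segments, start = [], 0
    up, down = float("inf"), float("-inf")
    for i in range(1, len(y)):
        dt = t[i] - t[start]
        up = min(up, (y[i] + eps - y[start]) / dt)      # upper "door"
        down = max(down, (y[i] - eps - y[start]) / dt)  # lower "door"
        if down > up:                  # doors have swung shut: close the segment
            segments.append((start, i - 1))
            start = i - 1
            dt = t[i] - t[start]
            up = (y[i] + eps - y[start]) / dt
            down = (y[i] - eps - y[start]) / dt
    segments.append((start, len(y) - 1))
    return segments

t = list(range(12))
power = [0, 1, 2, 8, 15, 22, 23, 22, 14, 7, 2, 1]       # ramp up, then down
print(swinging_door_segments(t, power, eps=1.5))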

2015;():V02AT03A028. doi:10.1115/DETC2015-46876.

This paper proposes a mixed integer linear programming formulation of the unrestricted wind farm layout optimization problem. The formulation is achieved in part by changing the objective from power generation maximization to the maximization of the smallest penalized downstream distance between any pair of turbines, where downstream distance is defined as the distance between any pair of turbines with overlapping wake cones. The proposed formulation also linearizes other non-linear characteristics of the unrestricted layout optimization problem such as proximity constraints and wake-cone membership. The main advantage of the proposed approach is that an optimal solution to the proposed formulation is guaranteed to be globally optimal. This is in contrast to previous approaches with non-linear formulations that do not come with such guarantees.

2015;():V02AT03A029. doi:10.1115/DETC2015-47290.

In light of the growing strain on the energy grid and the increased awareness of the significant role buildings play within the energy ecosystem, the need for building operational strategies which minimize energy consumption has never been greater. One of the major hurdles impeding this realization primarily lies not in the lack of decision strategies, but in their inherent lack of adaptability. With most operational strategies partly dictated by a dynamic trio of social, economic and environmental factors which include occupant preference, energy price and weather conditions, it is important to realize and capitalize on this dynamism to open up new avenues for energy savings. This paper extends this idea by developing a dynamic optimization mechanism for Net-zero building clusters. A bi-level operation framework is presented to study the energy tradeoffs resulting from the adaptive measures adopted in response to hourly variations in energy price, energy consumption and indoor occupant comfort preferences. The experimental results verify the need for adaptive decision frameworks and demonstrate, through Pareto analysis, that the approach is capable of exploiting the energy saving opportunities made available through fluctuations in energy price and occupant comfort preferences.

2015;():V02AT03A030. doi:10.1115/DETC2015-47509.

Many remote communities rely on diesel generators as their primary power source, which is expensive and harmful to the environment. Renewable energy systems, based on photovoltaics and wind turbines, present a more sustainable and potentially cost-effective option for remote communities with abundant sun and wind. Designing and implementing community-owned and operated renewable power generation alternatives for critical infrastructure such as hospitals, water sanitation, and schools is one approach towards community autonomy and resiliency. However, configuring a cost-effective and reliable renewable power system is challenging due to the many design choices to be made, the large variations in the renewable power sources, and the location specific renewable power source availability. This paper presents an optimization-based approach to aid the configuration of a solar photovoltaic (PV), wind turbine generator and lead-acid battery storage hybrid power system. The approach, implemented in MATLAB, uses a detailed time-series system model to analyze system Loss of Load Probability (LOLP) and a lifetime system cost model to analyze system cost. These models are coupled to a genetic algorithm to perform a multi-objective optimization of system reliability and cost.

The method was applied to two case studies to demonstrate the approach: a windy location (Gibraltar, UK), and a predominantly sunny location (Riyadh, Saudi Arabia). Hourly solar and wind resource data was extracted for these locations from the National Oceanic and Atmospheric Administration for five-year data sets. The village load requirements were statistically generated from a mean daily load for the community estimated based on the population and basic electricity needs. The case studies demonstrate that the mix and size of technologies is dependent on local climatic conditions. In addition, the results show the tradeoff between system reliability and cost, allowing designers to make important decisions for the remote communities.
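The reliability side of such an evaluation can be sketched as an hourly battery simulation over synthetic PV, wind, and load series, with LOLP taken as the fraction of hours the load cannot be served; the paper's MATLAB implementation couples such a model with a lifetime cost model inside a genetic algorithm. The resource and load models below are crude stand-ins.

import numpy as np

rng = np.random.default_rng(6)
H = 8760
hour = np.arange(H) % 24
pv_per_kw = np.clip(np.sin((hour - 6) / 12 * np.pi), 0, None) * rng.uniform(0.6, 1.0, H)
wind_per_kw = np.clip(rng.weibull(2.0, H) * 0.4, 0, 1)
load = 50 + 20*np.sin((hour - 18) / 24 * 2*np.pi) + rng.normal(0, 3, H)   # kW

def lolp(pv_kw, wind_kw, batt_kwh, eff=0.85, soc_min=0.2):
    # Step the battery state of charge through the year; count unmet hours.
    soc, unmet = batt_kwh * 0.5, 0
    for h in range(H):
        net = pv_kw*pv_per_kw[h] + wind_kw*wind_per_kw[h] - load[h]
        if net >= 0:
            soc = min(batt_kwh, soc + net*eff)          # charge the surplus
        else:
            avail = max(0.0, soc - soc_min*batt_kwh)
            draw = min(-net, avail)
            soc -= draw
            if -net - draw > 1e-9:
                unmet += 1
    return unmet / H

for cfg in [(80, 40, 200), (120, 20, 300), (60, 80, 400)]:
    print("PV kW=%d, wind kW=%d, batt kWh=%d -> LOLP=%.3f" % (cfg + (lolp(*cfg),)))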

2015;():V02AT03A031. doi:10.1115/DETC2015-47535.

Recently, land has been exploited extensively for onshore wind farms, and turbines are frequently located in proximity to human dwellings, natural habitats, and infrastructure. This proximity has made land use constraints and noise generation and propagation matters of increasing concern for all stakeholders. Hence, wind farm layout optimization approaches should be able to consider and address these concerns. In this study, we perform a constrained multi-objective wind farm layout optimization considering energy and noise as objective functions, and considering land use constraints arising from landowner participation, environmental setbacks, and proximity to existing infrastructure. The optimization problem is solved with the NSGA-II algorithm, a multi-objective, continuous-variable genetic algorithm. A novel hybrid constraint handling tool that uses penalty functions together with Constraint Programming algorithms is introduced. This constraint handling tool performs a combination of local and global searches to find feasible solutions. After verifying the performance of the proposed constraint handling approach on a suite of test functions, it is used together with NSGA-II to optimize a set of wind farm layout optimization test cases with different numbers of turbines and under different levels of land availability (constraint severity). The optimization results illustrate the potential of the new constraint handling approach to outperform existing constraint handling approaches, leading to better solutions with fewer evaluations of the objective functions and constraints.

2015;():V02AT03A032. doi:10.1115/DETC2015-47651.

The aim of wind farm design is to maximize energy production and minimize cost. In particular, optimizing the placement of turbines in a wind farm is crucial to minimize the wake effects that impact energy production. Most work on wind farm layout optimization has focused on flat terrains and spatially uniform wind regimes. In complex terrains, however, the lack of accurate analytical wake models makes it difficult to evaluate the performance of layouts quickly and accurately as needed for optimization purposes. This paper proposes an algorithm that couples computational fluid dynamics (CFD) with mixed-integer programming (MIP) to optimize layouts in complex terrains. High-fidelity CFD simulations of wake propagation are utilized in the proposed algorithm to constantly improve the accuracy of the predicted wake effects from upstream turbines in complex terrains. By exploiting the deterministic nature of MIP layout solutions, the number of expensive CFD simulations can be reduced significantly. The proposed algorithm is demonstrated on the layout design of a wind farm domain in Carleton-sur-Mer, Quebec, Canada. Results show that the algorithm is capable of producing good wind farm layouts in complex terrains while minimizing the number of computationally expensive wake simulations.

2015;():V02AT03A033. doi:10.1115/DETC2015-48030.

Residential solar photovoltaic (PV) systems are becoming increasingly common around the world. Much of this growth is attributed to a decreasing cost of solar PV modules, reduction in the cost of installation and other “soft costs,” along with net-metering, financial incentives, and the growing societal interest in low-carbon energy. Yet this steep rise in distributed, uncontrolled solar PV capacity is being met with growing concern in maintaining electric grid stability when solar PV reaches higher penetration levels. Rapid reductions in solar PV output create an immediate and direct rise in the net system load. Demand response and storage technologies can offset these fluctuations in the net system load, but their potential has yet to be realized through wide-scale commercial dissemination. In the interim these fluctuations will continue to cause technical and economic challenges to the utility and the end-user. Late-afternoon peak demands are of particular concern as solar PV drops off and household demand rises as residents return home. Transient environmental factors such as clouding, rain, and dust storms pose additional uncertainties and challenges. This study analyzes such complex cases by simulating residential loads, rooftop solar PV output, and dust storm effects on solar PV output to examine transients in the net system load. The Phoenix, Arizona metropolitan area is used as a case study that experiences dust storms several times per year. A dust storm is simulated progressing over the Phoenix metro in various directions and intensities. Various solar PV penetration rates are also simulated to allow insight into resulting net loads as PV penetration grows in future years.

41st Design Automation Conference: Design for Market Systems

2015;():V02AT03A034. doi:10.1115/DETC2015-46491.

Car-sharing services promise "green" transportation systems. Two vehicle technologies offer marketable, sustainable sharing: autonomous vehicles eliminate customer requirements for car pick-up and return, and battery electric vehicles entail zero emissions. Designing an Autonomous Electric Vehicle (AEV) fleet must account for the relationships among fleet operations, charging station operations, electric powertrain performance, and consumer demand. This paper presents a system design optimization framework integrating four sub-system problems: fleet size and assignment schedule; number and locations of charging stations; vehicle powertrain requirements; and service fees. A case study of an autonomous fleet operating in Ann Arbor, Michigan, is used to examine AEV sharing system profitability and feasibility for a variety of market scenarios.

2015;():V02AT03A035. doi:10.1115/DETC2015-46657.

A two-sided market involves two different user groups whose interactions are enabled over a platform that provides a distinct set of values to either side. In such market systems, one side's participation depends on the value created by the presence of the other side on the platform. Two-sided market platforms must acquire enough users on both sides, in appropriate proportions, to generate value for either side of the user market. In this paper, we present a simplified, generic mathematical model for two-sided markets with an intervening platform that enables interaction between the two sets of users with distinct value propositions. The proposed model captures both same-side and cross-side effects (i.e., network externalities) and can capture behavioral asymmetry between the different sides of the two-sided market system. The cross-side effects are captured using the notion of affinity curves, while same-side effects are captured using four rate parameters. We demonstrate the methodology on canonical affinity curves and comment on the attainment of stability at the equilibrium points of two-sided market systems. Subsequently, a stochastic choice-based model of consumers and developers is described to simulate a two-sided market from the ground up, and the observed affinity curves are documented. Finally, we discuss how the two-sided market model links with and impacts the engineering characteristics of the platform.
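A minimal sketch of the cross-side dynamic: each side's next-period participation grows with the other side's current size through an affinity curve and shrinks with same-side churn. Parameter values are illustrative assumptions, chosen only to show convergence toward an equilibrium from different launch seeds.

import numpy as np

def affinity(n_other, k):
    # Concave affinity curve: value to one side of the other side's size.
    return 1.0 - np.exp(-k * n_other)

def simulate(n1, n2, steps=200, pool1=10_000, pool2=2_000,
             join1=0.3, join2=0.2, churn1=0.05, churn2=0.08):
    for _ in range(steps):
        a1 = affinity(n2, k=0.05)     # value consumers get from developers
        a2 = affinity(n1, k=0.002)    # value developers get from consumers
        n1 = n1 + join1*a1*(pool1 - n1) - churn1*n1
        n2 = n2 + join2*a2*(pool2 - n2) - churn2*n2
    return n1, n2

for seed in [(10, 1), (500, 50), (5000, 500)]:   # different launch seedings
    print(seed, "->", tuple(round(v) for v in simulate(*seed)))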

2015;():V02AT03A036. doi:10.1115/DETC2015-47632.

Market-based product design has typically used compensatory models that assume a simple additive part-worth rule. However, the marketing literature has demonstrated that consumers use various heuristics, called noncompensatory choices, to simplify their choice decisions. This study explores the suitability of compensatory modeling of these noncompensatory choices for product design search. It is motivated by limitations of the existing Bayesian-based noncompensatory model, such as its screening rule assumptions, probabilistic representation of noncompensatory choices, and discontinuous choice probability functions. Results from using compensatory models show that noncompensatory choices can lead to distinct segments with extreme part-worths. In addition, the product design search problem suggests that the compensatory model would be preferred due to its small design errors and inexpensive computational burden.

41st Design Automation Conference: Design for Resilience and Failure Recovery

2015;():V02AT03A037. doi:10.1115/DETC2015-46932.

Data-driven prognostics typically requires sufficient offline training data sets for accurate remaining useful life (RUL) prediction of engineering products. This paper investigates the performance of typical data-driven methodologies when the amount of training data is insufficient, with the purpose of better understanding these methodologies under that condition. The neural network, the similarity-based approach, and the copula-based sampling approach were investigated when only three run-to-failure training units were available. The example of lithium-ion (Li-ion) battery capacity degradation was employed for the demonstration.
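As one example of the compared methodologies, a similarity-based prediction with only three training units can be sketched as follows: the test unit's partial capacity trace is matched by sliding RMSE against each run-to-failure trace, and the implied remaining lives are inverse-error weighted. Degradation data here are synthetic.

import numpy as np

rng = np.random.default_rng(7)

def capacity_trace(life, n=None):
    # Synthetic capacity fade: power-law degradation plus measurement noise.
    n = n or life
    t = np.arange(n)
    return 1.0 - 0.2 * (t / life) ** 1.5 + rng.normal(0, 0.003, n)

train_lives = [500, 620, 560]                   # three run-to-failure units
train = [capacity_trace(L) for L in train_lives]

true_life = 580
test = capacity_trace(true_life, n=300)         # test unit observed to cycle 300

def best_match(ref, ref_life, obs):
    # Slide obs along ref; return (min RMSE, implied remaining life).
    best = (np.inf, 0.0)
    for s in range(len(ref) - len(obs) + 1):
        rmse = np.sqrt(np.mean((ref[s:s+len(obs)] - obs) ** 2))
        if rmse < best[0]:
            best = (rmse, ref_life - (s + len(obs)))
    return best

errs, ruls = zip(*[best_match(r, L, test) for r, L in zip(train, train_lives)])
w = 1.0 / np.array(errs)
rul_hat = float(np.sum(w * np.array(ruls)) / w.sum())
print("predicted RUL:", round(rul_hat), " true RUL:", true_life - 300)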

2015;():V02AT03A038. doi:10.1115/DETC2015-46935.

This paper surveys recent research on battery diagnostics and prognostics, especially for lithium-ion (Li-ion) batteries. Battery diagnostics focuses on battery models and diagnosis algorithms for battery state of charge (SOC) and state of health (SOH) estimation. Battery prognostics covers data-driven prognosis algorithms for predicting the remaining useful life (RUL) of battery SOC and SOH. Readers will learn not only the basics but also recent research developments in battery diagnostics and prognostics.

2015;():V02AT03A039. doi:10.1115/DETC2015-46964.

Lithium-ion (Li-ion) rechargeable batteries are used as one of the major energy storage components for implantable medical devices. Reliability of Li-ion batteries used in these devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, patients and physicians. To ensure a Li-ion battery operates reliably, it is important to develop health monitoring techniques that accurately estimate the capacity of the battery throughout its life-time. This paper presents a sparse Bayesian learning method that utilizes the charge voltage and current measurements to estimate the capacity of a Li-ion battery used in an implantable medical device. Relevance Vector Machine (RVM) is employed as a probabilistic kernel regression method to learn the complex dependency of the battery capacity on the characteristic features that are extracted from the charge voltage and current measurements. Owing to the sparsity property of RVM, the proposed method generates a reduced-scale regression model that consumes only a small fraction of the CPU time required by a full-scale model, which makes online capacity estimation computationally efficient. 10 years’ continuous cycling data and post-explant cycling data obtained from Li-ion prismatic cells are used to verify the performance of the proposed method.

2015;():V02AT03A040. doi:10.1115/DETC2015-46999.

The concept of resilience has been explored in diverse disciplines. However, only a few studies focus on how to quantitatively measure engineering resilience and how to allocate resilience in engineering system design. This paper is dedicated to bridging the gap between quantitative and qualitative assessments of engineering resilience in the design of complex engineered systems, thereby optimally allocating resilience to the subsystem and component levels in industrial applications. A conceptual framework is first proposed for modeling engineering resilience, and then a Bayesian network is employed as a quantitative tool for the assessment and analysis of engineering resilience for complex systems. An industrial case study, a supply chain system, is employed to demonstrate the proposed approach. The proposed resilience quantification and allocation approach using Bayesian networks would empower system designers to have a better grasp of the weaknesses and strengths of their own systems against disruptions induced by adverse failure events.

2015;():V02AT03A041. doi:10.1115/DETC2015-47009.

Effective health diagnostics provides benefits such as improved safety, improved reliability, and reduced costs for the operation and maintenance of complex engineered systems. This paper presents a multi-attribute classification fusion approach which leverages the strengths of multiple member classifiers to form a robust classification model for structural health diagnostics. The developed approach conducts health diagnostics in three primary stages: (i) fusion formulation using a k-fold cross-validation model; (ii) diagnostics with multiple multi-attribute classifiers as member algorithms; and (iii) classification fusion through weighted majority voting with a dominance system. State-of-the-art classification techniques from three broad categories (i.e., supervised learning, unsupervised learning, and statistical inference) are employed as the member algorithms. The developed classification fusion approach is demonstrated on the 2008 PHM challenge problem, where it outperforms any stand-alone member algorithm in both diagnostic accuracy and robustness.
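A minimal sketch of stages (i) and (iii) on toy data: member classifiers from different families are weighted by k-fold cross-validation accuracy and combined by weighted majority voting (the paper's member set and dominance weighting are richer than this skeleton).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

members = [KNeighborsClassifier(), SVC(), GaussianNB()]
weights = [cross_val_score(m, Xtr, ytr, cv=5).mean() for m in members]  # stage (i)

preds = []
for m in members:
    m.fit(Xtr, ytr)
    preds.append(m.predict(Xte))

def weighted_vote(preds, weights, n_classes):
    # Stage (iii): each member casts a vote scaled by its k-fold accuracy.
    votes = np.zeros((len(preds[0]), n_classes))
    for p, w in zip(preds, weights):
        votes[np.arange(len(p)), p] += w
    return votes.argmax(axis=1)

fused = weighted_vote(preds, weights, n_classes=3)
for name, p in zip(["kNN", "SVM", "NB", "fusion"], preds + [fused]):
    print(name, "accuracy:", round(float((p == yte).mean()), 3))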

Commentary by Dr. Valentin Fuster

41st Design Automation Conference: Design for the Developing World

2015;():V02AT03A042. doi:10.1115/DETC2015-46513.

As the availability and affordability of consumer products continue to increase around the world, consumers, especially those in developing countries living on less than $10/day, will become more discerning in their tastes and preferences. Design teams have been operating in design-for-the-developing-world contexts for many years, and more are moving into the arena on a regular basis. Many designers, however, lack cultural knowledge of the customers they are designing for. Cultural ignorance can lead to misinterpretation of customer needs, which in turn leads to products that fail to satisfy those needs and results in disappointed customers, low sales figures, and a frustrated design team. The Customer Needs Cultural Risk Indicator (CNCRI) method introduced in this paper allows design teams to rapidly analyze customer needs for “Risk Indicators” arising from cultural differences between the customers and the design team. By understanding early in the design process where a lack of cultural knowledge may pose a risk to the design, the design team can make informed decisions on how to satisfy customer needs effectively.

Commentary by Dr. Valentin Fuster
2015;():V02AT03A043. doi:10.1115/DETC2015-46521.

This paper presents the merits of village-scale photovoltaic (PV) powered electrodialysis reversal (EDR) systems for rural India, along with the design and analysis of such a system built by the authors, with testing planned for completion in March 2015 in Alamogordo, New Mexico. The requirements for the system include a daily water output of 6–15 m3/day (enough potable water for an average village of 2,000–5,000 people), removal of dissolved salts in addition to biological contaminants, a photovoltaic power source, a recovery ratio greater than 85%, and an appropriate maintenance and service scheme. At present, most village-scale desalination systems use reverse osmosis (RO); however, the managing NGOs have found these systems to be cost prohibitive in off-grid locations. EDR has the potential to be more cost effective than currently installed village-scale RO systems in off-grid locations due to the lower specific energy consumption of EDR versus RO at high recovery ratios, which leads to lower power system cost and overall capital expense.

The system developed in this study is designed to validate whether the system requirements can be met in terms of recovery ratio, product water quality, specific energy consumption, and expected capital cost. It is designed to desalinate 3,600 ppm brackish groundwater to 350 ppm at a rate of 1.6 m3/hour and a recovery ratio of 92%. This paper reviews the scope of the market for village-scale desalination and existing groundwater salinity levels, and presents the design methodology and resulting system parameters for a village-scale PV-EDR field trial.
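
A quick back-of-the-envelope check of the stated operating point, using only the figures quoted above; the 10 h/day solar operating window is an assumption added for illustration.

```python
# Feed and brine rates implied by the stated EDR operating point.
product = 1.6                # m3/h of product water (stated)
recovery = 0.92              # recovery ratio (stated)
feed = product / recovery    # ~1.74 m3/h of brackish feed
brine = feed - product       # ~0.14 m3/h of concentrate
print(f"feed ~ {feed:.2f} m3/h, brine ~ {brine:.2f} m3/h")
# Assuming ~10 h/day of solar-driven operation (an assumption, not a
# figure from the paper), daily output is ~16 m3/day, at the top of the
# stated 6-15 m3/day village requirement.
print(f"daily product at 10 h/day ~ {product * 10:.0f} m3/day")
```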

Topics: Design
Commentary by Dr. Valentin Fuster
2015;():V02AT03A044. doi:10.1115/DETC2015-46806.

Approximately 40% of the world’s population lives in energy poverty, lacking basic clean energy to prepare their food, heat water for washing, and provide light in their homes. Access to improved energy services can help to alleviate this poverty and result in significant improvements to health and livelihoods, yet past strategies for meeting the needs of this large and diverse population have often been top-down and focused on a single intervention or solution, leading to limited success. Using a systems-based approach to examine residential thermal energy needs, this paper explores five intervention strategies to provide energy services for a remote off-grid village in Mali. The five intervention strategies are (1) general improved biomass cookstoves, (2) advanced biomass cookstoves, (3) communal biomass cookstoves, (4) LPG cookstoves, and (5) solar water heaters. Using a probabilistic multi-objective model that includes technical, environmental, economic, and social objectives, the potential net improvements, critical factors, and sensitivities are investigated. The results show that the factors with the most impact on the outcome of an intervention include the rate of user adoption, the value of time, and biomass harvest renewability; in contrast, parameters such as cookstove emission factors have less impact on the outcome. This suggests that the focus of village energy research and development should shift to the design of technologies that have high user adoption rates. That is, the results of this study support the hypothesis that the most effective village energy strategy is one that reinforces the natural user-driven process to move toward efficient and convenient energy services.
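
The snippet below sketches the kind of probabilistic sensitivity analysis described, with invented input distributions and an invented net-benefit expression; it merely illustrates how adoption-rate uncertainty can dominate emission-factor uncertainty in such a model.

```python
# Invented Monte Carlo sketch of a probabilistic multi-factor analysis:
# sample uncertain inputs, evaluate a hypothetical net benefit, and rank
# inputs by their correlation with the output.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
adoption     = rng.beta(2, 2, n)              # fraction of households adopting
time_value   = rng.uniform(0.1, 1.0, n)       # value of time saved ($/h)
renewability = rng.uniform(0.2, 1.0, n)       # renewable fraction of harvest
emission     = rng.normal(1.0, 0.1, n)        # relative emission factor

# Hypothetical net benefit: savings scale with adoption; emissions and
# unsustainable harvest subtract from the benefit
benefit = adoption * (2.0 * time_value + 1.5 * renewability) - 0.2 * emission

for name, x in [("adoption", adoption), ("time value", time_value),
                ("renewability", renewability), ("emission factor", emission)]:
    print(f"{name:16s} corr = {np.corrcoef(x, benefit)[0, 1]:+.2f}")
```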

Commentary by Dr. Valentin Fuster
2015;():V02AT03A045. doi:10.1115/DETC2015-47270.

Over the past decade, a large amount of research has been dedicated to improving the efficiency and reducing the emissions of biomass cookstoves. The trade-off of placing such an emphasis on these two objectives is that improved cookstoves are often not as functional or desirable to the end user as their traditional cookstoves. Thus, users often abandon their new improved cookstoves, and sustained use is not achieved. For improved cookstoves to be more impactful, a different design approach is needed: improved cookstoves must be designed for usability, even at the expense of higher efficiencies or lower emissions. This paper explores the benefits of this alternative approach, demonstrated in the design of a replacement biomass cookstove for residents of the Tambogrande region of Peru. The heavy use of biomass cookstoves in this small collection of villages has resulted in many health and environmental problems for the residents. Recent field studies revealed that residents were pleased with the functionality of their traditional channel stove, yet desired a stove that cooks faster, consumes less fuel, and emits less smoke. The resulting design includes a set of adaptable, inexpensive pot skirts that can be integrated with the current channel stove. These pot skirts accommodate varying sizes and numbers of pots and allow traditional fuels to be used. Despite the usability-focused design approach, the pot skirts still improved the technical performance of the cookstove, improving thermal efficiency by 25.8%, decreasing time to boil by 26.0%, and decreasing fuel consumption by 24.7%. These results demonstrate that a usability-focused design can still yield significant performance improvements while achieving a high level of user functionality.

Topics: Design
Commentary by Dr. Valentin Fuster
2015;():V02AT03A046. doi:10.1115/DETC2015-47613.

Desalination of high-salinity water is an effective way of improving the aesthetic quality of drinking water and has been demonstrated to be a characteristic valued by consumers. Across India, 60% of the groundwater, the primary water source for millions, is brackish, with total dissolved solids (TDS) ranging from 500 parts per million (ppm) to 3,000 ppm. The government does not provide sufficient desalination treatment before the water reaches the consumer’s tap, so consumers have turned to in-home desalination. However, current products are either expensive or have low recovery, the ratio of product water output to untreated feed water (∼30%), wasting water resources. Electrodialysis (ED) is a promising technology that desalinates water while maintaining higher recovery (up to 95%) than existing consumer reverse osmosis (RO) products. This paper first explores the in-home desalination market to determine critical design requirements for an in-home ED system. A model was then used to evaluate and optimize the performance of an ED stack at this scale and designated salinity range. Additionally, testing was conducted to validate the model and demonstrate feasibility. Finally, cost estimates of the proposed in-home ED system and a product design concept are presented. The results of this work identified a system design that provides consumers with up to 80% recovery of feed water, with cost and size competitive with currently available in-home RO products.
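
The recovery figures above translate directly into water wasted per litre produced; a minimal check:

```python
# Litres wasted per litre of product at a given recovery ratio:
# feed = product / recovery, waste = feed - product.
for label, recovery in [("typical in-home RO", 0.30), ("proposed ED", 0.80)]:
    waste = 1.0 / recovery - 1.0
    print(f"{label}: {waste:.2f} L wasted per L of product")
# ~2.33 L/L at 30% recovery versus 0.25 L/L at 80% recovery
```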

Topics: Cities, Water
Commentary by Dr. Valentin Fuster
2015;():V02AT03A047. doi:10.1115/DETC2015-48033.

This study evaluates options for biomass pellet formulations and business models to create a sustainable solution for cooking energy in Southern Africa. Various agricultural and agro-processing wastes are investigated to meet industry standards on biomass pellet quality. These fuels are obtained from farms and facilities across a geographic area, which affects the end cost of the pellet through transportation costs and the cost of the biomass. The technical performance of the pellet and its cost are first contrasted and then optimized in unison to develop sustainable energy options that can provide year-round clean energy for household cooking and heating needs. A market was analyzed using wheat, sugarcane and maize crops as components for the biomass pellet fuel source in the Zululand district of South Africa. Using a target moisture content (MCtarget) of 8–10%, a target lower heating value (LHVtarget) greater than 16.0 MJ/kg, and a target ash content (Ashtarget) less than 3%, the pellet metrics were optimized. The cost of the crops for the pellets depended on the amount of each biomass making up the composition of the pellet. The production demand was then analyzed based on the most current consumer cooking fuel demand within South Africa. The production model was evaluated for three factory sizes: small (1 hr/ton), medium (3 hr/ton), and large (5 hr/ton). Primary shipping cost is based on factory location and has a major impact on the cost of the pellet to the consumer as well as on the availability of the supply. Factory location was analyzed by varying the distance from the biomass crops to the factory. Several business models are evaluated to show which results in a high-quality pellet at low cost to the consumer. The study suggests the pellet be composed of 44.62% sugarcane, 47.49% maize, and 0.82% wheat, resulting in an LHV of 16.00 MJ/kg, an MC of 8% (w/w), and an ash content of 3% (w/w). The optimal cost of the biomass fuel pellet to the consumer ranged from 172.77 US$/ton to 185.03 US$/ton.
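
A blend optimization of this kind can be sketched as a linear program over the mass fractions. In the toy below, the per-crop heating values, ash and moisture contents, and costs are hypothetical placeholders; only the target constraints are taken from the study.

```python
# Sketch of the blend optimization as a linear program: choose mass
# fractions x = (sugarcane, maize, wheat) minimizing cost subject to the
# LHV/ash/moisture targets quoted above. Per-crop properties and costs
# are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.optimize import linprog

cost = np.array([60.0, 80.0, 120.0])    # US$/ton per crop (assumed)
lhv  = np.array([15.5, 16.5, 17.0])     # MJ/kg (assumed)
ash  = np.array([3.5, 2.5, 2.0])        # % ash (assumed)
mc   = np.array([9.0, 8.0, 10.0])       # % moisture (assumed)

# Inequalities in A_ub @ x <= b_ub form:
A_ub = np.vstack([-lhv,        # LHV >= 16.0  ->  -lhv @ x <= -16.0
                  ash,         # ash <= 3.0
                  mc,          # MC  <= 10.0
                  -mc])        # MC  >= 8.0   ->  -mc @ x <= -8.0
b_ub = np.array([-16.0, 3.0, 10.0, -8.0])
A_eq, b_eq = np.ones((1, 3)), np.array([1.0])   # fractions sum to 1

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3)
if res.success:
    print("fractions:", np.round(res.x, 3),
          "cost:", round(res.fun, 2), "US$/ton")
```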

Topics: Biomass
Commentary by Dr. Valentin Fuster

41st Design Automation Conference: Design of Complex Systems

2015;():V02AT03A048. doi:10.1115/DETC2015-46087.

The behavior of large networked systems with underlying complex nonlinear dynamics is hard to predict, and the problem becomes even harder as the number of states increases. Quantifying uncertainty in such systems by conventional methods requires high computational time, and the accuracy obtained in estimating the state variables can be low. This paper presents a novel computational Uncertainty Quantification (UQ) method for complex networked systems. Our approach is to represent the complex system as a network (graph) whose nodes represent the dynamical units and whose links stand for the interactions between them. First, we apply a Non-negative Matrix Factorization (NMF) based decomposition method to partition the domain of the dynamical system into clusters, such that the inter-cluster interaction is minimized and the intra-cluster interaction is maximized. The decomposition method takes the dynamics of individual nodes into account when performing the system decomposition. Initial validation has been performed on two well-known dynamical systems. The validation results show that the uncertainty propagation errors, quantified by RMS errors, obtained through our algorithm are competitive with, or often better than, those of existing methods.
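
A minimal sketch of the NMF-based partitioning step, on a synthetic two-block network rather than the paper's benchmark systems:

```python
# Sketch of NMF-based decomposition of a network for cluster-wise UQ:
# factor the non-negative adjacency matrix and assign each node to the
# cluster with the largest factor loading. The graph and cluster count
# are illustrative only.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Two weakly coupled dense blocks of 10 nodes each (planted clusters)
A = np.zeros((20, 20))
A[:10, :10] = rng.uniform(0.5, 1.0, (10, 10))
A[10:, 10:] = rng.uniform(0.5, 1.0, (10, 10))
A[:10, 10:] = rng.uniform(0.0, 0.05, (10, 10))   # weak inter-cluster links
A = (A + A.T) / 2                                # symmetric weights

W = NMF(n_components=2, init="nndsvd", max_iter=500).fit_transform(A)
labels = W.argmax(axis=1)         # node -> cluster with max loading
print("cluster labels:", labels)  # expect the two 10-node blocks recovered
```

With the clusters in hand, uncertainty can be propagated cluster by cluster, which is the source of the computational savings the abstract describes.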

Commentary by Dr. Valentin Fuster
2015;():V02AT03A049. doi:10.1115/DETC2015-46107.

Identifying high-performance, system-level microgrid designs is a significant challenge due to the overwhelming array of possible configurations. Uncertainty relating to loads, utility outages, renewable generation, and fossil generator reliability further complicates this design problem. In this paper, the performance of a candidate microgrid design is assessed by running a discrete event simulation that includes extended, unplanned utility outages during which microgrid performance statistics are computed. Uncertainty is addressed by simulating long operating times and computing average performance over many stochastic outage scenarios. Classifier-guided sampling, a Bayesian classifier-based optimization algorithm for computationally expensive design problems, is used to search for and identify configurations that reduce the average load not served while not exceeding a predetermined microgrid construction cost. The city of Hoboken, NJ, which sustained a severe outage following Hurricane Sandy in October 2012, is used as an example of a location in which a well-designed microgrid could be of great benefit during an extended, unplanned utility outage. The optimization results illuminate design trends and provide insights into the traits of high-performance configurations.
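
The sketch below caricatures the classifier-guided sampling loop: a toy outage simulation scores configurations, a Bayesian (naive Bayes) classifier is trained on good/bad labels, and the classifier screens a large candidate pool before the expensive simulation is run. All cost and dispatch models are invented stand-ins.

```python
# Caricature of classifier-guided sampling for microgrid design: a toy
# Monte Carlo outage simulation scores configurations, and a naive
# Bayes classifier screens candidates before the costly simulation.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

def load_not_served(x, n_scenarios=200):
    """Mean unserved load (kW) for config x = (PV kW, storage kWh, gen kW)."""
    pv, batt, gen = x
    demand = rng.normal(100, 20, n_scenarios).clip(min=0)
    supply = pv * rng.uniform(0.1, 0.6, n_scenarios) + gen + 0.1 * batt
    return np.maximum(demand - supply, 0).mean()

def cost(x):
    return 1.5 * x[0] + 0.4 * x[1] + 0.8 * x[2]    # toy capital cost

# Label an initial random sample "good" if it is cheap AND serves load well
X = rng.uniform([0, 0, 0], [100, 300, 100], size=(300, 3))
good = np.array([cost(x) <= 200 and load_not_served(x) <= 30 for x in X])

# Train the classifier and use it to screen a large candidate pool
clf = GaussianNB().fit(X, good)
pool = rng.uniform([0, 0, 0], [100, 300, 100], size=(2000, 3))
shortlist = pool[clf.predict(pool)]
if len(shortlist):
    best = min(shortlist, key=load_not_served)
    print(f"screened {len(shortlist)} of {len(pool)};"
          f" best config (PV, batt, gen) = {np.round(best, 1)}")
```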

Topics: Design , Microgrids
Commentary by Dr. Valentin Fuster
2015;():V02AT03A050. doi:10.1115/DETC2015-46270.

Risk analysis in engineering design is of paramount importance when developing complex systems or upgrading existing systems. In many complex systems, new generations of systems are expected to have decreased risk and increased reliability when compared with previous designs. For instance, within the American civilian nuclear power industry, the Nuclear Regulatory Commission (NRC) has progressively increased requirements for reliability and driven down the chance of radiological release beyond the plant site boundary. However, many ongoing complex system design efforts analyze risk only after early major architecture decisions have been made. One promising method of bringing risk considerations earlier into the conceptual stages of the complex system design process is functional failure modeling. Function Failure Identification and Propagation (FFIP) and related methods began the push toward assessing risk using the functional modeling taxonomy. This paper advances the Dedicated Failure Flow Arrestor Function (DFFAF) method, which incorporates dedicated Arrestor Functions (AFs) whose purpose is to stop failure flows from propagating along uncoupled failure flow pathways, as defined by the Uncoupled Failure Flow State Reasoner (UFFSR). In doing so, DFFAF adds a new tool to the functional failure modeling toolbox for complex system engineers. This paper introduces DFFAF and provides an illustrative simplified civilian Pressurized Water Reactor (PWR) nuclear power plant case study.
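
A toy illustration of the arrestor-function idea: failure flows propagate through a directed function graph and stop wherever a dedicated arrestor function sits on the path. The miniature graph is invented, not the paper's PWR model.

```python
# Failure-flow propagation over a directed function graph, with and
# without a dedicated arrestor function (AF) on the path.
from collections import deque

graph = {                                   # function -> downstream functions
    "pump_coolant":    ["AF_isolate_leak"],
    "AF_isolate_leak": ["exchange_heat"],
    "exchange_heat":   ["generate_steam"],
    "generate_steam":  [],
}

def propagate(source, arrestors):
    """Set of functions reached by a failure flow starting at source."""
    reached, frontier = {source}, deque([source])
    while frontier:
        f = frontier.popleft()
        if f in arrestors:                  # the AF absorbs the flow
            continue
        for g in graph[f]:
            if g not in reached:
                reached.add(g)
                frontier.append(g)
    return reached

print("without AF:", propagate("pump_coolant", set()))
print("with AF:   ", propagate("pump_coolant", {"AF_isolate_leak"}))
```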

Commentary by Dr. Valentin Fuster
2015;():V02AT03A051. doi:10.1115/DETC2015-46400.

Current methods of functional failure risk analysis do not facilitate explicit modeling of systems equipped with Prognostics and Health Management (PHM) hardware. As PHM systems continue to grow in application and popularity within major complex-systems industries (e.g., aerospace, automotive, civilian nuclear power plants), implementing PHM modeling within functional failure modeling methodologies will become useful both for the early phases of complex system design and for the analysis of existing complex systems. Functional failure modeling methods have been developed in recent years to assess risk in the early phases of complex system design; however, they do not yet include an explicit way to analyze the effects of PHM systems on system failure probabilities. It is common practice within the systems health monitoring industry to design the PHM subsystems during the later stages of system design, typically after most major system architecture decisions have been made. This practice lends itself to omitting PHM effects on the system from consideration during the early stages of design. This paper proposes a new method for analyzing PHM subsystems’ contribution to risk reduction in the early stages of complex system design. The Prognostic Systems Variable Configuration Comparison (PSVCC) eight-step method developed here expands upon existing methods of functional failure modeling by explicitly representing PHM subsystems. A generic pressurized water nuclear reactor primary coolant loop system is presented as a case study to illustrate the proposed method. The success of the proposed method promises more accurate modeling of complex systems equipped with PHM subsystems in the early phases of design.
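
A minimal sketch of the configuration-comparison idea: the system fails when a fault occurs and the PHM subsystem fails to detect and mitigate it in time. All probabilities are illustrative assumptions, and the arithmetic is far simpler than the paper's eight-step method.

```python
# Toy comparison of system configurations with and without PHM: the
# system fails when a fault occurs and is not detected and mitigated.
p_fault = 0.05                       # chance of a coolant-loop fault (assumed)
configs = {
    "no PHM":        {"p_detect": 0.00, "p_mitigate": 0.0},
    "basic PHM":     {"p_detect": 0.70, "p_mitigate": 0.8},
    "redundant PHM": {"p_detect": 0.95, "p_mitigate": 0.9},
}
for name, c in configs.items():
    p_failure = p_fault * (1 - c["p_detect"] * c["p_mitigate"])
    print(f"{name:13s} P(system failure) = {p_failure:.4f}")
```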

Topics: Design , Modeling , Failure
Commentary by Dr. Valentin Fuster
2015;():V02AT03A052. doi:10.1115/DETC2015-47670.

This research concerns the packing problem frequently encountered in engineering design, where the volume and weight of a number of structural components, such as valves and plumbing lines, need to be minimized. Since the constraints in real applications are usually complex, formulating a computationally tractable optimization becomes challenging. In this research, we propose a novel multiobjective simulated annealing (MOSA) approach to the design optimization, i.e., optimizing the placement of valves under prescribed constraints to minimize the occupied volume and the estimated plumbing line length. The objectives and constraints are described by analytical expressions. Our case study indicates that the new MOSA algorithm performs comparatively well on 3D packing with strong constraints and that the design process can indeed be automated. The outcome of this research may benefit both existing manufacturing practice and future additive manufacturing.
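
The loop below sketches a MOSA-style search for a toy valve-placement problem, with Pareto-dominance acceptance, a Boltzmann rule for dominated moves, and a minimum-separation constraint handled by rejection; the objective forms and all parameters are invented.

```python
# MOSA-style loop for a toy 3D valve-placement problem: minimize padded
# bounding-box volume and chain plumbing length, with a minimum
# valve-separation constraint handled by rejecting infeasible moves.
import numpy as np

rng = np.random.default_rng(3)
N_VALVES, MIN_SEP = 5, 0.5

def objectives(P):
    span = P.max(axis=0) - P.min(axis=0)
    volume = np.prod(span + 0.2)                               # padded box
    length = np.linalg.norm(np.diff(P, axis=0), axis=1).sum()  # chain pipes
    return np.array([volume, length])

def feasible(P):
    d = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    return (d[np.triu_indices(len(P), 1)] >= MIN_SEP).all()

P = rng.uniform(0, 4, (N_VALVES, 3))
while not feasible(P):                      # feasible starting layout
    P = rng.uniform(0, 4, (N_VALVES, 3))
f, T = objectives(P), 1.0
archive = [f]                               # non-dominated designs found
for _ in range(5000):
    Q = P + rng.normal(0, 0.1, P.shape)     # perturb the layout
    if not feasible(Q):
        continue
    g = objectives(Q)
    if (g <= f).all() or rng.random() < np.exp(-(g - f).max() / T):
        P, f = Q, g                         # dominating or Boltzmann-accepted
    if not any((a <= f).all() for a in archive):
        archive = [a for a in archive if not (f <= a).all()] + [f]
    T *= 0.999                              # cool the temperature
print(f"non-dominated points found: {len(archive)}")
```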

Commentary by Dr. Valentin Fuster
