
19th International Conference on Design Theory and Methodology

2007;():3-15. doi:10.1115/DETC2007-34848.

Designing and developing a product requires considering different aspects of the product through coordination, negotiation, and discussion in a collaborative environment. Each participant plays a role as a stakeholder, generating information from his or her own viewpoints or perspectives, which influence the design through his or her design decisions. Collaboration is essential in a design process to avoid decision-making mistakes, to shorten design time, and to improve design quality. The specific foci on design collaboration in this paper are: (i) modeling the collaborative design process; and (ii) implementing a design system to support real-time, synchronized group design activities.

2007;():17-24. doi:10.1115/DETC2007-35360.

Identifying and transferring the secrets of engineering design drives innovation within a successful company. Yet in design courses, engineering students rarely use a lab book, research notebook, or design journal to document anything. During the summer of 2006, a University of Maryland RISE undergraduate research team piloted a study of 12 students’ design journal entries during a Mechanical Engineering senior capstone design course. Existing note-coding schemes from engineering education researchers were adapted and tested with the goal of inferring cognitive activity. Journal entries revealed individual characteristics of students as learners, including uneven time commitment across design stages, preference for sketching, documentation clarity, and individual buy-in to design tools presented in class. Design journal research is a promising path to understanding how students learn and practice design.

2007;():25-34. doi:10.1115/DETC2007-35549.

Recent cyberinfrastructure initiatives seek to create ubiquitous, comprehensive, interactive, and functionally complete digital environments that consist of people, data, information, tools, and instruments for research communities. With product dissection as our unifying theme, we are forging a cyberinfrastructure to support undergraduate design engineering education through CIBER-U: Cyber-Infrastructure-Based Engineering Repositories for Undergraduates. CIBER-U pairs two of the nation’s leading design repository developers with several active users and their students to realize a high-impact application of cyberinfrastructure in engineering undergraduate curricula involving freshmen through seniors. Specifically, CIBER-U combines product dissection activities at three universities with two digital design repositories, CAD modeling and animation, video, MediaWiki technology, multimedia, and undergraduate summer research experiences to enable cyberinfrastructure-based product dissection activities. Nearly 700 students have participated in the Phase I efforts of CIBER-U, which have focused primarily on generating, capturing, and storing data in two digital design repositories. Lessons learned from these efforts are presented from the students’ perspectives as well as that of the faculty in both engineering and computer science. The implications for implementing CIBER-U on a national scale are discussed along with ongoing research.

2007;():35-44. doi:10.1115/DETC2007-35710.

Engineering of complex systems often involves teamwork. The team members must work together to identify requirements, explore design spaces, generate design alternatives, and make both interactive and joint design decisions. Due to the latency of information and disciplinary differences, it is often difficult for team members to reach agreement when needed. Negotiation has been studied as a method for facilitating information exchange, mutual understanding, and joint decision-making. In our previous work, we introduced an argumentation-based negotiation framework to support collaborative design. In this paper, we present an experimental study conducted to assess the impact of this negotiation support approach on the process and the outcome of collaborative design. The results of the experiment show both the positive effects and the limitations of the approach.

Topics: Design , Negotiation
2007;():45-54. doi:10.1115/DETC2007-35832.

In this paper we explore the use of text processing technology for on-line assessment in an engineering design project class. We present results from a 5-week classroom study in a capstone engineering design course in which we explore the potential benefits of such technology for student learning in this context. Furthermore, we present results from ongoing work assessing student productivity based on features extracted from their conversational behavior on the course discussion board. While we found that typical shallow productivity measures, such as the number of posts, length of posts, or number of files committed, have no correlation with instructor-assigned grades, we can achieve a substantial improvement using simple linguistic patterns extracted from on-line conversational behavior.

2007;():55-67. doi:10.1115/DETC2007-35378.

The purpose of this study is to analyze team interaction and team creativity performance in the conceptual design activities of student design teams composed according to personal creativity modes. Experimental-group design teams conducted a conceptual design task after a teamwork practice activity that immediately followed a self-awareness activity for personal creativity modes. Their design results were evaluated on the novelty and resolution dimensions of the Creative Product Semantic Scale. The results show that teams in the experimental group achieved higher scores than those in the control group, which had no teamwork practice activity. We also conducted a detailed team interaction analysis of protocol data, using the Interaction Process Analysis method, for a diverse team composed of various creativity modes and a uniform team composed of a single creativity mode. The interaction patterns of the diverse team's members varied considerably from individual to individual, reflecting their personal creativity modes, while those of the uniform team's members were almost identical. These findings suggest that knowing team members' personal creativity modes could improve team creativity and that personal creativity modes could affect the way a design team interacts.

2007;():69-78. doi:10.1115/DETC2007-35395.

The duality between biological systems and engineering systems exists in the pursuit of economical and efficient solutions. By adapting biological design principles, nature’s technology can be harnessed. In this paper, we present a systematic method for reverse engineering biological systems to assist the designer in searching for solutions in nature to current engineering problems. Specifically, we present methods for decomposing the physical and functional biological architectures, representing dynamic functions, and abstracting biological design principles to guide conceptual design. We illustrate this method with an example of the design of a variable stiffness skin for a morphable airplane wing based on the mutable connective tissue of the sea cucumber.

2007;():79-92. doi:10.1115/DETC2007-35604.

The natural world provides numerous cases for analogy and inspiration. From simple cases such as hook-and-latch attachments to articulated-wing flying vehicles, nature provides many sources for ideas. Though biological systems provide a wealth of elegant and ingenious approaches to problem solving, several challenges prevent designers from fully transferring the insight of the biological world into the designed world. This paper describes how those challenges can be overcome through functional analogy. Through the creation of a function-based repository, designers can find biomimetic solutions by searching on the function for which a solution is needed. A biomimetic function-based repository enables learning, practicing, and researching designers to fully leverage the elegance and insight of the natural world. In this paper, we present initial efforts at functionally modeling natural systems and then transferring the principles of the natural system to an engineered system. Four case studies are presented; each includes a biological solution to a problem found in nature and engineered solutions corresponding to the high-level functionality of the biological solution, e.g., a fly's winged flight and a flapping-wing aircraft. The case studies show that unique, creative engineered solutions can be generated through functional analogy with nature.

2007;():93-102. doi:10.1115/DETC2007-35662.

This paper presents a novel approach for the design synthesis of continuous inhomogeneous structures. The objective of this research is to mimic biological principles of growth and evolution in order to explore a set of novel design configurations characterized by high complexity in both topology and mechanical properties. The ability to synthesize novel structures is explored from an engineering point of view, where the use of inhomogeneous properties can increase the ability of a structure to support external loads while minimizing weight. Based on the observation that biological structures are inhomogeneous, in the sense that different cells have different properties, an artificial environment has been created that models the biological growth procedure with cells serving as the building blocks of the structure. Cell differentiation is expressed only in terms of mechanical properties. Each cell contains an identical artificial DNA sequence, which is executed during the growth procedure and stops once the structure meets desired engineering requirements, such as supporting loads. The DNA contains sets of rules encoded as a gene string. A relatively simple DNA sequence can give rise to complex inhomogeneous structures; small changes in the rules can lead to significantly different structures with different properties. The representation of these rules is ideally suited for evolution, which will be applied in future work to evolve rule sets that grow and develop high-performance inhomogeneous structures.
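
The growth mechanism described above can be sketched in a few lines, under invented assumptions: cells on a grid carry a single differentiated property (stiffness), an identical rule set (the "DNA") generates children, and growth stops when a simple requirement is met. The rules and stopping criterion are illustrative placeholders, not the authors' formulation.

```python
import random

GRID = 21
structure = {(GRID // 2, 0): 1.0}  # seed cell at the supported base

def dna_rules(cell, stiffness):
    """Each rule maps a parent cell to candidate child cells with properties."""
    x, y = cell
    return [((x, y + 1), stiffness * 0.9),             # grow upward, softer
            ((x - 1, y), min(1.0, stiffness * 1.1)),   # grow sideways, stiffer
            ((x + 1, y), min(1.0, stiffness * 1.1))]

def grow(steps=200):
    for _ in range(steps):
        cell, s = random.choice(list(structure.items()))
        for child, cs in dna_rules(cell, s):
            if 0 <= child[0] < GRID and 0 <= child[1] < GRID:
                structure.setdefault(child, cs)
        if any(y == GRID - 1 for _, y in structure):   # reached the load point
            break                                      # requirement met: stop

grow()
print(f"{len(structure)} cells, stiffness range "
      f"{min(structure.values()):.2f}-{max(structure.values()):.2f}")
```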

Topics: Design
2007;():103-113. doi:10.1115/DETC2007-35776.

This work explores the representation of biological phenomena as stimuli for designers in biomimetic design. We describe a study in which participants were asked to solve a micro-assembly problem given a set of biological representations of leaf abscission for inspiration. The visual aids presented to the designers are investigated, and the use of functional models of biological phenomena in particular is critiqued. The designs resulting from the study are classified, and theories are drawn as to the possible influences of the biological representations. Observations, retrospective conversations with participants, and analogical reasoning classifications are used to determine positive qualities as well as areas for improvement in representing the biological domain. Findings suggest that designers need an explicit list of all possible inherent biological strategies, previously extracted using function structures with objective graph grammar rules. Challenges specific to this type of study are discussed, and possible improvements to the representative aids are outlined.

Topics: Design
2007;():115-124. doi:10.1115/DETC2007-34240.

There is growing psychological research linking affect to the content and process of thinking. This paper deals with one aspect of affect and social cognition: the interaction of affect and shared understanding. It is theorized that affect may have cognitive processing consequences for shared understanding in design. To investigate this question, this paper develops a research method that brings together theories and instruments from cognitive science, linguistics, and design studies to study the link between affect and shared understanding in design. First, the paper reviews a framework for analyzing the process of creating shared understanding. Second, the paper presents a linguistic framework and analysis technique for extracting affective content from language, based on the explicit, conscious expression of affect through favorable and unfavorable attitudes towards specific subjects. Third, the paper proposes a model of shared understanding that is interdependent, in part, with affective processing. The linguistic analysis and shared understanding analysis framework are applied to a transcript of collaborative design to illustrate how the affective content of designers’ communication shifts design activities. We find that our research method allows affect to be observed concurrently with cognitive processing and that, owing to the motivational consequences of affect, it produces an axis of evaluation that could shed light on how affect organizes and drives the outcomes of design thinking.

2007;():125-134. doi:10.1115/DETC2007-34874.

A metaphor allows us to understand one concept in terms of another, enriching our mental imagery and imbuing concepts with meaningful attributes. Metaphors are well studied in design, for example in branding, communication, and the design of computer interfaces. Less well appreciated is that our understanding of fundamental design concepts, including design itself, is metaphorical. When we treat design as a process of exploration, or when we get together to “bounce ideas off each other,” we understand the abstract concepts of design and ideas metaphorically; ideas don’t literally bounce, nor are we literally exploring when we design. Our research is a descriptive study of the metaphors employed in design. It is the first phase of a longer research effort to understand the impact of design metaphors on creativity. We investigated whether design authors employed different metaphors for the overall design process and consequently for core design concepts. To address this hypothesis we analyzed the language used in the concept generation chapters of nine widely used engineering design textbooks. We coded each metaphorical phrase, such as “finding another route to a solution,” and determined the core metaphors in use for common design concepts, including ideas, problems, solutions, concepts, design, the design process, and user needs. We confirmed that authors with differing views of design do indeed emphasize different metaphors for core design concepts. We close by discussing the implications of some common metaphors, in particular that Ideas Are Physical Objects.

2007;():135-144. doi:10.1115/DETC2007-34903.

The objective of this paper is to present a series of proposed cognitive models for specific components of design ideation. Each model attempts to explain specific cognitive processes occurring during ideation. Every model presented here is constructed with elements (i.e. cognitive processes) and theories available from cognitive psychology, human problem solving, mental imagery, and visual thinking. Every model in turn is an element of a higher-level cognitive model of design ideation. These models provide a better understanding of the components involved during ideation and their relationships.

Topics: Design
2007;():145-159. doi:10.1115/DETC2007-34948.

Design by analogy is a noted approach for conceptual design. This paper seeks to develop a robust design-by-analogy method through a series of three experiments focusing on the influence of representation on the design-by-analogy process. The first two experiments evaluate the effects of analogous product descriptions, presented in either domain-general or domain-specific language, on a designer's ability to later use the product to solve a novel design problem. Six different design problems with corresponding analogous products are evaluated. The third experiment in the series uses a factorial design to explore the effects of the representation (domain-specific or domain-general descriptions) of both the design problem and the analogous product on the designer's ability to develop solutions to novel design problems. Results show that a more general representation of the analogous product facilitates its later use for a novel design problem. The highest rates of success occur when design problems are presented in domain-specific representations and the analogous product in a domain-general representation. Other insights for the development of design-by-analogy methods and tools are also discussed.

Topics: Design , Innovation
2007;():161-172. doi:10.1115/DETC2007-35772.

Natural language, which is closely linked to thought and reasoning, has been recognized as important to the design process. However, there is little work specifically on understanding the use of language as design stimuli. This paper presents the results of an experiment in which verbal protocols were used to elicit information on how designers used semantic stimuli, presented as words related to the problem, during concept generation. We examined stimulus use at the word level with respect to part-of-speech classes, e.g., verbs, nouns, and noun modifiers, and also how stimuli syntactically relate to other words and phrases that represent ideas produced by the participant. While all stimuli were provided in verb form, we found that participants often used stimuli in noun form, but that more new ideas were introduced while using stimuli as verbs and noun modifiers. Frequent use of stimuli in noun form appears to confirm that people tend to think in terms of objects. However, noun use of stimuli introduced fewer new ideas and therefore contributed less to concept formation in our study. This work highlights a possible gap between how people may tend to think, e.g., in terms of nouns, and how new ideas may be more frequently introduced, e.g., through verbs and noun modifiers. Addressing this gap may enable the development of a language-based concept generation support system that encourages innovative and creative solutions for engineering problems.

Topics: Design
2007;():173-182. doi:10.1115/DETC2007-34364.

Comparison and ranking of solutions are central tasks of the design process. Designers have to deal with decisions simultaneously involving multiple criteria. Those criteria are often inconsistent in the sense that they are expressed according to different types of metrics: usual engineering performance indicators are expressed in physical quantities (i.e., the SI system), while indicators such as preference functions are “measured” on other, qualitative scales. This limits the scientific consistency of design, because a coherent scientific framework first requires the creation of a unified list of fundamental properties. A combined analysis of measurement theory, General Design Theory (GDT), and dimensional analysis gives useful insight for creating guidelines toward a coherent measurement system. This article establishes a list of fundamental requirements. We expect these guidelines to help engineers and designers become more aware of the drawbacks of incorrect comparison procedures and the limitations of weak measurement scales. The article analyzes the fundamental aspects of comparison available in major scientific publications, synthesizes these basic concepts, and unifies them from a design perspective. A practical design methodology using the fundamental results of this article as prerequisites has been implemented by the authors.

2007;():183-192. doi:10.1115/DETC2007-34652.

Understanding how and why changes propagate during engineering design is critical because most products and systems emerge from predecessors rather than through clean-sheet design. This paper applies change propagation analysis methods and extends prior reasoning through examination of a large industrial data set of 41,500 change requests spanning 8 years of the design of a complex sensor system. Different methods are used to analyze the data, and the results are compared to each other and evaluated in the context of previous findings. In particular, the networks of connected parent, child, and sibling changes are resolved over time and mapped to 46 subsystem areas. A normalized change propagation index (CPI) is then developed, showing the relative position of each area on the absorber-multiplier spectrum between −1 and +1: multipliers send out more changes than they receive and are good candidates for more focused change management. Another interesting finding is the quantitative confirmation of the “ripple” change pattern; unlike the earlier prediction, however, the peak of cyclical change activity occurred late in the program, driven by systems integration and functional testing. The patterns that emerged from the data offer clear implications for technical change management approaches in system design.
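
A minimal sketch of how such a normalized index can be computed per subsystem area is shown below. The normalization used (net outgoing changes over total changes) is one plausible reading of the absorber-multiplier scale described above, and the area names and change counts are invented.

```python
def cpi(changes_out: int, changes_in: int) -> float:
    """Relative position of a subsystem area on the absorber-multiplier axis:
    +1 for a pure multiplier (only sends changes), -1 for a pure absorber."""
    total = changes_out + changes_in
    return 0.0 if total == 0 else (changes_out - changes_in) / total

# changes propagated from each area (out) vs. into it (in), per area name
areas = {"optics": (120, 40), "power": (15, 90), "software": (60, 60)}
for name, (out_, in_) in areas.items():
    print(f"{name:>8}: CPI = {cpi(out_, in_):+.2f}")
# optics +0.50 (multiplier), power -0.71 (absorber), software 0.00 (carrier)
```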

2007;():193-203. doi:10.1115/DETC2007-34758.

This paper evaluates a method known as Pugh Controlled Convergence and its relationship to recent developments in design theory. Computer-executable models are proposed that simulate a team of people engaged in iterated cycles of evaluation, ideation, and investigation. The models suggest that: 1) convergence of the set of design concepts is facilitated by the selection of a strong datum concept; 2) iterated use of an evaluation matrix can facilitate convergence of expert opinion, especially if used to plan investigations conducted between matrix runs; and 3) ideation stimulated by the Pugh matrices can provide large benefits, both by improving the set of alternatives and by facilitating convergence. As a basis of comparison, alternatives to Pugh’s method were assessed, such as using a single summary criterion or a Borda count. The models we developed suggest that Pugh’s method, under a substantial range of assumptions, results in better design outcomes than these alternative procedures.
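
For readers unfamiliar with the two aggregation schemes being compared, the sketch below contrasts a single pass of a Pugh evaluation matrix with a Borda count on the same invented ratings. It is illustrative only and is not the paper's simulation model.

```python
raw = {  # criterion -> concept -> analyst rating (higher is better); invented
    "cost":   {"A": 3, "B": 5, "C": 4},
    "weight": {"A": 4, "B": 2, "C": 5},
    "safety": {"A": 5, "B": 4, "C": 3},
}
concepts, datum = ["A", "B", "C"], "A"  # A chosen as the datum concept

def pugh_totals():
    """+1 / 0 / -1 versus the datum on each criterion, summed per concept."""
    sign = lambda d: (d > 0) - (d < 0)
    return {c: sum(sign(r[c] - r[datum]) for r in raw.values()) for c in concepts}

def borda_totals():
    """Rank concepts per criterion; best of n gets n-1 points, worst gets 0."""
    totals = dict.fromkeys(concepts, 0)
    for r in raw.values():
        for pts, c in enumerate(sorted(concepts, key=lambda k: r[k])):
            totals[c] += pts
    return totals

print("Pugh :", pugh_totals())   # datum A scores 0 by construction
print("Borda:", borda_totals())  # Borda happens to tie all three concepts here
```

On these numbers the Borda count cannot separate the concepts while the Pugh matrix does, which hints at why the choice of aggregation scheme matters in the comparison above.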

2007;():205-216. doi:10.1115/DETC2007-35073.

Selecting optimum concepts for a system and its subsystems in the conceptual design stage involves uncertainties due to imperfect information about customer preferences (market shares), cost of the system developed from each concept, and feasibility of new technology used in the new system. When analytical relationships between system performance and system inputs or parameters are unknown in the early system development stage, one approach to quantify the goodness of a concept is to use rating scales. This paper studies the effects of variations (precisions) in rating scales and in cost estimation for evaluating the goodness of system module concepts (e.g., sub-systems, assemblies, subassemblies, and parts). This paper presents a global sensitivity analysis (GSA) in perceptual concept evaluation, three probability measures for evaluating and selecting optimum concepts in GSA, and one-set-of-factors-at-a-time GSA to identify the sets of factors that cause significant variations in concept evaluation outcomes.
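
The kind of question the one-set-of-factors-at-a-time GSA asks can be illustrated with a toy Monte Carlo experiment: freeze one set of factors at a time (rating imprecision, then cost-estimate imprecision) and observe how much variance remains in the concept evaluation outcome. The score model and noise levels below are invented.

```python
import random

def concept_score(rating_noise, cost_noise):
    rating = 7 + random.uniform(-rating_noise, rating_noise)   # 10-point scale
    cost = 100 * max(0.2, 1 + random.gauss(0, cost_noise))     # $k estimate
    return rating / cost   # toy "goodness per dollar" merit of the concept

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

N = 20_000
base = variance([concept_score(1.5, 0.20) for _ in range(N)])
for frozen, (rn, cn) in {"ratings": (0.0, 0.20), "costs": (1.5, 0.0)}.items():
    v = variance([concept_score(rn, cn) for _ in range(N)])
    print(f"freezing {frozen}: residual variance {v / base:.0%} of baseline")
```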

2007;():217-225. doi:10.1115/DETC2007-35123.

Several tools and methods drawn from research in design and manufacturing have been successfully transplanted into industry. This paper describes a survey conducted of practicing designers and engineers in fields ranging from product development to aerospace to better understand the methodologies and metrics they employ. Preliminary results suggest that respondents found methodologies such as need finding, storyboarding, and brainstorming useful, but were less familiar with approaches such as systematic design, axiomatic design, or TRIZ. There was wide variation in what respondents felt were appropriate design outcome measures, but positive customer feedback in particular was deemed important in evaluating project performance. Finally, a broad range of design tools was cited as useful by respondents, but computer aided drawing applications in particular were rated highly. It is hoped that this survey can be of value in formulating future research goals in design theory and tool development.

2007;():227-236. doi:10.1115/DETC2007-34232.

Functional reasoning is regarded as an important asset in the engineering designer’s conceptual toolkit. Yet despite the value of functional reasoning for engineering design, a consensus view is lacking and several distinct proposals have been formulated. In this paper some of the main models for functional reasoning that are currently in use or discussed in engineering are surveyed and some of their differences clarified. The models include the Functional Basis approach of Stone and Wood [1], the Function Behavior State approach of Umeda et al. [2, 3, 4], and the Functional Reasoning approach of Chakrabarti and Bligh [5, 6]. This paper explicates differences between these approaches relating to: (1) representations of function and how they are influenced by design aims and form solutions, and (2) functional decomposition strategies, taken as the reasoning from overall artifact functions to sub-functions, and how these decomposition strategies are influenced by the use of existing engineering design knowledge bases.

2007;():237-247. doi:10.1115/DETC2007-35446.

Significant expenditure and effort are devoted to the never-ending search for reduced product development lifecycle time and increased efficiency. The development of Semantic Web technologies promises a future in which knowledge interchange happens seamlessly in open, distributed environments. This paper illustrates how Semantic Web technologies, in their current state of development, can be effectively used to deploy an infrastructure supporting innovation principles and the engineering design process. A mechanical design was chosen to model the initial phase of a design project using semantic ontologies; this included a set of design requirements, the creation of a functional model, and the use of the Theory of Inventive Problem Solving (TIPS). The ontology development strategy is built on a combination of larger domain knowledge ontologies and simple process ontologies. Linked user requirements, engineering design, and functional modeling ontologies facilitated the application of TIPS through a set of semantic rules that generate design recommendations. The developed semantic knowledge structure exemplifies a practical implementation of a functional model that serves both as a record of the design process and as a platform from which to gain additional usefulness from the stored information.

Topics: Design
2007;():249-261. doi:10.1115/DETC2007-35583.

This paper builds on previous concept generation techniques explored at the University of Missouri - Rolla and presents an interactive concept generation tool aimed specifically at the early concept generation phase of the design process. Research into automated concept generation design theories led to two distinct design tools: an automated morphological search that presents the designer with a static matrix of solutions satisfying the desired input functionality, and a computational concept generation algorithm that presents the designer with a static list of compatible component chains satisfying that functionality. Merging the automated morphological matrix with the concept generation algorithm yields an interactive concept generator that allows the user to select specific solution components while receiving instantaneous feedback on component compatibility. The research presented evaluates the conceptual results of this hybrid morphological matrix approach and compares interactively constructed solutions to those returned by the non-interactive automated morphological matrix generator, using a dog food sample packet counter as a case study.
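
A minimal sketch of the interactive idea is given below: a morphological matrix whose remaining options are pruned for compatibility as the user commits to components. The functions, components, and compatibility constraints are invented placeholders, not the tool's actual knowledge base.

```python
matrix = {
    "count packets": ["optical sensor", "mechanical counter"],
    "move packets":  ["belt conveyor", "vibratory feeder"],
    "signal total":  ["LCD display", "LED array"],
}
incompatible = {("vibratory feeder", "optical sensor")}  # symmetric pairs

def ok(a, b):
    return (a, b) not in incompatible and (b, a) not in incompatible

def remaining_options(chosen):
    """Instantaneous feedback: prune options that clash with any selection."""
    return {f: [c for c in opts if all(ok(c, s) for s in chosen.values())]
            for f, opts in matrix.items() if f not in chosen}

selection = {"move packets": "vibratory feeder"}
print(remaining_options(selection))
# 'count packets' now offers only 'mechanical counter'
```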

Topics: Design
2007;():263-275. doi:10.1115/DETC2007-35620.

Conceptual design is a vital stage in the development of any product, and its importance only increases with the complexity of a design. Functional modeling with the Functional Basis provides a framework for the conceptual design of electromechanical products. This framework is just as applicable to the conceptual design of automated solutions, where an engineered product with components spanning multiple engineering domains is designed to replace or aid a human and his or her tools in a human-centric process. This paper presents research toward simplifying the generation of conceptual functional models for automation solutions. The methodology involves creating functional and process models to fully explore existing human-operated tasks for potential automation. The generated functional and process models are strategically combined to create a new conceptual functional model for an automation solution that could automate the human-centric task. The methodology is applied to the generation of a functional model for a conceptual automation solution, and the conceptual automation solutions it generates are compared to existing automation solutions to demonstrate its effectiveness.

2007;():277-287. doi:10.1115/DETC2007-34526.

The theory of affordances has been adapted by the authors into a high-level approach to design known as affordance based design. One of the features that distinguishes the affordance based approach from function based approaches is that affordances are form dependent whereas functions are form independent. While delaying consideration of form can help maintain design freedom, considering the structure of multiple concept solutions early in the design process can preserve design freedom while allowing the designer to manipulate and refine concept structures and make prototypes early in the design process. In this paper we present a tool, the affordance-structure matrix, that aids the designer in mapping artifact structures to positive and negative affordances for the project. The affordance structure matrix can be used as an attention directing tool, focusing on the correlations within an individual concept architecture, or as a concept exploration tool, comparing the affordance-structure linkages across multiple concepts. The use of the affordance structure matrix is demonstrated using a case study examining two concept architectures for a household vacuum cleaner. The features of the affordance structure matrix are also contrasted with other existing matrix based tools for engineering design.

Topics: Design
2007;():289-296. doi:10.1115/DETC2007-35302.

An experiment was conducted to investigate the effectiveness of empathic lead user analysis for uncovering latent customer needs that could lead to breakthrough product ideas. Empathic lead users are defined as ordinary customers (or designers) who are transformed into lead users by experiencing the product in radically new ways, via extraordinary user experiences. These extraordinary experiences may include modifications of the usage environment or the way in which the customer interacts with the product. A procedure for designing and conducting empathic lead user interviews is introduced in this paper. Results are reported for a trial study in which the empathic lead user technique is compared with verbal and articulated use interviews for a common consumer product (a two-person tent). Empathic lead user interviews are observed to have a significantly positive effect on latent needs discovery in the trial study, leading to a five-fold increase in latent needs discovery relative to articulated use interviews with a prototype and a twenty-fold increase relative to verbal interviews without a prototype. Empathic lead user interviews emerge as a promising tool for supporting innovation and breakthrough concept generation.

2007;():297-306. doi:10.1115/DETC2007-35455.

During the formative stage of the design cycle, teams engage in a variety of tasks to arrive at a design, including selecting among design alternatives. A key notion in design alternative selection is that of “preference” in which a designer assigns priorities to a set of design choices. This paper presents a preliminary approach for extracting a projection of aggregated design team preferences from design team discussion. This approach further takes into consideration how the design preferences of a team can evolve over time as the team changes its priorities based on new design information. Two initial models are given for representing the most probable and preferred design alternative from the transcriptions of design team discussion, and for predicting how preferences might change from one time interval to the next. These models are applied to an illustrative, real-world case example.

Topics: Design , Teams
2007;():307-317. doi:10.1115/DETC2007-35675.

This paper describes an interpretation of design activity through the investigation of design questions. From a number of previous studies, two types of question have been identified: 1) reasoning questions and 2) strategic questions. Strategic questions are part of an experienced designer’s approach to solving a design task. The paper describes how designers progress their tasks by asking questions at both the reasoning and strategic levels. Transcripts from protocol analysis have been examined to identify both reasoning questions and strategic questions, which are discussed together with their relation to a problem-solving model. The research aims to contribute to an understanding of design activity.

Topics: Design
2007;():319-328. doi:10.1115/DETC2007-35864.

Interior design of space differs from product design in the following respects: the space must afford multiple users at the same time and afford appropriate interactions between the humans and objects inside it. This paper presents a case study of the interior design of a conference room based on the affordance concept. We analyzed all user tasks in a conference room based on human activities, divided into human-object and human-human interactions. A function decomposition of every object in the conference room was conducted. High-level functions such as “configure the space” are used to satisfy the given conditions on the number of occupants, the type of conference, and so forth. The Function-Task Interaction (FTI) method was enhanced to analyze the interactions between functions and user tasks. Many low-level affordances were extracted, and high-level affordances such as enter/exit-ability, prepare-ability, present-ability, discuss-ability, and conclude-ability were extracted by grouping low-level affordances in the enhanced FTI matrix. In addition, a benchmarking simulation was conducted for several existing conference rooms; the results confirmed that the extracted affordances can serve as a checklist and as good guidance for the interior design process.

Topics: Design
2007;():329-342. doi:10.1115/DETC2007-34761.

This paper reports on an exploratory study of how the architecture of a software product evolves over time. Because software is embedded in many of today’s complex products, and it is prone to relatively rapid change, it is instructive to study software architecture evolution for general insights into product design. We use metrics to capture the intrinsic complexity of software architectures as they evolve through successive generations (version releases). We introduce a set of product representations and metrics that take into account two important features used to manage the complexity in software products: layers and modules. We also capture organizational data associated with the product under development. We propose a three-step approach for the analysis and illustrate it using successive versions of an open source product, Ant. One of our findings is that software architectures seem to evolve in a non-linear manner similar to the S-shaped curve that characterizes technology evolution at the industry level. We also find several parallel patterns among architectural and organizational dynamics. Implications for research and practice are discussed.

2007;():343-352. doi:10.1115/DETC2007-34871.

The central role of modularity is becoming more and more apparent in the design of complex products and systems. The question frequently arises of how modularity can be measured. To better understand the degree of modularity, we developed two metrics based on the design structure matrix (DSM). The non-zero fraction (NZF) captures the coupling density of interconnections between components, while the singular value modularity index (SMI) measures the degree of modularity. Both metrics yield values between 0 and 1. These metrics are applied to 15 systems and products. We show that real products typically have NZF values between 0.05 and 0.4 and SMI values between 0.05 (very integral) and 0.95 (very modular). A randomly generated DSM population of equal size and density exhibits SMI values bounded in the range from 0.25 to 0.45. We conclude that neither a high degree of modularity nor strong integrality occurs accidentally; both are the result of deliberate design. In particular, we show that a more integral design will emerge if a functionally equivalent product is designed to be portable. The main advantage of SMI is that it enables analysis of the degree of modularity of any product or system independent of subjective module choices.
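
The sketch below computes NZF directly from its definition; the accompanying singular-value-based index is only a plausible stand-in for SMI (slow singular-value decay read as modular, fast decay as integral), since the paper's exact fitting procedure is not reproduced here.

```python
import numpy as np

def nzf(dsm: np.ndarray) -> float:
    """Fraction of possible off-diagonal couplings that are present."""
    n = dsm.shape[0]
    off = dsm.astype(bool) & ~np.eye(n, dtype=bool)
    return off.sum() / (n * (n - 1))

def modularity_index(dsm: np.ndarray) -> float:
    """Slow singular-value decay -> modular (near 1); fast decay -> integral."""
    s = np.linalg.svd(dsm.astype(float), compute_uv=False)
    s = s / s[0]                       # normalize by largest singular value
    n = len(s)
    keep = s > 1e-12
    # exponential decay rate fitted by least squares on log(singular values)
    alpha = -np.polyfit(np.arange(n)[keep], np.log(s[keep]), 1)[0] * n
    return float(np.clip(1.0 - alpha / n, 0.0, 1.0))

block_diag = np.kron(np.eye(3), np.ones((3, 3)))   # perfectly modular DSM
print(nzf(block_diag), modularity_index(block_diag))   # 0.25, 1.0
```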

Topics: Density
2007;():353-361. doi:10.1115/DETC2007-34885.

As today’s marketplace forces manufacturers to provide nearly customized products that satisfy individual customer requirements while achieving economies of scale in production, product family design and platform-based product development have garnered their attention. Determining which elements (attributes, functions, components, etc.) should be made common, variable, or unique across a product family is the critical step in the successful implementation of product families and product platforms. Therefore, the inherent challenge in product family design is to balance the tradeoff between product commonality (how well components and functions can be reused across a product family) and variety (the range of different products in a product family). There are opportunities to develop tools that directly address the commonality/variety tradeoff at the product family planning stage in a way that supports the engineering design process. In this paper, we develop a matrix-based, qualitative design tool – the Attribute-Based Clustering Methodology (ABCM) – that enables the design of product families to better satisfy the ideal commonality/variety tradeoff as determined by a company’s competitive focus. The ABCM identifies component commonality opportunities in product families without sacrificing product variety by analyzing product attributes across the family. This paper focuses on the ABCM as used in new product family design and on how the ABCM can be used to cluster product attributes into potential modules and product platforms. It is intended as a starting point, an opening set of questions, and a framework for the general solution to the problem of a qualitative design tool for product family design that directly addresses the commonality/variety tradeoff. Development of the ABCM starts with the classification of existing product attributes into three categories: common, unique, and variable. The attributes are then clustered into platforms and differentiating modules based on their occurrences, their target value ranges (partitioning the target values of each product attribute into achievable ranges), and the manner in which each range changes across the target market segments. The ABCM can be used as a qualitative guideline in product family design. In new product family design, it identifies which elements (functions and components) should be clustered into a common platform and which into differentiating modules, based on an analysis of the product attributes, their occurrences, and their target values across the product family. In product family redesign, the ABCM identifies elements inappropriately included in a platform or inappropriately clustered into differentiating modules by comparing the ideal clustering with the actual clustering.

Topics: Design
2007;():363-371. doi:10.1115/DETC2007-35880.

Since parts and systems are closely linked to each other in complex engineering products, a change in a single part or system causes changes in other parts or systems, which in turn propagate further. This study proposes a novel approach to analyzing design change impacts in modular products, with consideration of change propagation. The proposed approach measures the relative change impact (RCI) of each part or module on the whole product using the analytic network process (ANP). A case study on an automobile system is presented to illustrate the proposed approach, and the practical use of RCI is discussed.
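
A minimal sketch of an ANP-style calculation is shown below: direct change dependencies between modules are assembled into a column-stochastic supermatrix whose limiting priorities serve as each module's RCI. The modules and dependency weights are invented, and the paper's actual network model is richer.

```python
import numpy as np

modules = ["engine", "transmission", "chassis", "electronics"]
direct = np.array([  # direct[i, j]: influence of a change in j on i (invented)
    [0.0, 0.6, 0.1, 0.3],
    [0.5, 0.0, 0.2, 0.2],
    [0.2, 0.3, 0.0, 0.1],
    [0.3, 0.1, 0.7, 0.0],
])
W = direct / direct.sum(axis=0)         # column-stochastic supermatrix
limit = np.linalg.matrix_power(W, 200)  # converges to the limit priorities
rci = limit[:, 0]                       # any column: the stationary vector
for m, r in sorted(zip(modules, rci), key=lambda t: -t[1]):
    print(f"{m:>12}: RCI = {r:.2f}")
```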

Topics: Design , Networks
2007;():373-384. doi:10.1115/DETC2007-35915.

The paper proposes an intermediate modeling ground that bridges the gap between engineering design models and marketing models for the development of platform-based product families. In this model, each variant in the product family is assumed to contribute a percentage of the overall market coverage inside a target market segment, and we wish to maximize this coverage subject to an available development budget. Following the conceptual engineering design phase of the product family, the model optimizes the initial investment in the platform, the commonality level between variants, and the number of variants to be produced in order to maximize market coverage. An application of the model to a drill product family demonstrates its usefulness.

2007;():385-394. doi:10.1115/DETC2007-35386.

Avoiding product recalls and failures is a must for companies to remain successful in the consumer product industry. Large numbers of failed products result in significant profit losses due to repair or replacement costs, as well as untraceable costs of reputation damage among the customer base. Probabilistic risk assessment (PRA) is key to preventing product failures. When risks are adequately identified and assessed, potential product failures can be mitigated, saving lives as well as company profit. Risk mitigation is more effective the earlier it is applied in the design process; therefore, identifying and assessing risk through PRA techniques is most beneficial to the company when employed early in the design process. This paper presents new techniques for performing four common PRAs, preliminary hazards analysis (PHA), failure mode and effect analysis (FMEA), fault tree analysis (FTA), and event tree analysis (ETA), during the conceptual phase of design, when products have yet to assume a physical form. The backbone for applying these PRA techniques during conceptual design is the Risk in Early Design (RED) method. RED generates a listing of potential product risks based on historical failure occurrences. These risks are categorized by function, which enables this preliminary risk assessment to be performed during conceptual design. A risk analysis of a bicycle, related to a Consumer Product Safety Commission recall, demonstrates the powerful failure prevention ability of RED and PRA during conceptual product design.

2007;():395-405. doi:10.1115/DETC2007-35421.

In this paper, the Functional Failure Identification and Propagation (FFIP) framework is introduced as a novel approach for evaluating and assessing functional failure risk of physical systems during conceptual design. The task of FFIP is to estimate potential faults and their propagation paths under critical event scenarios. The framework is based on combining hierarchical system models of functionality and configuration, with behavioral simulation and qualitative reasoning. The main advantage of the method is that it allows the analysis of functional failures and fault propagation at a highly abstract system concept level before any potentially high-cost design commitments are made. As a result, it provides the designers and system engineers with a means of designing out functional failures where possible and designing in the capability to detect and mitigate failures early on in the design process. Application of the presented method to a fluidic system example demonstrates these capabilities.
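
The flavor of the framework's qualitative propagation reasoning can be sketched as follows, with an invented function-flow graph standing in for the paper's combined functional/configuration models and behavioral simulation.

```python
downstream = {  # function -> functions fed by its output flow (invented)
    "store fluid":     ["regulate flow"],
    "regulate flow":   ["transport fluid"],
    "transport fluid": ["convert energy"],
    "convert energy":  [],
}
state = dict.fromkeys(downstream, "operating")

def inject_fault(function):
    """Qualitative reasoning: a lost function starves its downstream flows."""
    state[function] = "lost"
    frontier = list(downstream[function])
    while frontier:
        f = frontier.pop()
        if state[f] == "operating":
            state[f] = "degraded (input flow lost)"
            frontier.extend(downstream[f])

inject_fault("regulate flow")   # critical event: valve stuck closed
for f, s in state.items():
    print(f"{f:>16}: {s}")
```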

2007;():407-420. doi:10.1115/DETC2007-35475.

When designing a product, the earlier the potential risks can be identified, the more costs can be saved, as it is easier to modify a design in its early stages. Several methods exist to analyze the risk in a system, but all require a mature design. However, by applying the concept of “common interfaces” to a functional model and utilizing a historical knowledge base, it is possible to analyze chains of failures during the conceptual phase of product design. This paper presents a method based on these “common interfaces” to be used in conjunction with other methods such as Risk in Early Design in order to allow a more complete risk analysis during the conceptual design phase. Finally, application of this method is demonstrated in a design setting by applying it to a thermal control subsystem.

2007;():421-431. doi:10.1115/DETC2007-35478.

Risk is becoming an important factor in facilitating resource allocation in engineering design because of its essential role in evaluating functional reliability and mitigating system failures. In this work, we aim at expanding existing quantitative risk modeling methods to collaborative system design regarding resource allocation in a distributed environment, where an overlapped risk item can affect multiple stakeholders and, correspondingly, be examined by multiple evaluators simultaneously. Because of different perspectives and limited local information, various evaluators (responsible for the same or different components of a system), though adopting the same risk definition and mathematical calculation, can still yield unsatisfactory global results, such as inconsistent probability or confusing consequence evaluations, which can then create barriers to achieving agreement or acceptable discrepancies among the evaluators involved in the collaborative system design. Built upon our existing work, a Risk-based Distributed Resource Allocation Methodology (R-DRAM) is developed to help a system manager allocate limited resources to stakeholders, and further to components of the targeted system, for maximum global risk reduction. Besides probability and consequence, two additional risk properties, tolerance and hierarchy, are considered for comprehensive systematic risk design: tolerance is introduced to indicate the effective risk reduction, and hierarchy is utilized to model the comprehensive risk hierarchy. Finally, a theoretical framework based on a cost-benefit measure is developed for resource allocation. A case study demonstrates the implementation process. The preliminary investigation shows the promise of the R-DRAM in facilitating risk-based resource allocation for collaborative system design through a systematic, quantifiable approach in a distributed environment.
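
A minimal sketch of the cost-benefit idea, assuming a toy formulation: mitigation actions are selected within a budget to maximize risk reduction above a tolerance threshold. The actions, numbers, and brute-force search are illustrative; R-DRAM's hierarchy and multi-evaluator machinery are not modeled here.

```python
from itertools import combinations

actions = {  # action: (cost, risk_before, risk_after, tolerance); invented
    "redundant pump":   (40, 0.30, 0.10, 0.05),
    "sensor upgrade":   (25, 0.20, 0.12, 0.10),
    "extra inspection": (15, 0.15, 0.13, 0.10),
}
BUDGET = 60

def effective_reduction(cost_risk):
    """Only risk above the tolerance threshold counts as effective reduction."""
    cost, before, after, tol = cost_risk
    return max(before - tol, 0) - max(after - tol, 0)

best = max((set(c) for k in range(len(actions) + 1)
            for c in combinations(actions, k)
            if sum(actions[a][0] for a in c) <= BUDGET),
           key=lambda c: sum(effective_reduction(actions[a]) for a in c))
print("fund:", best)   # {'redundant pump', 'extra inspection'} on these numbers
```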

2007;():433-446. doi:10.1115/DETC2007-35673.

Understanding of the risk and reliability of systems can be enhanced by modeling the grayscale degradation of the performance of components and determining the grayscale impact on system performance. Rather than producing an estimate of the probability of the system being in either the working or the failed state, as more traditional risk and reliability modeling does, this approach produces estimates of the probability of the system being in any of a continuous range of states between fully working and completely failed. In this paper, earlier work is extended by exploring the causes of major differences between this new approach and traditional reliability analysis, and by developing a sensitivity analysis for grayscale reliability. Because the coupling effect can cause significant differences between the two approaches, the coupling effect of component degradation is explored through examples of coupled and decoupled mass-spring-damper systems. A new sensitivity measure for grayscale reliability is also developed to determine how designers can trade changes in reliability against other design criteria such as cost.
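
A minimal sketch of the grayscale idea: component states are drawn from a continuous degradation distribution and mapped through a system model, yielding probabilities for ranges of system performance rather than a single working/failed probability. The component distribution, coupling model, and thresholds below are invented.

```python
import random

def component_state():
    """0 = fully failed, 1 = fully working; beta-distributed degradation."""
    return random.betavariate(8, 2)

def system_performance(s1, s2, coupling=0.3):
    # coupling lets one component's degradation drag down the other's output
    return min(s1, s2) * (1 - coupling) + (s1 * s2) * coupling

samples = [system_performance(component_state(), component_state())
           for _ in range(100_000)]
for lo, hi in [(0.0, 0.5), (0.5, 0.8), (0.8, 1.01)]:
    p = sum(lo <= s < hi for s in samples) / len(samples)
    print(f"P({lo:.1f} <= performance < {hi:.1f}) = {p:.3f}")
```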

2007;():447-459. doi:10.1115/DETC2007-34876.

Products which transform to reveal new functionality have been a source of fascination and utility for ages. Such products, known as transformers, have previously been designed through ad hoc creativity rather than by pursuing any formal design methodology. By incorporating a design methodology and a concept generation tool for transformers, this research not only unearths further utility for these innovative and revolutionary products but also helps engineers design such devices with dexterity. The success and advantages of transformers result from added functionality while simultaneously using fewer resources and occupying less space. This paper elucidates the foundation of a methodology for the design of such transforming devices. Our basic research on transforming systems involves a combined inductive and deductive approach, uncovering transformation design principles and a novel method for designing transforming products. In the early stages of design, this method employs a unique process to extract customer needs by examining the requirement hierarchy of product usage scenarios. Such an approach broadens the scope of design and aids in identifying opportunities for transforming products while developing process-level insights and solutions catering to these needs. During the concept generation phase of design, the method exploits the transformation design principles as a novel tool to complement and expand contemporary concept generation techniques. A unique bicycle accessory that transforms from a lock to a pump and vice versa is provided as an example of the transformational design process.

Topics: Design
2007;():461-470. doi:10.1115/DETC2007-35121.

In designing products and product platforms, it is essential to consider the role of technology evolution to avoid frequent redesign costs or even premature obsolescence of key components. Taking this into account in multi-generational design is referred to as planned product innovation. Existing design tools and processes fail to delineate different technologies and therefore to capitalize on the ways technologies change within products. This paper provides a framework for technology change analysis that identifies underlying technologies and the potential for change in their intrinsic characteristics: performance level, principle of operation, and technology architecture. Measuring the three aspects of technological change separately and comparing them to their respective forecast information yields a technology’s potential for planned innovation. The technology change framework can be applied to an initial design of a product that is anticipated to undergo planned innovation. A detailed function-structure diagram and a component-based design structure matrix of the initial product design serve as inputs to the framework and result in technology change potentials for each technology. Grouping components with similar technology change potentials (in all three aspects) into independent clusters allows organizations to focus their development efforts on the clusters most ripe for innovation, with minimal disruption to the rest of the product. For product platform identification, the technology change framework is used to develop a set of four heuristics for identifying technology-based platform elements. The four heuristics require that platform elements have a low potential for change in performance level, principle of operation, and technology architecture, or have standardized interfaces. A technology focus for product platforms is necessary because forecasting and diffusion models and studies are available at the technology level rather than at the individual component level. Localizing any anticipated platform or family changes due to technology evolution (through platform formation) will minimize redesign changes and the cascade of disruptions in each variant. Future work will focus on developing step-by-step methods for technology-based clustering of products and on integrating the technology-based platform elements with other market-based and function-based platform methods to yield a truly flexible and robust product platform design method.

Topics: Design
2007;():471-481. doi:10.1115/DETC2007-35685.

Component-level reuse enables retention of value from products recovered at the end of their first lifecycle. Reuse strategies determined at the beginning of the lifecycle aim to maximize this recovered value. Decision-based design can be employed, but there are several difficulties in large-scale implementation. First, computational complexities arise: even for a product with a relatively small number of components, it becomes difficult to find the optimal component-level decisions. Second, if more than one stakeholder is involved, each interested in different attributes, the problem becomes even more difficult, due both to complexity and to Arrow’s Impossibility Theorem. However, while the preferences of the stakeholders may not be known precisely, and aggregating those preferences poses difficulties, what is usually known is a partial ordering of alternatives. This paper presents a method for exploiting the features of a solution algorithm to address these difficulties in implementing decision-based design. Heuristic methods, including non-dominated sorting genetic algorithms (NSGA), can exploit this partial ordering and reject dominated alternatives, simplifying the problem. Including attributes of interest to the various stakeholders ensures that the solutions found are practicable. One reason product reuse has not achieved critical acceptance is that the three entities involved (the customers, the manufacturer, and the government) lack common ground, which results in inaccurate aggregation of attributes; the proposed method avoids this. We illustrate our approach with a case study of component reuse in personal computers.
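
The dominance filtering that such heuristics exploit can be sketched in a few lines: given only a partial ordering over stakeholder attributes, any alternative that is weakly worse on every attribute can be rejected without aggregating preferences. The reuse plans and attribute values below are invented.

```python
plans = {  # reuse plan -> (customer value, manufacturer profit, eco score)
    "reuse CPU+RAM": (0.7, 0.6, 0.8),
    "reuse case":    (0.4, 0.7, 0.5),
    "recycle all":   (0.3, 0.5, 0.9),
    "reuse nothing": (0.2, 0.4, 0.3),
}

def dominates(a, b):
    """a dominates b: at least as good everywhere, strictly better somewhere."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

pareto = [p for p, v in plans.items()
          if not any(dominates(w, v) for q, w in plans.items() if q != p)]
print("non-dominated plans:", pareto)   # 'reuse nothing' is dominated
```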

Topics: Design
2007;():483-498. doi:10.1115/DETC2007-35695.

Product designers seek to create products that are not only robust for the current marketplace but also can be redesigned quickly and inexpensively for future changes that may be unanticipated. The capability of a design to be quickly and economically redesigned into a subsequent product offering is defined as its flexibility for future evolution. Tools are needed for innovating and evaluating products that are flexible for future evolution. In this paper, a comprehensive set of design guidelines is created for product flexibility by merging the results of two research studies—a directed patent study of notably flexible products and an empirical product study of consumer products analyzed with a product flexibility metric. Via comparison of the results of these two studies, the product flexibility guidelines derived from each study are merged, cross-validated, and revised for clarity. They are organized in categories that describe how and under what circumstances they increase flexibility for future evolution. Examples are included to illustrate each guideline. The guidelines are also applied to an example application—the design of a new guitar string changer.

Commentary by Dr. Valentin Fuster
2007;():499-511. doi:10.1115/DETC2007-35882.

In recent years, sustainable product design has become a major concern for product manufacturers. An effective way to enhance product sustainability is to design products that are easy to disassemble and recycle. An end-of-life (EOL) strategy is concerned with how to disassemble a product and what to do with each of the resulting disassembled parts. A sound understanding of the EOL strategy from the early design stage can improve the ease of disassembly and recycling in an efficient and effective manner. We introduce the novel concept of eco-architecture, a scheme by which the physical components are allocated to EOL modules. An EOL module is a physical chunk of connected components, or a feasible subassembly, that can be processed as a whole by the same EOL option without further disassembly. In this paper, a method for analyzing and optimizing the eco-architecture of a product at the architecture design stage is proposed. Using mathematical programming, it produces an optimal eco-architecture based on estimates of the economic values and costs of possible EOL modules under the given environmental regulations.

Commentary by Dr. Valentin Fuster
2007;():513-521. doi:10.1115/DETC2007-34384.

Integral Building Design is carried out by multidisciplinary design teams and aims at integrating all aspects of the different disciplines involved in the design of a building, such as architecture, construction, building physics, and building services. It involves information exchange between participants within the design process in amounts not seen before. To support this highly complex process, an Integral Building Design method is developed based on the combination of a prescriptive approach, Methodical Design, and a descriptive approach, Reflective Practice. Starting from the Methodical Design approach of van den Kroonenberg, a more reflective approach is developed. The use of Integral Design within the design process results in transparency about the design steps taken and the design decisions made. Within the design process, the extended prescriptive methodology is used as a framework for reflection on the design process itself. To ensure good information exchange between different disciplines during the conceptual phase of design, a functional structuring technique can be used: Morphological Overviews (MO). Morphology provides a structure for giving an overview of the considered functions and their solution alternatives. It is presumed that this method helps to structure the communication between the design team members and as such forms a basis for reflection on the design results by the team. The method is used in the education program at the Technische Universiteit Eindhoven and was tested in workshops for students and for professionals from the Royal Institute of Dutch Architects (BNA) and the Dutch Association of Consulting Engineers (ONRI). Over 250 professionals participated in these workshops.

Topics: Design
Commentary by Dr. Valentin Fuster
2007;():523-535. doi:10.1115/DETC2007-34523.

This paper proposes a new perspective: using graph transformation systems as a way of organizing and solving engineering design problems. This novel technique makes possible the synthesis of optimal solutions, in the form of graph topologies, for design problems. Though the concept of graph grammars has existed for several decades in the computer science literature, researchers in the field of design have only recently begun to realize the merit of using them to harness both the knowledge and the heuristics of a particular problem domain. This paper examines the fundamental challenges in applying graph transformations in a design context. The paper also presents the first topology optimization method developed specifically for domains representable by a graph grammar schema. This approach could also be used in problems such as network design (especially the placement of hubs), electric circuit design, neural networks, sheet metal, and product architecture. The abstraction afforded by graphs also enables us to tackle multidisciplinary problems found throughout engineering design. A few engineering examples are included to illustrate the power of the approach in automating the design process.
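
To make the idea of a graph transformation concrete, here is a minimal, self-contained sketch of a single rewrite step on a labeled graph; the "insert hub" rule and the labels are invented for illustration, and real graph grammar engines use far richer matching (subgraph isomorphism, application conditions) than this linear scan.

    # Toy graph-rewrite step on a labeled graph.
    # Graph: node id -> label, plus a set of undirected edges.
    nodes = {1: "source", 2: "sink"}
    edges = {(1, 2)}

    def apply_rule(nodes, edges, lhs_labels, new_label):
        """Find one edge whose endpoint labels match lhs_labels and split
        it by inserting a node with new_label (a toy 'insert hub' rule)."""
        for (a, b) in sorted(edges):
            if (nodes[a], nodes[b]) == lhs_labels:
                new_id = max(nodes) + 1
                nodes[new_id] = new_label
                edges.remove((a, b))
                edges.update({(a, new_id), (new_id, b)})
                return new_id
        return None  # rule not applicable to this graph

    apply_rule(nodes, edges, ("source", "sink"), "hub")
    print(nodes, edges)   # node 3 'hub' now sits between source and sink

Topology optimization in a grammar-based scheme amounts to searching over sequences of such rule applications, scoring each resulting graph with a domain objective.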

Topics: Design
Commentary by Dr. Valentin Fuster
2007;():537-546. doi:10.1115/DETC2007-35310.

A major problem in product design is that the design process is not clear to designers. Every time designers develop a new product, they face difficulties in determining the order in which the product attributes should be determined, especially for large, complicated products. A principal cause is that knowledge about past product designs is not well organized and thus cannot be utilized. This research therefore focuses on developing a design support system that proposes a design process in which the designer can easily reflect the important attributes of a product while facing fewer difficulties in completing the design; this is done using a top-down design support system. In a top-down design system, the designer expresses the product knowledge using elements such as entities, attributes, constraints, and interfaces. Five types of knowledge are expressed in this system: knowledge about product structure, product entities, product function, product constraints, and the product design process. Since this research focuses on the design process, extracting knowledge about the product design process is very important. To extract this knowledge, we first compare the templates of past products with the product currently being designed. Next, we calculate the consistency of the two models. Then, based on the results of the consistency calculation, we select and extract the available knowledge. We create a new process using the knowledge extracted from the design template; more than one process can be produced by combining knowledge from more than one template. Finally, we evaluate each process from three perspectives: whether it easily reflects the customer requirements, whether the design conflict difficulty is small, and whether the design loop difficulty is small. Based on the evaluation results, the designers can select a process for designing the new product. In this research, an ocean thermal energy conversion (OTEC) system is used as an example. A process that can easily design the important attributes, with a smaller possibility of breakdown than the existing process, is chosen based on the results of applying the proposed model, and a well-organized design process is achieved for the OTEC example. Future work must focus on improving the evaluation of the design process and the method for expressing design knowledge as a template.

Topics: Design
Commentary by Dr. Valentin Fuster
2007;():547-558. doi:10.1115/DETC2007-35337.

Behavioral models, mathematical models of a system's ability to meet customer needs, are useful evaluation tools throughout the design process. Currently, behavioral modeling is conducted at the component level: the models used to evaluate a system are associated exclusively with the components used to realize the product's desired functionality. As a result, it is often difficult to create behavioral models during the early stages of design, when these component solutions have not yet been identified. However, during these early stages, information about the desired functionality of the system is known. The objective of the work presented in this paper is to develop a method that uses this information, in the form of functional models, as the basis for creating behavioral models. The paper proposes a five-step method for creating behavioral models from the functional model. Significant contributions of the work include the reuse of behavioral model elements based on common functionality, the swapping of model elements of varying fidelity, a framework for mathematical concept evaluation and selection, and the linking of assumptions made during mathematical modeling to their effects on the functionality of the product and vice versa. Two examples of the method are included: a summary example of a resistor network and a complete example based on the dynamic modeling of a Formula SAE racecar. Conclusions from the work and the examples are presented along with areas of future research.

Topics: Modeling
Commentary by Dr. Valentin Fuster
2007;():559-571. doi:10.1115/DETC2007-35634.

In this paper, a published ontology of engineering design activities is modeled and analyzed using the design structure matrix (DSM). The ontology analyzed in this research provides a basis for describing engineering design activities, and subsequently design processes, in an unambiguous manner. However, it lacks a computational representation, and the information flow between activities is not adequately described; thus, complex design processes cannot be represented using the ontology. Here, the design activity ontology is modeled and analyzed using the DSM. First, the information flows between design activities are identified and their interrelationships described. Four different cases for representing the flow of information between design activities are modeled: in Cases 1 and 2, feedback between information output and information input within an activity is captured, whereas in Cases 3 and 4 it is assumed that no such feedback exists. DSM analyses, including partitioning and tearing, are performed on the model. Observations and conclusions drawn from these analyses include the further decomposition of design activities, the grouping of design activities, and the lack of information flow between seemingly related activities. Based on these observations, recommendations are made to refine the ontology. Finally, additional research is required to develop a computational ontology of design activities.
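
As a concrete illustration of the partitioning step, the sketch below finds coupled activity blocks as strongly connected components of a binary DSM via Boolean transitive closure; the 4-activity matrix is hypothetical, not the ontology's actual activities.

    # Minimal DSM partitioning sketch: dsm[i][j] = 1 means activity i
    # receives information from activity j. Coupled blocks are the
    # mutually reachable groups (Warshall transitive closure).

    def coupled_blocks(dsm):
        n = len(dsm)
        reach = [[bool(dsm[i][j]) for j in range(n)] for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
        blocks, assigned = [], set()
        for i in range(n):
            if i in assigned:
                continue
            block = {i} | {j for j in range(n) if reach[i][j] and reach[j][i]}
            blocks.append(sorted(block))
            assigned |= block
        return blocks

    # Hypothetical 4-activity DSM: activities 1 and 2 feed each other.
    dsm = [[0, 0, 0, 0],
           [0, 0, 1, 0],
           [0, 1, 0, 0],
           [1, 0, 1, 0]]
    print(coupled_blocks(dsm))   # -> [[0], [1, 2], [3]]

Blocks of size greater than one are the iteration loops that partitioning exposes and that tearing then tries to break.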

Commentary by Dr. Valentin Fuster
2007;():573-580. doi:10.1115/DETC2007-34541.

With markets becoming more and more fragmented, the management of product variety becomes ever more critical. Variety management needs to be continuously improved, especially for highly customized products. Although new techniques are constantly being developed, variety is still an issue, and there is room for complementary assistance in managing variety in the final design. In this context, we propose an original method, merge-based design, to better manage variety in a product family. The proposed method targets the already-tailored (unique) components in order to reduce the number of variant components in the family. Merge-based design also facilitates customization by enabling designers to reduce non-beneficial variety within a family. The proposed method is described and then illustrated via a case study involving two existing internal structures from single-use cameras. Finally, to highlight the potential for improving customization, a proposed new camera is created using the resulting common structure with a different exterior casing. The new method can be applied during detailed studies as well as in the early stages of the design process.

Topics: Design
Commentary by Dr. Valentin Fuster
2007;():581-591. doi:10.1115/DETC2007-34696.

Disciplined product development has been a hallmark of mature companies for many decades, resulting in shorter development cycles, reduced costs, and higher-quality products. Unfortunately, these tools and processes have typically been applied in large, well-established firms, not start-up companies. In this paper, we describe a simplified new product development process for early-stage firms and its application to a consumer product, for which the process was executed during a 14-month development cycle. The process consists of 15 steps in 3 phases with two decision gates, and provides a step-by-step guide for development, with specific call-outs as to what, when, and where tools such as market segmentation, platform planning, industrial design, and cost modeling should be applied. The proposed process is applied to the design of a new consumer product, and the case study results are discussed with specific emphasis on cost, duration, and the applicability of the process and its related engineering tools. Finally, we conclude with comments on the limitations of the proposed process, potential improvements, and future work.

Commentary by Dr. Valentin Fuster
2007;():593-601. doi:10.1115/DETC2007-34958.

MEMS projects are well known for their lengthy development times, hindering a company’s ability to make MEMS product development profitable. This paper describes a three-pronged methodology for rapid development of a piezoelectric MEMS microphone, utilizing concurrent design and prototyping, leveraged process technology, and a modified version of Quality Function Deployment (QFD). Avago Technologies has produced more than 300 million Film Bulk Acoustic Resonator (FBAR) piezoelectric band pass filters. FBAR uses Aluminum Nitride (AlN) as the piezoelectric film. Volume production of FBAR makes Avago the world’s only high volume producer of thin-film AlN products. This high-volume FBAR production process was greatly leveraged to enable fast prototyping of piezoelectric MEMS microphones. The concurrent design concept of simultaneously iterating on technical theory, finite element modeling, and prototyping with confirmation from testing was employed as another means of enabling swift development progress. QFD helps designers utilize the ‘voice of the customer’ to determine which product specifications are the most essential, and has long been used as a successful design methodology in the heavy industrial and automotive industries [1]. QFD and most other design methodologies have rarely been applied to MEMS products [2]. The second phase of QFD was modified for better application to MEMS products. Both Phase I and Phase II of QFD were then employed to guide the development process, giving insight into which elements of the design to focus on, which design concepts had the most merit, and which potential applications were the best fit to the technology. The combined effect of these three methods was extremely rapid development, enabling prototyping of hundreds of design variations and brisk improvement of measured results during the first eight months of the program. Achieving technical results quickly while assessing potential applications can aid in identifying a fast path to market. The methods used in this case study can easily be generalized for application to other MEMS development programs, potentially enabling MEMS products to reach production more quickly and generate increased profitability through addressing applications that best fit the technology and design.

Commentary by Dr. Valentin Fuster
2007;():603-611. doi:10.1115/DETC2007-34967.

Companies struggle to identify new business opportunities based on their core competence, be it products or services. When a company focuses on improving current offerings, or is too deeply involved in them, it has difficulty discovering new applications for them. Scenario Graph is an original design method for products or services that helps design teams envision four types of information during the market definition phase: potential user locations, activities associated with the location, user circumstances, and the corresponding user state. By using Scenario Graph, design teams are better able to capture new values, scenarios, and behaviors of potential customers. This knowledge, captured in the fuzzy front-end stage, can then be translated into inputs to other Design for X (DfX) tools to improve the definition of the product or service. Another benefit of the tool is that it directs the design team to discover unidentified failure modes of current offerings by identifying unintended user scenarios. In this paper, a detailed guide, along with case studies, demonstrates the usefulness of the tool when applied in the early phase of product or service development. Scenario Graph will assist design teams and managers in discovering new product or service opportunities.

Topics: Failure
Commentary by Dr. Valentin Fuster
2007;():613-624. doi:10.1115/DETC2007-35733.

Even though the literature on product and process development is extensive, not much attention has been devoted to categorizing the product development process itself. Existing work on product development processes, such as Total Design and Integrated Product and Process Design, advocates common approaches to be followed throughout the organization, without regard to product characteristics. In this paper we review several existing development methodologies. Extensions of these are categorized by their applicability to different classes of products. We propose that development processes should be matched to product attributes and organizational goals. Toward this end, we associate development processes, along with their components such as the House of Quality, Robust Design, and TRIZ, with goals such as time to market, customer-needs satisfaction, intellectual property generation, protection, and exploitation, quality, and product cost. We examine the impact of this association on the development process itself and propose guidelines for constructing specific processes associated with one or more goals. Tools and benchmarks for various applications are discussed, along with case studies on the design of different development processes.

Commentary by Dr. Valentin Fuster

1st International Conference on Micro- and Nanosystems

2007;():627-632. doi:10.1115/DETC2007-34570.

Through adaptation of an atomic force microscope, we have developed a peel test at the micro- and nanoscale that can investigate how long, flexible nanotubes, nanowires, nanofibers, proteins, and DNA adhere to various substrates. This novel atomic force microscopy (AFM) peeling mode extends existing AFM "pushing" and "pulling" force spectroscopies by offering practical knowledge about the complex interplay of nonlinear flexure, friction, and adhesion when a long, flexible molecule or nanostructure is peeled off a substrate. Static force peeling spectroscopy of straight multiwalled carbon nanotubes suggests that a significant amount of the total peeling energy is channeled into nanotube flexure; meanwhile, dynamic force spectroscopy offers invaluable information about the dissipative physical processes involved in opening and closing a small "crack" between the nanotube and the substrate.

Commentary by Dr. Valentin Fuster
2007;():633-641. doi:10.1115/DETC2007-34801.

This article presents a Monte Carlo finite element method (MCFEM) for determining the Young's modulus (YM) of polymer nanocomposites (PNC) using nanoindentation (NI) data. The method treats actual NI data as measurements of the local YM of the PNC; it further assesses the effect of the nonhomogeneous dispersion of carbon nanotubes in polymers on the statistical variations observed in experimental NI data. First, the method numerically simulates NI data using a random field and a multiscale homogenization model. Subsequently, the MCFEM applies the spectral representation method to generate a population of samples of local YM values. These local values are then used in conjunction with a stochastic finite element scheme to derive estimates of the YM of the PNC. Statistical processing of the ensemble of FE solutions yields overall YM values that agree well with corresponding results reported in the literature.
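
A minimal sketch of the spectral representation step, in its common Shinozuka-Deodatis form for a 1-D homogeneous Gaussian field, is given below; the spectral density, its parameters, and the mapping to a local Young's modulus are illustrative assumptions, not the paper's calibrated model.

    import numpy as np

    rng = np.random.default_rng(0)

    def spectral_sample(x, kappa_max=10.0, N=256, b=1.0, sigma=1.0):
        """One realization f(x) of a zero-mean field with target spectrum
        S(k) = sigma^2 * b / pi / (1 + (b k)^2), truncated at kappa_max."""
        dk = kappa_max / N
        kappa = (np.arange(N) + 0.5) * dk
        S = sigma**2 * b / np.pi / (1.0 + (b * kappa) ** 2)
        phi = rng.uniform(0.0, 2.0 * np.pi, N)      # random phases
        A = np.sqrt(2.0 * S * dk)                   # mode amplitudes
        return np.sqrt(2.0) * np.sum(
            A[:, None] * np.cos(kappa[:, None] * x[None, :] + phi[:, None]),
            axis=0)

    x = np.linspace(0.0, 20.0, 400)
    E_mean, E_cov = 3.0e9, 0.1          # hypothetical polymer modulus (Pa), CoV
    E_local = E_mean * (1.0 + E_cov * spectral_sample(x))
    print(E_local.mean(), E_local.std())  # roughly 3e9 and 0.1 * 3e9

Each call produces one realization of the local modulus field; feeding an ensemble of such realizations into the FE scheme is what generates the population of overall YM estimates.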

Commentary by Dr. Valentin Fuster
2007;():643-651. doi:10.1115/DETC2007-35136.

In this paper, we report numerical and experimental studies of Joule-heating-induced heat transfer in fabricated T-shaped microfluidic channels. We have developed comprehensive 3D mathematical models describing the temperature development due to Joule heating and its effects on electrokinetic flow. The models consist of a set of governing equations including the Poisson-Boltzmann equation for the electric double layer potential profile, the Laplace equation for the applied electric field, the modified Navier-Stokes equations for the electrokinetic flow field, and the energy equations for the Joule-heating-induced conjugate temperature distributions in both the liquid and the channel walls. Specifically, the Joule number is introduced to characterize Joule heating, accounting for the effects of electric field strength, electrolyte concentration, channel dimension, and the heat transfer coefficient outside the channel surface. Because the thermophysical and electrical properties, including the liquid dielectric constant, viscosity, and electric conductivity, are temperature-dependent, these governing equations are strongly coupled. We therefore used a finite-volume-based CFD method to solve the coupled governing equations numerically. The numerical simulations show that the Joule heating effect is more significant for microfluidic systems with a larger Joule number and/or a lower substrate thermal conductivity. It is found that the presence of Joule heating makes the electroosmotic flow deviate from its normal "plug-like" profile and causes different mixing characteristics. The T-shaped microfluidic channels were fabricated using rapid prototyping techniques, including photolithography for the master fabrication and soft lithography for the channel replication. A rhodamine B based thermometry technique was used for direct "in-channel" measurement of the liquid temperature distributions in microfluidic channels fabricated with PDMS/PDMS and glass/PDMS substrates. The experimental results were compared with the numerical simulations, and reasonable agreement was found.
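
For reference, these governing equations are commonly written in the following form; this is a sketch in standard notation (EDL potential psi, applied potential phi, velocity u, temperature T, net charge density rho_e, liquid conductivity sigma), not necessarily the authors' exact formulation:

    \nabla^2 \psi = \frac{2 z e n_0}{\varepsilon}\,\sinh\!\left(\frac{z e \psi}{k_B T}\right),
    \qquad \nabla^2 \phi = 0,

    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
      = -\nabla p + \nabla\cdot(\mu\,\nabla\mathbf{u}) + \rho_e\,\mathbf{E},

    \rho c_p\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right)
      = \nabla\cdot(k\,\nabla T) + \sigma\,|\mathbf{E}|^2,

where the last term is the Joule heating source. The temperature dependence of mu, epsilon, and sigma is what couples the momentum and electric-field equations back to the energy equation.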

Commentary by Dr. Valentin Fuster
2007;():653-658. doi:10.1115/DETC2007-35442.

We present a method to fabricate a new class of hybrid composite structures based on highly organized multiwalled carbon nanotube (MWNT) and singlewalled carbon nanotube (SWNT) network architectures in a polydimethylsiloxane (PDMS) matrix, for prototyping high-performance flexible systems suitable for many daily-use applications. To build one- to three-dimensional highly organized network architectures of carbon nanotubes (both MWNT and SWNT) at the macro-, micro-, and nanoscale, we used various nanotube assembly processes, such as selective growth of carbon nanotubes by chemical vapor deposition (CVD) and self-assembly of nanotubes on patterned trenches through solution evaporation with dip coating. These vertically or horizontally aligned and assembled nanotube architectures and networks are then transferred into the PDMS matrix using a casting process, creating highly organized carbon nanotube based flexible composite structures. The PDMS matrix exhibits excellent conformal filling of the dense nanotube network, giving rise to extremely flexible conducting structures with unique electromechanical properties. We demonstrate the composite's robustness under large stress conditions, under which it retains its conducting nature. We also demonstrate that these structures can be used directly as flexible field-emission devices; our devices show some of the best field enhancement factors and turn-on electric fields reported to date.

Commentary by Dr. Valentin Fuster
2007;():659-667. doi:10.1115/DETC2007-35451.

There is growing evidence of the importance of mechanical deformation in various facets of cell functioning. This calls for a proper understanding of the cell's characteristics as a mechanical system under different physiological and mechanical loading conditions. Many researchers use atomic force microscopy (AFM) indentation and the Hertz contact model for elastic material property identification under shallow indentation. For larger indentations, many of the Hertz assumptions are not satisfied, and the Hertz model is not directly useful for characterizing nonlinear elastic or inelastic material properties. We have used an exponential hyperelastic material in FE simulations of the AFM indentation tests, and a parameter identification approach is developed for determining hyperelastic material properties from the simulated data. We collected AFM indentation data on agarose gel and developed a simple algorithm for contact point detection. The contact point correction improves the prediction of the elastic modulus over visual contact point identification. The modulus of 1% agarose gel was found to be about 15 kPa using the proposed correction, with mild but non-trivial hardening at deeper indentation. The experimental data are compared with the results of the FE simulations, showing that, over the hardening portion of the indentation response, the proposed parameter identification approach successfully captures the experimental data.
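
For context, the Hertz model mentioned above predicts, for a rigid spherical tip of radius R indenting an elastic half-space to depth delta, the force

    F = \frac{4}{3}\,E^{*}\sqrt{R}\,\delta^{3/2},
    \qquad \frac{1}{E^{*}} = \frac{1-\nu_{1}^{2}}{E_{1}} + \frac{1-\nu_{2}^{2}}{E_{2}},

where E* is the reduced modulus of tip and sample. The small-strain, linear-elastic, shallow-contact assumptions behind this relation are precisely those that fail at the deeper indentations the paper addresses with the hyperelastic FE model.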

Commentary by Dr. Valentin Fuster
2007;():669-677. doi:10.1115/DETC2007-34175.

As the ability to manipulate materials and components at the nanoscale continues to grow, it will become increasingly critical to understand the dynamic interactions that occur among multiple components. For example, the dynamic interactions among proteins (i.e., nanoscale molecular machines) lead to complex, emergent behaviors such as photosynthesis, self-repair, and cell division. Recently, the research group at Sandia National Labs and The Center for Integrated Nanotechnologies (CINT), headed by George Bachand, has developed a molecular transport system capable of transporting and manipulating a wide range of nanoscale components. This system is based on the kinesin motor molecule and cytoskeletal filament microtubules (MTs), in which the kinesins are mounted to a substrate in an inverted fashion and are capable of binding and transporting the MTs across a surface as molecular shuttles. In the presence of ATP, the kinesins are capable of generating ∼40 pN·nm of work and transporting MTs along the substrate at velocities of ∼1 μm/s. The MTs may also serve as a transport platform for various inorganic and biological nanoparticles. During transport, cargo is transferred, via elastic collisions, from one MT to another, or two MTs may carry a single cargo. Bending of the MTs and various other elasto-dynamic phenomena, such as particle ejection and MT sticking, are observed via fluorescence microscopy. The interactions observed by the Bachand team are not unlike the interactions of macroscale devices. The kinesins propel the MTs via a hand-over-hand, ratchet-like motion driven by ATP hydrolysis. As the kinesin motor domains come into contact with and bind the MT, it is natural to view this action within the framework of instantly applied constraints, in a manner similar to macroscopic devices coming into and out of constrained interaction. The hypothesis of our work is that the observed elasto-dynamic phenomena can be modeled with the tools of multibody dynamics. The modeling perspective is based on the lead author's hybrid parameter multiple body dynamics methodology, a variational approach based on the projection methods of Gibbs-Appell. Constrained interaction through contact and impact is modeled with the idea of instantly applied non-holonomic constraints, where the interactions on the boundaries and in the domain of elastic continua are modeled via projections of the d'Alembert force deficit along conjugate directions generated via so-called pseudo-generalized speeds. In this paper we present the motivation for our approach, the underlying modeling theory, and current results of our efforts to understand the kinesin/MT shuttle system interaction.

Commentary by Dr. Valentin Fuster
2007;():679-682. doi:10.1115/DETC2007-34321.

A simple micromolding method to fabricate PLGA microstructures made up of microchannels with circular cross section is presented. The thermal reflow technique is adopted to fabricate the semi-cylindrical photoresist master. The PLGA solution is prepared by dissolving PLGA polymer in acetone, and the solution is then cast onto the semi-cylindrical photoresist master to produce the PLGA microstructures. Two PLGA membranes are bonded together to form microstructures containing circular microchannels. A microvessel scaffold for tissue engineering is fabricated using the proposed method, and the roundness of the microchannels is verified.

Commentary by Dr. Valentin Fuster
2007;():683-686. doi:10.1115/DETC2007-34322.

A novel and very simple chloroplast-mimicking photovoltaic scheme, in which water is photolyzed by a new photocatalyst fabricated by depositing a thin film of TiO2 on an array of carbon nanotubes (CNTs), has been demonstrated. Multiple reflections within the photocatalyst extend the optical response from the ultraviolet range to the full visible range. Hydrogen ions at various concentrations are separated by an artificial thylakoid membrane, resulting in a transmembrane chemiosmotic potential and generating ion-diffusion-induced electricity. Experimental results demonstrate that the proposed simple chloroplast-mimicking photovoltaic device can produce a photocurrent directly from visible light.

Commentary by Dr. Valentin Fuster
2007;():687-691. doi:10.1115/DETC2007-34390.

One of the most important areas of research in materials science is the manufacture of fine particles. Small particles have an improved dissolution rate and are therefore more easily absorbed into biological in-vivo tissue or the skin surface layer. This study investigates the effects of temperature and pressure on the particle size of green tea powder manufactured via the rapid expansion of supercritical solution method. The dissolution rate of the green tea extract in the supercritical carbon dioxide solution is enhanced by the addition of a modest quantity of ethyl alcohol. The size of the particles produced under different temperature and pressure conditions is analyzed using a laser particle size analyzer. The results show that under constant temperature conditions, a higher pressure decreases the particle size, while at constant pressure, both the particle size and the volume of powder produced decrease as the temperature is increased. Overall, a minimum particle size of 58 nm is obtained at a pressure of 2000 psi and a temperature of 45°C.

Commentary by Dr. Valentin Fuster
2007;():693-717. doi:10.1115/DETC2007-34454.

Microcantilevers with embedded piezoresistors have been applied to in-situ surface stress measurement of biochemical reactions, and a parallel microcantilever design using an active cantilever for biosensing and a reference cantilever for noise cancellation has previously been proposed. This paper shows that the measurement is sensitive to the temperature effect induced by the piezoresistor: the temperature difference between the two cantilevers can reach 40°C at 10 V operation because of their different thermal capacitances. For a microcantilever of 125×65×0.75 μm, the offset voltage of the parallel microcantilever is 1.65 mV and the temperature drift is 0.01 mV/°C. An improved parallel microcantilever design is developed using a stripe pattern on the immobilized layer and a signal conditioning circuit for temperature compensation in biosensors. Analyses and experiments show that the performance of a CMOS sensor chip can be significantly improved.

Commentary by Dr. Valentin Fuster
2007;():719-724. doi:10.1115/DETC2007-34866.

A model for computing the force from a gas film squeezed between parallel plates was recently developed using Direct Simulation Monte Carlo simulations in conjunction with the classical Reynolds equation. This paper compares predictions from that model with experimental data. The experimental validation used an almost-rectangular MEMS oscillating plate with piezoelectric base excitation. The velocities of the suspended plate and of the substrate were measured with a laser Doppler vibrometer and a microscope. Experimental modal analysis yielded the damping ratios of twelve test structures at several gas pressures. Small perforation holes in the plates did not alter the squeeze-film damping substantially. These experimental data suggest that the model predicts squeeze-film damping forces accurately, and the comparison indicates that the structures have a tangential-velocity accommodation coefficient close to unity.

Commentary by Dr. Valentin Fuster
2007;():725-730. doi:10.1115/DETC2007-35018.

The tapping-mode atomic force microscope (T-AFM) can be properly described by a sinusoidal excitation of its base and a nonlinear potential interaction with the sample. The cantilever may thus exhibit chaotic behavior, which degrades the quality of the measured sample topography. In this paper a nonlinear delayed feedback control is proposed to control chaos in a single-mode approximation of a T-AFM system. Assuming uncertainty in the model parameters, the first-order unstable periodic orbits (UPOs) of the system are stabilized using sliding nonlinear delayed feedback control. The effectiveness of the presented method is verified numerically, and the results show the high performance of the controller.
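
The delayed feedback idea can be illustrated with a Pyragas-type control term u(t) = K [x(t - tau) - x(t)] applied to a generic chaotic oscillator; the sketch below uses an Ueda-type Duffing model and illustrative parameters, not the paper's single-mode T-AFM model or its sliding-mode law.

    import numpy as np

    # Pyragas-type delayed feedback on an Ueda oscillator:
    #   x'' + c x' + beta x^3 = Gamma cos(omega t) + K (x(t - tau) - x(t))
    dt, T = 1e-3, 200.0
    omega = 1.0
    tau = 2.0 * np.pi / omega          # delay = forcing period targets the period-1 UPO
    K = 0.5                            # illustrative gain; must be tuned in practice
    c, beta, Gamma = 0.05, 1.0, 7.5    # classic chaotic parameter set

    n_delay = int(round(tau / dt))
    hist = np.zeros(n_delay)           # circular buffer of past x values
    x, v, u = 0.1, 0.0, 0.0
    for i in range(int(T / dt)):
        x_delayed = hist[i % n_delay]                 # x from n_delay steps back
        u = K * (x_delayed - x) if i >= n_delay else 0.0
        a = -c * v - beta * x**3 + Gamma * np.cos(omega * i * dt) + u
        hist[i % n_delay] = x          # store current x before stepping
        x, v = x + dt * v, v + dt * a  # explicit Euler, fine for a sketch
    print(abs(u))   # shrinks toward zero if a period-tau orbit is stabilized

A key property of this control law is that it is non-invasive: on the stabilized periodic orbit, x(t - tau) = x(t), so the control effort vanishes.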

Commentary by Dr. Valentin Fuster
2007;():731-740. doi:10.1115/DETC2007-35066.

Atomic Force Microscopy (AFM) is a major imaging tool used to map surfaces down to atomic resolution. However, scanning rates in AFM are still low, and attempts to increase the speed usually result in low-resolution images. To address this deficiency, we propose a novel model that treats the scanning element as a moving continuous microcantilever undergoing combined spatial motion in both the horizontal and vertical directions. This research investigates the effect of increasing the scan speed on the dynamics and stability of a vibrating microcantilever governed by a specified control law. We reduce the spatio-temporal model to a rigid-body two-degree-of-freedom system connected to a linear digital controller. Results demonstrate that the digital controller stabilizes the nonlinear system and enables a smooth transition from one side of the sample to the other, as needed for the scanning process.

Commentary by Dr. Valentin Fuster
2007;():741-748. doi:10.1115/DETC2007-35115.

A 6-DOF Stewart platform driven by piezoelectric actuators was designed for applications requiring nanoscale positioning. By using flexural joints and an error compensation model based on a minimum-points-3-axes measurement method, the manufacturing and assembly errors can be offset. The focus of this article is the design of a feedforward controller that reduces the nonlinear hysteresis of the piezoelectric actuators. A dynamic Preisach model is developed to improve the accuracy of the hysteresis model, and its inverse is used as the feedforward controller. Such a control scheme is cost-effective because it does not require expensive sensors for feedback control. Experimental data show that the platform can achieve nanoscale positioning.
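
As background for the hysteresis compensation, the sketch below implements a static discrete Preisach model (a weighted superposition of relay hysterons); the hysteron grid and uniform weights are hypothetical, and the paper's dynamic Preisach model additionally makes the weight surface rate-dependent.

    import numpy as np

    class Preisach:
        """Static discrete Preisach model: output is a weighted sum of
        relays r_{ab} with switch-up threshold a, switch-down threshold b."""
        def __init__(self, n=20, lo=0.0, hi=10.0):
            t = np.linspace(lo, hi, n)
            self.ab = [(a, b) for a in t for b in t if a >= b]   # half-plane a >= b
            self.w = np.full(len(self.ab), 1.0 / len(self.ab))   # uniform weights
            self.state = -np.ones(len(self.ab))                  # all relays down

        def step(self, v):
            """Update every relay for input v; return the summed output."""
            for k, (a, b) in enumerate(self.ab):
                if v >= a:
                    self.state[k] = 1.0
                elif v <= b:
                    self.state[k] = -1.0
                # otherwise the relay keeps its previous state (memory)
            return float(self.w @ self.state)

    p = Preisach()
    up = [p.step(v) for v in np.linspace(0, 10, 50)]    # ascending branch
    down = [p.step(v) for v in np.linspace(10, 0, 50)]  # descending branch
    print(up[25], down[24])   # differ at the same input: a hysteresis loop

Feedforward compensation then amounts to inverting this input-output map, i.e., finding the voltage whose Preisach output equals the desired displacement given the stored relay states.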

Commentary by Dr. Valentin Fuster
2007;():749-757. doi:10.1115/DETC2007-35400.

This paper presents research on the development of microelectromechanical systems (MEMS) micromirror arrays with precise and accurate positioning enabled by the use of closed-loop control techniques. The MEMS mirror arrays are one degree-of-freedom, electrostatically actuated, and exhibit nonlinear actuation profiles that include pull-in and hysteresis. The device performance is subject to parametric uncertainties from the fabrication process. Preliminary proportional-integral (PI) controllers are studied in simulation on the nonlinear system to explore issues in the control development. Two different model linearization methods are presented to examine how best to approximate the nonlinear behavior with linear models. The effects of parametric uncertainties on the open-loop plant response are considered. These studies show that the open-loop response is very sensitive to model uncertainties, whereas closed-loop control achieves input tracking despite uncertainties in the plant. Work in progress includes the development of an optical test bed for experimental validation of the results presented in this paper.

Commentary by Dr. Valentin Fuster
2007;():759-768. doi:10.1115/DETC2007-35552.

We present an analysis of the electromechanical behavior and stability of a capacitive Micro Electro Mechanical Systems (MEMS) device with a non-monotonic stiffness-deflection dependence. As an example, we consider a flexible, initially curved, double-clamped micro beam actuated by a distributed electrostatic force. Since the system exhibits both mechanical snap-through buckling and electrostatic pull-in instability, the equilibrium curve has two bifurcation points, implying the existence of multiple equilibrium configurations. The multistability phenomenon described in the present work is a result of the interaction between the mechanical and electrostatic nonlinearities of the system, and differs from electrostatic pull-in based bistability and from the mechanical bistability associated with snap-through buckling. The governing equations of the geometrically nonlinear curved Euler-Bernoulli beam are formulated in the framework of the shallow arch approximation. The actuating force is calculated using a second-order perturbation solution of the Laplace equation for the electric potential. A coupled electromechanical model is built by the Rayleigh-Ritz method with the linear undamped eigenmodes of a straight beam as base functions. After verification of the model results, we analyze the influence of the initial geometry of the beam on the location (in terms of actuation voltage and deflection) of the critical points on the bifurcation diagram. It was found that for snap-through to take place, the initial elevation of the beam should be larger than a certain value, whereas the existence of electrostatic pull-in instability is unconditional. In addition, the stable relative deflection of a curved beam is larger than that of an initially straight beam. Based on the model results, we present an example of a multistable actuator design. Devices of various configurations were fabricated from single-crystal silicon using deep reactive ion etching, and the existence of multiple stable states and multiple instability points was demonstrated experimentally.

Commentary by Dr. Valentin Fuster
2007;():769-774. doi:10.1115/DETC2007-35559.

We present a novel scheme for a robust, parametrically excited micro gyroscope that reduces the sensitivity loss due to mismatch between the drive and sense natural frequencies, a common problem in micro gyroscopes based on harmonic oscillators. We demonstrate experimentally that, using a parametric resonance based actuator, the drive-mode signal has rich dynamic behavior with a large response over a large bandwidth. In this way the system is able to induce oscillations in the sense mode through Coriolis force coupling, despite a clear disparity in their fundamental frequencies. We thus propose a micro gyroscope that is less sensitive to parameter variations due to its inherent dynamical properties.

Topics: Resonance
Commentary by Dr. Valentin Fuster
2007;():775-784. doi:10.1115/DETC2007-35114.

Analytical modeling of selectively compliant mechanisms for quantifying nano-scale parasitic motion is presented. Flexure-based compliant mechanisms are capable of meeting the demanding requirements of partially constrained ultraprecision motion systems. However, the geometric errors induced by manufacturing tolerances can limit the precision capability. Understanding parasitic motion at the nano-scale necessitates a 3-D model even for mechanisms that are designed to be planar. A spatial kinematics based kinetostatic model is used here. This approach systematically accounts for the geometric errors and enables estimation of the inherently spatial parasitic motion. Using insights from screw theory, the parasitic motion is classified into intrinsic mechanism errors and errors that can be minimized by calibration procedures. A metric that quantifies the intrinsic parasitic motion and characterizes the precision capability of the mechanism is identified. Monte Carlo simulation is used to propagate the variance of the geometric errors through the model to determine the statistical moments of the chosen metric. To illustrate the approach, the modeling and analysis are applied to a classical four-bar mechanism with flexure joints. The model is further used to investigate the key system parameters that influence the intrinsic parasitic motion in the mechanism. The simulation results indicate a more than 50% improvement in the precision capability of the four-bar mechanism through improved design of the flexure joints, without changing the manufacturing tolerance limits.
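
The variance-propagation step can be sketched as follows; the scalar metric function and the tolerance standard deviations below are hypothetical stand-ins for the paper's kinetostatic model of the flexure-jointed four-bar and its geometric-error distributions.

    import numpy as np

    rng = np.random.default_rng(1)

    def parasitic_metric(e):
        """Hypothetical stand-in for the kinetostatic model: maps joint
        geometric errors to a scalar parasitic-motion metric."""
        return 1e-3 * e[0] + 5e-4 * e[1] + 2e-2 * e[0] * e[1]

    n = 50_000
    # zero-mean geometric errors with tolerance-derived standard deviations
    errors = rng.normal(0.0, [1e-4, 2e-4], size=(n, 2))
    m = np.array([parasitic_metric(e) for e in errors])
    print(m.mean(), m.std())   # statistical moments of the chosen metric

Comparing these moments across candidate flexure designs (i.e., across different metric functions) is what supports the reported precision-improvement claim.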

Commentary by Dr. Valentin Fuster
2007;():785-791. doi:10.1115/DETC2007-35334.

Optical traps have been used in a multitude of applications in biology, such as stretching DNA molecules and bacterial tails, due to their ability to study molecules in solution. In addition to being capable of applying and sensing forces, optical traps also have the ability to accurately apply and sense torques. Birefringent particles experience a torque when trapped in elliptically polarized light which has resulted in rotation rates over 400 Hz. By measuring the frequency content of the exiting beam, the rotation rates can be tracked and controlled. Here we describe an optical trap with feedback torque control that maintains rotational rates in the presence of increasing drag. As a result, this research has the potential to advance the understanding of rotary motor proteins such as F1ATPase.

Commentary by Dr. Valentin Fuster
2007;():793-799. doi:10.1115/DETC2007-35348.

Atomic force microscope (AFM) based anodization nanolithography on semiconducting layers is a useful tool for nanoscale fabrication. A custom AFM patterning technique has been created that couples CAD with the lithographic capabilities of the AFM. Designed nanostructures to be deposited on a silicon substrate are rendered as a three-dimensional model using CAD. AFM based anodization nanolithography is then used to replicate the features at the nanoscale using automated voltage bias and humidity modulation as prescribed by the model and dictated by the system. The work presented outlines the advantages and limitations when three-dimensional modeling is combined with nanoscale lithography using an AFM.

Commentary by Dr. Valentin Fuster
2007;():801-807. doi:10.1115/DETC2007-35430.

In AFM-based single molecule force spectroscopy, it is assumed that the pulling angle is negligible and that the force applied to the molecule is equivalent to the force measured by the instrument. Although this assumption may hold for flexible, compact molecules, studies have shown that it may not be appropriate for fairly rigid molecules, where measured forces can be a fraction of the actual values experienced by the molecule. Because the pulling geometry can substantially influence the values measured by the AFM, we investigate a method to minimize the pulling angle prior to conducting a pulling experiment. The method presented herein uses small circular movements to locate the molecule’s substrate attachment site and reposition the cantilever. By using data gathered from a previous study, we were able to repeatedly align a molecule’s attachment sites via simulation of the program, thereby demonstrating the effectiveness of the alignment method.

Commentary by Dr. Valentin Fuster
2007;():809-815. doi:10.1115/DETC2007-35756.

The use of a flared tip and bi-directional servo control in some recent atomic force microscopes (AFM) has made it possible for these advanced AFMs to image structures of general shapes with undercuts and reentrant surfaces. Since AFM images are distorted representations of sample surfaces, due to the dilation (a.k.a. convolution) produced by the finite size of the probe, it is necessary to obtain the tip shape in order to correct such tip distortion. This paper presents an approach that can, for the first time, estimate a general three-dimensional tip shape from its scanned image in these AFMs. It extends an existing blind tip estimation method to the dexel representation, a computer representation that can represent general 3D shapes. As such, it can estimate general tip shapes, including undercuts and reentrant features.
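
The dilation relating surface, tip, and image can be written in one dimension as image[i] = max over j of (surface[i + j - c] + tip[j]); the sketch below implements it with an invented ridge and a parabolic tip to show the broadening that blind tip estimation must undo (the paper itself works with general 3-D dexel shapes).

    import numpy as np

    def dilate(surface, tip):
        """1-D grayscale dilation: the image at i is the highest contact
        the tip apex can reach; tip heights are relative to the apex."""
        n, m = len(surface), len(tip)
        half = m // 2
        img = np.full(n, -np.inf)
        for i in range(n):
            for j in range(m):
                k = i + j - half
                if 0 <= k < n:
                    img[i] = max(img[i], surface[k] + tip[j])
        return img

    surface = np.zeros(50); surface[20:23] = 5.0        # a narrow ridge
    tip = -0.5 * (np.arange(-5, 6) ** 2).astype(float)  # parabolic tip, apex at 0
    image = dilate(surface, tip)
    print(image[15:28].round(1))   # ridge appears broadened by the tip shape

Blind tip estimation exploits the fact that the image itself bounds the tip shape: the sharpest features present in any image constrain how blunt the tip can possibly be.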

Commentary by Dr. Valentin Fuster
2007;():817-822. doi:10.1115/DETC2007-34676.

A major drawback of magnetic linear micro actuators is the vertical attractive force between stator and traveler. In the case of a micro step motor, this force is typically one order of magnitude greater than the driving force itself. To compensate for this undesired vertical force, and thus take full advantage of the available driving force, a magnetic levitation system was developed and implemented as a guide. The electromagnetic field generated by the stator coils interacts with the field of permanent magnets positioned in the traveler, elevating the traveler. Since repulsive magnetic levitation systems are inherently unstable, a tribological guide was integrated on both sides of the magnetic levitation system. During motion, the combination of stationary coils in the stator and moving permanent magnets in the traveler lifts the traveler, while the lateral tribological guide prevents the traveler from shifting sideways. Initial investigations proved the feasibility of this magnetic levitation concept (1). A complete linear micro step motor system with a magnetic levitation guide consists of the micro step motor itself, the magnetic micro levitation system (including the lateral guides), and a capacitive air gap measurement system, the latter detecting the size of the air gap between the stator and traveler of the micro actuator. Assembling the three components results in a linear micro actuator system with adjustable air gap. To achieve optimal working conditions of the linear micro step motor, the magnetic levitation system was designed for a nominal air gap of 8 μm at the micro step motor. While earlier work proved the feasibility of such a guide, it also indicated (i) that the levitation system has to be capable of correcting a pitch motion of the traveler and (ii) that as high a magnetic levitation force as possible is desirable. To address the first issue, the magnetic levitation system received four coils arranged in a square along the axis of motion of the micro step motor; this way, both pitch and roll may be controlled. To resolve the second issue, the number of coil layers was increased from two to four. The technology for such a four-layer coil is quite challenging, particularly since every effort has to be made to minimize its building height. These challenges were resolved by creating a coil system in which the lateral insulation between conductors consists of SU-8™ (a photosensitive epoxy by Micro Resist Technology), while the vertical insulation layers were formed by a thin, stress-compensated Si3N4 film. In this way, a very compact coil with a high conductor-to-insulator ratio, and thus a high current-carrying capability, could be realized. Due to the thin Si3N4 insulation, it also features excellent thermal conductivity.

Commentary by Dr. Valentin Fuster
2007;():823-828. doi:10.1115/DETC2007-34677.

When developing linear micro motors, the synchronous drive scheme is well suited, since it offers relatively high driving forces while keeping design, fabrication, and assembly relatively simple. After the feasibility of applying the synchronous drive scheme in a micro motor was proven, a smaller version of the motor was developed to investigate the drive scheme's miniaturization potential. Apart from scaling down the motor dimensions, further optimizations were applied using the results of FEM simulations. The micro motor was fabricated, assembled, and successfully tested. The results demonstrate that the synchronous linear micro motor was successfully scaled down and indicate that even further miniaturization is feasible.

Commentary by Dr. Valentin Fuster
2007;():829-834. doi:10.1115/DETC2007-34982.

Many applications in microelectromechanical systems require physical actuation for implementation or operation. On-chip sensors would allow control of these actuators. This paper presents experimental evidence showing that a certain class of thermal actuators can be used simultaneously as an actuator and a sensor to control the actuator’s force or displacement output. By measuring the current and voltage supplied to the actuator, a one-to-one correspondence is found between a given voltage and current and a measured displacement or force. This truly integrated sensor/actuator combination will lead to efficient, on-chip control of motion for applications including microsurgery, biological cell handling, and optic positioning.

Commentary by Dr. Valentin Fuster
2007;():835-840. doi:10.1115/DETC2007-35344.

We present a method to detect the non-uniform elastic property changes of sensor coatings on microcantilever arrays due to radiation, analyte binding or adsorption. The method uses measurements of the resonance frequencies of higher order flexural modes to identify with high sensitivity the location and magnitude of non-uniform elasticity changes in a microcantilever coating. We validate theoretical predictions and demonstrate the method by monitoring the time evolution of resonance frequencies of different flexural modes of microcantilevers functionalized with a small drop of a photosensitive polymer as it is exposed to ultraviolet (UV) radiation. The method is particularly well-suited for measuring quantitatively the time varying elastic properties of thin films or biological materials attached to microcantilevers.

Commentary by Dr. Valentin Fuster
2007;():841-844. doi:10.1115/DETC2007-35878.

We present a normally open, piezoelectrically actuated micro valve based on a novel combination of micro and fine machining technology. The new design allows a wide controllable range for high flow at a high pressure difference between inlet and outlet. This promising combination of micromachining and a fine-machined piezoelectric actuator makes it possible to control the flow precisely in steps, and the piezoelectric actuator allows continuous control of the gas flow at any stage of valve operation. In our previous design, large steps caused by friction between screw threads limited the controllability [1]. Additionally, the power consumption is low, as the piezoactuator needs power only when stepping to adjust the flow.

Commentary by Dr. Valentin Fuster
2007;():845-850. doi:10.1115/DETC2007-34206.

The TIM (Thermomechanical In-plane Microactuator) is a thermal actuator that offers high output force at low input voltage, in a design that can easily be modified to match the force and displacement requirements of various applications. The purpose of this paper is to examine factors that affect the steady-state power requirements of a TIM. Reducing the power requirements of the TIM is critical for its use in some systems, such as autonomous microsystems. The influence of several geometric modifications and one environmental change on energy loss and actuator efficiency was investigated. The steady-state deflection of five different TIM designs was measured at various levels of input power in both air and vacuum. The power reduction for the most efficient design in air varied with deflection, from about 40 percent at 4 μm to 20 percent at 8 μm. The most significant reduction in power was observed for devices tested under vacuum, where heat loss by conduction from the legs through the air to the substrate was minimal at the low pressure.

Commentary by Dr. Valentin Fuster
2007;():851-854. doi:10.1115/DETC2007-35284.

In this paper, an approach to condensing a micro droplet of a target volume and maintaining that volume is presented. The approach is demonstrated experimentally by means of a controlled temperature sequence: a Peltier element was used to control the temperature of a copper surface. Picoliter-order droplets were observed, and their volumes were estimated by video image processing. From a heat balance analysis, the viability of the condensation strategy and guiding principles for the design of a capillary-based micromanipulator are discussed.

Commentary by Dr. Valentin Fuster
2007;():855-863. doi:10.1115/DETC2007-35587.

Due to their high surface-to-volume ratio, microsystems are characterized by large superficial forces, which become dominant with respect to inertial ones. Superficial interactions influence fabrication processes as well as the working conditions of microsystems, and make most of the techniques used at the macrolevel inadequate at the microlevel. In particular, traditional manipulation techniques are often not suitable for the fabrication of hybrid microsystems, and the development of new handling techniques for microcomponents is strongly required. This has motivated a large number of recent studies addressing the possibility of controlling and exploiting superficial forces in order to manipulate microobjects. In this context, this paper focuses on a new handling system based on capillary force; in particular, it concerns first investigations on the use of smart materials for the realization of an innovative manipulation system. A gripper with variable curvature has been studied theoretically and a first prototype has been developed. It has demonstrated good ability in performing accurate pick-and-place operations on components of millimetric size. The results obtained with this prototype encouraged the development of a smaller prototype able to manipulate objects of micrometric size. Owing to the reduced dimensions of the prototype, smart materials were considered suitable for actuating such a gripper. Therefore, different materials and configurations were conceived, and a novel configuration based on electroactive polymers (EAP) was studied. A feasibility study was carried out to evaluate their functionality and performance as actuators, and the results are presented.

Topics: Smart materials
Commentary by Dr. Valentin Fuster
2007;():865-870. doi:10.1115/DETC2007-35599.

Manipulation of micron-sized components is essential for microassembly. Understanding the dominant adhesive forces at the micro scale, as well as devising techniques to control them, is needed in order to design a proper micromanipulation apparatus. A liquid bridge based micromanipulation scheme is presented in this paper. Adhesive forces such as capillary and surface tension forces are prominent at the micro scale due to scaling laws and provide sufficient force for picking up an object. The main problem resides in the systematic release of the object from the gripper surface. The focus of this paper is a feasibility study of contact angle manipulation by the electrowetting method for prompt release of an object. Preliminary results from numerical solution of the Laplace-Young equation and from CFD analysis show that, as the contact angle is increased, a critical contact angle is reached beyond which the Laplace-Young equation has no feasible solution and the CFD analysis yields an unstable solution. This result demonstrates that contact angle manipulation is capable of breaking a liquid bridge and provides a feasible release mechanism for microgripping.
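
For reference, the Laplace-Young equation referred to above relates the pressure jump across the bridge meniscus to the surface tension gamma and the two principal radii of curvature, and electrowetting modulates the contact angle via the applied voltage according to the Lippmann-Young relation, written here in its common form (d and epsilon_r are the thickness and relative permittivity of the insulating layer; these are general relations, not details taken from the paper):

    \Delta p = \gamma\left(\frac{1}{r_{1}} + \frac{1}{r_{2}}\right),
    \qquad \cos\theta(V) = \cos\theta_{0} + \frac{\varepsilon_{0}\,\varepsilon_{r}}{2\,\gamma\,d}\,V^{2}.

Driving the contact angle past the critical value at which no meniscus shape satisfies the first equation is the release condition the paper exploits.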

Commentary by Dr. Valentin Fuster
2007;():871-880. doi:10.1115/DETC2007-35784.

This paper introduces the concept of a novel compliant micromanipulator that is capable of manipulating irregularly shaped micro-scale objects by positively clasping the object. The controlled clasp capability of the micromanipulator can be useful for manipulating a wide range of micro-scale objects and biological specimens, especially those with irregular shapes and/or floating in a liquid medium, where traditional tweezers or grippers are cumbersome or unsuitable. The monolithic structure of the micromanipulator comprises two distinct parts: a body and a clasp. The body has a topology that magnifies a single rectilinear input actuation into two larger displacements at the input points of the clasp mechanism. The clasp mechanism comprises rigid links connected by rotary joints in the form of low-resistance serpentine flexures. The mechanism "clasps" the target object by enveloping it with a continuous mechanical boundary that eventually closes inwards and "locks" the object within the boundary. The paper presents a systematic design of the compliant micromanipulator and the analytical model governing the behavior of the clasp, using topology optimization techniques and energy methods.

Commentary by Dr. Valentin Fuster
2007;():881-888. doi:10.1115/DETC2007-34738.

Gears, bearings, springs, and fasteners are some typical machine elements used to build machines and mechanical systems. These elements perform one or more functions such as transmitting motion, supporting moving members, storing energy, and joining components. The continuous miniaturization and the need for mechanical systems having sizes of a few nanometers have led to new but challenging issues in the design and manufacturing of these machine elements. Carbon nanotubes have the potential to be used as machine elements in multiple roles for building mechanical systems at the nanoscale. This paper explores the use of single-walled and multi-walled carbon nanotubes in building a nano-mechanical system such as a gearbox. The paper presents a conceptual design of a gearbox made completely of carbon nanotubes and discusses its feasibility and realizability. The paper also discusses future directions of research in building nanomachines and nano-mechanical systems using carbon nanotube based machine elements.

Commentary by Dr. Valentin Fuster
2007;():889-896. doi:10.1115/DETC2007-34778.

The development of smaller and smaller micro components and systems is an ongoing process, and the effects that accompany it have to be investigated. A polycrystalline material consists of grains with different orientations; therefore, a micromechanical model of a polycrystalline material for a Finite Element analysis should consider the grain structure. Studies show that with decreasing size of a micro component, its grain structure and material anisotropy gain more and more influence on its stress, strain, and flow of forces. In order to ensure reliable dimensioning of micro components, the influence of the grain structure and the material's anisotropy upon the stress and stress distribution has to be investigated. For this purpose, experimental work for characterizing material properties is supplemented by numerical analyses. On the one hand, these analyses allow examining specific influences on the mechanical stress; on the other hand, micromechanical modeling has the potential to increase the understanding of material behavior. Methods for modeling two- and three-dimensional micro components with complex grain structures, including defects such as pores, are presented and compared. Considering the effects associated with the grain structure contributes to reliable dimensioning of micro components with distinct grain structures.

Commentary by Dr. Valentin Fuster
2007;():897-904. doi:10.1115/DETC2007-34861.

This paper presents an initial comparison of two approaches to energy minimization of protein molecules, namely the Molecular Dynamics Simulation and the Kineto-Static Compliance Method. Both methods are well established and are promising contenders for the seemingly insurmountable task of global optimization over the potential energy terrain of protein molecules. The Molecular Dynamics Simulation takes the form of constrained multibody dynamics of interconnected rigid bodies, as implemented at the Virtual Reality Application Center at Iowa State University. The Kineto-Static Compliance Method is implemented in the Protofold computer package developed in the Mechanical Engineering Department at the University of Connecticut. The simulation results of both methods are compared through the trajectory of potential energy and the Root Mean Square Deviation (RMSD) of the alpha carbons, as well as through visual observations. The preliminary results indicate that both techniques are very effective in converging the protein structure to a state with significantly less potential energy. At present, however, the converged solutions for the two methods differ from each other and are very likely different from the global minimum potential energy state.

Commentary by Dr. Valentin Fuster
2007;():905-912. doi:10.1115/DETC2007-35209.

Focused Ion Beam (FIB) milling has been used widely for sample preparation in materials research and nanoscale device fabrication. The introduction of FIB systems to biological sample preparation, especially for frozen samples, offers the potential to produce delicate submicron geometries on the samples, as well as the potential for full digital control. In this paper, we first study the ion interactions with water and different cryoprotectants, and the sputtering yields under different conditions are estimated as milling rates. A geometric simulation model is also proposed, which can be used as a process planning tool to perform cryo-sectioning by FIB. Finally, discussions and suggestions for future work are presented.

Commentary by Dr. Valentin Fuster
2007;():913-920. doi:10.1115/DETC2007-35330.

In previous work, a periodic surface model for computer-aided nano-design (CAND) was developed. This implicit surface model can construct Euclidean and hyperbolic nano geometries parametrically and represent morphologies of particle aggregates and polymers. In this paper, we study the characteristics of degree elevation and reduction based on a generalized periodic surface model. Methods of degree elevation and reduction operations are developed in order to support multi-resolution representation and model exchange.
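
For readers unfamiliar with implicit periodic surfaces, a minimal sketch of the form such models take is given below, using the classic Schwarz P surface as the low-degree example; the amplitudes and lattice vectors are illustrative placeholders, not the paper's generalized formulation.

```python
import numpy as np

# Minimal sketch of an implicit periodic surface of the general form
# psi(r) = sum_k A_k * cos(2*pi * (h_k . r)).  With three unit lattice
# vectors and equal amplitudes this reduces to the Schwarz P surface.
A = np.array([1.0, 1.0, 1.0])          # term amplitudes (assumed)
H = np.eye(3)                          # lattice vectors h_k (assumed)

def psi(points):
    """Evaluate the periodic scalar field at an (N, 3) array of points."""
    phases = 2.0 * np.pi * points @ H.T          # projections onto h_k
    return np.cos(phases) @ A                    # weighted sum of cosines

# Sample a unit cell and keep points close to the psi = 0 isosurface.
g = np.linspace(0.0, 1.0, 40)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
on_surface = pts[np.abs(psi(pts)) < 0.05]
print(f"{len(on_surface)} of {len(pts)} samples lie near the isosurface")
```

In this picture, degree elevation can be thought of as appending higher-frequency cosine terms (initially with zero amplitude), loosely analogous to degree elevation of Bézier curves.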

Commentary by Dr. Valentin Fuster
2007;():921-928. doi:10.1115/DETC2007-35377.

Microfluidic devices exhibit high aspect ratios in that their channel widths are much smaller than their overall lengths. High-aspect-ratio geometry leads to an unduly large finite element mesh, making the (otherwise popular) finite element method (FEM) a poor choice for modeling microfluidic devices. An alternate computational strategy is to exploit well-known analytical solutions for fluid flow over the narrow channels of a device, and then either: (a) assume the same analytical solutions for the (wider) cross-flow regions, or (b) exploit these solutions to set up artificial boundary conditions over the cross-flow regions. Such simplified models are computationally far superior to FEM, but do not support the generality or flexibility of FEM. In this paper, we propose a third strategy for exploiting the analytical solutions: (c) directly incorporate them into standard FE-based analysis via model reduction techniques. The advantages of the proposed strategy are: (1) designers can use standard CAD/CAE environments to model, analyze, and post-process microfluidic simulations; (2) well-established dual-weighted residuals can be used to estimate modeling errors; and (3), if desired, one can eliminate the dependency on possibly inaccurate analytical solutions over selected regions. The simplicity, generality, and theoretical properties of the proposed method are inherited from the model reduction process, while its computational efficiency is inherited from the use of analytical solutions.

Commentary by Dr. Valentin Fuster
2007;():929-935. doi:10.1115/DETC2007-35445.

This work compares the results of a structural controls based formulation of a micro-cantilever driven by thermal excitation to measured data and weighs the impact of various factors that can affect cantilever measurement. Understanding of the dynamics of small cantilevers such as those used in atomic force microscopy (AFM) is important for many of its applications, especially those that involve observing a cantilever’s thermally driven vibrations. This work considers factors such as the fluctuation dissipation theorem, which places thermodynamic constraints on the spectrum of the thermal driving force, errors associated with photodiode calibration, and cantilever coatings. The structural controls model, which accounts for hydrodynamic loading as a feedback process, is presented and compared to experimental data. Additionally, a discussion of the model’s use for estimating (calibrating) the cantilever stiffness is given.
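
The stiffness calibration mentioned at the end builds on the textbook equipartition relation k = kB*T / <x^2>; the toy sketch below applies it to synthetic thermal-deflection data. The paper's structural-controls model refines this picture with hydrodynamic loading and photodiode effects, none of which are modeled here, and the stiffness value is assumed.

```python
import numpy as np

# Equipartition calibration of a cantilever from its thermal fluctuations:
# (1/2) k <x^2> = (1/2) kB T  =>  k = kB * T / <x^2>.
kB = 1.380649e-23        # Boltzmann constant, J/K
T  = 295.0               # temperature, K
k_true = 0.05            # "true" stiffness, N/m (assumed)

# Synthetic thermal deflection data with the matching Gaussian statistics.
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(kB * T / k_true), size=200_000)  # deflection, m

k_est = kB * T / np.mean(x**2)
print(f"estimated stiffness: {k_est:.4f} N/m (true {k_true} N/m)")
```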

Commentary by Dr. Valentin Fuster
2007;():937-943. doi:10.1115/DETC2007-35498.

In this paper we formulate and numerically investigate an experimentally based quasi-continuum nonlinear initial-boundary-value problem for the three-field ‘Clapper’ nanoresonator that consistently incorporates the system geometric nonlinearity with nonlinear contributions of both magnetomotive and electrodynamic excitation. The spatio-temporal field equations are then reduced via symmetry and a modal projection to an equivalent quasiperiodically excited, low order, nonlinear dynamical system. The governing parameters of the resulting system are matched with the experimentally measured resonance conditions for small amplitude response. Numerical analysis reveals a complex bifurcation structure of torus doubling culminating with a chaotic strange attractor that exhibits similar features to that previously measured in the ‘Clapper’ experiment.

Commentary by Dr. Valentin Fuster
2007;():945-948. doi:10.1115/DETC2007-35869.

Micro stages employing compliant structures are crucial for precision machinery, as they can achieve fine displacements with nanoscale resolution through deformation. This paper investigates the variations in stiffness and natural frequency due to dimensional tolerances of such a compliant micro stage, which is suspended by four leaf springs and rotates about hinges. The performance of the stage is evaluated by the finite element method for various dimensions to investigate their effects. A series of sensitivity analyses is also performed to investigate how tolerances affect the performance of the stage. The results show that the stiffness and natural frequency of the stage are strongly affected by the dimensions of the leaf springs and the hinges; that is, the tolerances of these dimensions are crucial and must be well designed and strictly controlled. They further show that performance variations due to tolerances are nonlinear but can be properly managed with this approach.
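
A closed-form stand-in can convey the kind of sensitivity the FE study quantifies: for a stage suspended on four fixed-guided leaf springs, k = 4*E*b*t^3 / L^3, so the stiffness scales with the cube of the leaf thickness and length. The dimensions below are assumed for illustration, and the real stage (with hinges) is more complex than this sketch.

```python
# Tolerance sensitivity of a leaf-spring stage via the fixed-guided beam
# formula k = 4 * E * b * t**3 / L**3 (four springs in parallel).
E = 169e9                        # Young's modulus of silicon, Pa (assumed)
b, t, L = 50e-6, 5e-6, 800e-6    # leaf width, thickness, length, m (assumed)

def stiffness(b, t, L):
    return 4.0 * E * b * t**3 / L**3

k0 = stiffness(b, t, L)
print(f"nominal stiffness: {k0:.2f} N/m")

# A 1 % dimensional perturbation -> relative stiffness change.
for name, kp in (("b", stiffness(1.01 * b, t, L)),
                 ("t", stiffness(b, 1.01 * t, L)),
                 ("L", stiffness(b, t, 1.01 * L))):
    print(f"+1% in {name}: dk/k = {100.0 * (kp - k0) / k0:+.2f} %")
```

The cubic terms dominate: a 1% error in thickness or length moves the stiffness by roughly 3%, which matches the abstract's conclusion that leaf-spring tolerances must be strictly controlled.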

Commentary by Dr. Valentin Fuster
2007;():949-955. doi:10.1115/DETC2007-35908.

MEMS parallel-plate tunable capacitors are widely used in areas such as tunable filters, resonators, and RF communication systems for their simple structures, high Q-factors, and small sizes. However, these capacitors have a relatively low tuning range (50%) and exhibit highly sensitive and nonlinear capacitance-voltage (C-V) responses. In this paper, novel designs are developed which have C-V responses with high linearity and tunability and low sensitivity. The designs use the flexibility of the moving plate, which is segmented to provide controllable flexibility. Segments are connected together at end nodes by torsional springs. Under each node there is a step which limits the vertical movement of that node. An optimization program finds the set of step heights that provides the highest linearity. Two numerical examples of three-segmented- and six-segmented-plate capacitors verify that segmentation of the moving plate can considerably improve the linearity without decreasing the conventional tunability. A two-segmented-plate capacitor is then designed for standard processes which cannot fabricate steps of different heights. The new design uses a flexible step (spring) under the middle node. The simulation of a capacitor with a flexible middle step, designed for the PolyMUMPs process, demonstrates a C-V response with high tunability and linearity and low sensitivity.

Commentary by Dr. Valentin Fuster
2007;():957-965. doi:10.1115/DETC2007-34624.

Nano-imprint lithography (NIL) has been under development for over 15 years and has shown great potential for nanopatterning and nanofabrication. In this paper, new ideas for improving current nano-imprinting methods are proposed and preliminary experimental tests are carried out. The proposed nano-imprinting methods are all based on pulsed laser sources, either in the UV or IR region, and can be easily implemented in a roller-based configuration, which is more effective and much faster than conventional planar nano-imprinting methods. First, based on the Laser Assisted Direct Imprinting (LADI) method proposed in 2002, a modified roller-based LADI method is developed by applying a cylindrical quartz roller for mechanical loading as well as for optically focusing a deep UV laser beam into a line. This modification not only enables a continuous LADI process but also utilizes the laser energy more efficiently, making large-area LADI possible. Experimental testing demonstrates an imprinting rate of 3–10 cm²/min. Second, a new nano-imprint lithography based on pulsed infrared laser heating is proposed and demonstrated. It utilizes the partial transparency of silicon crystals in the IR spectrum to heat up the photoresist layer. Possible improvements and applications of this IR-NIL are addressed. Finally, a new method for direct contact printing and patterning of a thin metal film on a silicon substrate, based on the idea of nano-imprinting, is presented. This method combines the effects of loaded contact pressure and pulsed IR laser heating at the metal-film/substrate interface to form a stronger bond between them, thereby completing the direct pattern transfer of the metal film onto the substrate. Good experimental results are observed and possible applications are discussed.

Topics: Lasers , Rollers
Commentary by Dr. Valentin Fuster
2007;():967-975. doi:10.1115/DETC2007-34722.

This article discusses the current status and achievements of roll-to-roll (R2R) technology for large-area nanoscale optical devices developed at MSL/ITRI. First, a single layer of nanostructure on polymer film is designed for anti-reflection purposes by the finite difference time domain (FDTD) method in the visible light spectrum. A conical array with an aspect ratio of around 1, resembling a moth-eye shape and showing superior performance in the optical simulation, has been adopted for the R2R experiments. The development of the R2R process includes roller machine design and fabrication, roller mold design and making, development of the rolling imprint process, and characterization of the rolled devices. In this study, a large-area (200 mm × 200 mm) Ni template was fabricated with DUV exposure, followed by dry etching and electroforming processes, respectively. The template was then bonded onto the roller mold with magnetic film to form the nanostructured roller mold. With this delicate nanostructured roller mold, systematic experiments have been conducted on the home-made roller machine with various parameters, such as linear speed, dose rate, and material modifications. The duplicated nanostructure films show very good anti-reflective optical quality (AR < 1%) and are in good agreement with the theoretical predictions. Moreover, the lifetime of the roller mold has been extended to hundreds of imprints in the UV embossing process.

Commentary by Dr. Valentin Fuster
2007;():977-984. doi:10.1115/DETC2007-35164.

Maskless patterning techniques are increasingly implemented in semiconductor research and manufacturing, eliminating the need for costly masks or masters. Recent application of these techniques to DNA and cell patterning demonstrates the adaptability of maskless processes. In this paper we present a new lithographic process for dynamically reconfiguring and arbitrarily positioning computer-generated patterns through the use of phase holograms. Like current maskless patterning methods, this process can achieve pattern transfer by serially tracing an image onto a substrate. The novelty of our process, however, lies in the ability to rapidly fabricate complex micro/nanoscale structures through single-shot exposure of a substrate.

Commentary by Dr. Valentin Fuster
2007;():985-991. doi:10.1115/DETC2007-35527.

Researchers have demonstrated that imprint lithography techniques have remarkable replication resolution and can pattern sub-5 nm structures. However, a fully capable lithography approach needs to address several challenges in order to be useful in nano-manufacturing applications. This paper presents the key technical challenges as well as the progress achieved to date in these areas. A promising nanoimprint technique previously discussed in the literature is a UV curing technique known as Step and Flash Imprint Lithography (S-FIL). In this article, a variant of the S-FIL process, known as the drop-on-demand UV nano-imprint process, that addresses many of the key manufacturing challenges is discussed. This process has the ability to address challenges such as process repeatability in residual layer control, low defectivity, the ability to fully automate the lithography process, nano-resolution alignment, and the ability to handle pattern density variations. All nano-imprint lithography techniques essentially replicate the patterns present in a master mold (or template). One of the demanding challenges is the creation of this template. Patterning, metrology, inspection, and defect repair issues relevant to template fabrication are discussed. Finally, a brief discussion of near-term practical applications in the areas of photonics, magnetic storage, and CMOS devices is presented.

Commentary by Dr. Valentin Fuster

9th International Conference on Advanced Vehicle and Tire Technologies

2007;():995-1000. doi:10.1115/DETC2007-34030.

Simplified Finite Element Analysis (FEA) truck tire models are developed and used to examine the interaction between the tire and various types of terrain. Soft terrain such as hard soil and dry sand is modeled using solid, elastic-plastic elements. The general trends of vertical and longitudinal forces and normal and shear stress distributions in the soft soil are compared with published data for preliminary validation. The cornering characteristics on both rigid and soft soil terrains are also predicted and compared. Additionally, a detailed FEA truck tire is introduced as the next phase of this work.

Commentary by Dr. Valentin Fuster
2007;():1001-1008. doi:10.1115/DETC2007-34791.

A methodology aimed at identifying the MF-Tyre model coefficients for the steady-state pure cornering condition is presented in this paper. Only measurements carried out on board the vehicle during standard handling manoeuvres (step-steer) are considered by the identification procedure. The proposed methodology consists of three sequential steps. During the first phase, the axle cornering forces are identified through an extended Kalman filter. Then the vertical loads and the slip angles at each tire are estimated. The results of these two steps are passed as input to the last phase, during which the MF coefficients are identified through a constrained minimization approach. The identification procedure has been applied to experimental data collected on an instrumented sports car.
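
A much-simplified sketch of the first step: if the axle forces are modeled as random walks, the single-track model becomes linear in the states and a plain Kalman filter suffices (the paper uses a full extended Kalman filter on a richer model). All vehicle data and noise covariances below are assumed.

```python
import numpy as np

# State: [yaw rate r, front axle force Fyf, rear axle force Fyr].
# Measurements: yaw rate and lateral acceleration ay = (Fyf + Fyr) / m.
m, Iz, a, b = 1500.0, 2500.0, 1.2, 1.4   # mass, yaw inertia, axle arms (assumed)
dt = 0.01

F = np.array([[1.0, dt * a / Iz, -dt * b / Iz],   # discretized yaw dynamics
              [0.0, 1.0, 0.0],                    # Fyf random walk
              [0.0, 0.0, 1.0]])                   # Fyr random walk
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0 / m, 1.0 / m]])
Q = np.diag([1e-6, 1e4, 1e4])                     # process noise (assumed)
R = np.diag([1e-4, 1e-2])                         # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle; z = [r_measured, ay_measured]."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update
    P = (np.eye(3) - K @ H) @ P
    return x, P

x, P = kf_step(np.zeros(3), np.eye(3), np.array([0.1, 2.0]))  # one sample
print("estimated [r, Fyf, Fyr]:", x)
```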

Topics: Vehicles
Commentary by Dr. Valentin Fuster
2007;():1009-1018. doi:10.1115/DETC2007-35428.

This paper presents a study on the influence of the inertia properties of vehicles and their subsystems on both the vehicle dynamics and the powertrain vibrational behaviour. Two different mathematical models have been used, one for the entire vehicle and the other for the powertrain. As a result of the study, an approach for defining suitable tolerances for the measurement of the inertia properties of vehicles and their subsystems is proposed.

Commentary by Dr. Valentin Fuster
2007;():1019-1026. doi:10.1115/DETC2007-35823.

A comprehensive parametric study is carried out on the longitudinal dynamics of a freight train with different loading patterns. A nonlinear time domain model with one locomotive and nine wagons is considered. In another simulation, the train model has two locomotives and eight wagons; in both models, adjacent cars are connected through automatic couplers. The effects of different load distribution patterns on the coupler forces are investigated through a parametric sensitivity study for the ascending, descending, constant, ascending-descending, and descending-ascending cases. In order to investigate how an empty wagon and its position in a train consist may affect the overall longitudinal dynamic behavior of freight trains, a second computer simulation model has been developed. Moreover, the best position for the second locomotive, with the objective of achieving lower longitudinal forces when an additional locomotive is included, is discussed. Finally, an investigation is carried out to determine the types of couplers, and their relevant specifications, that should be installed at different positions of a train consist in order to improve the longitudinal train dynamic behavior.

Commentary by Dr. Valentin Fuster
2007;():1027-1040. doi:10.1115/DETC2007-34062.

Visual inspections of selected semitrailers during routine equipment checks revealed that the kingpin bent in the direction of 180 degrees from the direction that the semitrailer is towed. Confirmation from semitrailer repair facilities found that in some cases the semitrailer’s supporting structure developed unexpected cracks. These cracks were not thought to be age related but were most likely caused by high stresses from unknown high loads. In an effort to determine the forces at the kingpin and fifthwheel, TruckSim® modeling and simulation software was utilized to predict the forces in all three directions during various operating maneuvers. Computer simulations suggest the largest forces are experienced during coupling operations as opposed to severe maneuvering or braking. The development of a Finite Element Analysis (FEA) model of the tractor-semitrailer coupling determined that high coupling speeds would overload the kingpin-fifthwheel structure. The FEA model also allowed researchers to determine that a damping system would lower the forces at the kingpin-fifthwheel interface to the magnitude of forces experienced during normal operations. A literature search found no valid documented tests, and determined the SAE J133 kingpin loading requirements were incorrect.

Topics: Force
Commentary by Dr. Valentin Fuster
2007;():1041-1048. doi:10.1115/DETC2007-34600.

In this study, a new computational approach for parameter identification is proposed based on the application of the polynomial chaos theory. The polynomial chaos method has been shown to be considerably more efficient than Monte Carlo in the simulation of systems with a small number of uncertain parameters. In the new approach presented in this paper, the maximum likelihood estimates are obtained by minimizing a cost function derived from the Bayesian theorem. Direct stochastic collocation is used as a less computationally expensive alternative to the traditional Galerkin approach to propagate the uncertainties through the system in the polynomial chaos framework. The new parameter estimation method is illustrated on a four degree-of-freedom roll plane model of a vehicle in which the vertical stiffnesses of the tires are estimated from periodic observations of the displacements and velocities across the suspensions. The results obtained with this approach are close to the actual values of the parameters even when only measurements with low sampling rates are available. The accuracy of the estimations has been shown to be sensitive to the number of terms used in the polynomial expressions and to the number of collocation points, and thus it may become computationally expensive when a very high accuracy of the results is desired. However, the noise level in the measurements affects the accuracy of the estimations as well. Therefore, it is usually not necessary to use a large number of terms in the polynomial expressions and a very large number of collocation points since the addition of extra precision eventually affects the results less than the effect of the measurement noise. Possible applications of this theory to the field of vehicle dynamics simulations include the estimation of mass, inertia properties, as well as other parameters of interest.
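
The propagation step can be illustrated in one uncertain dimension: a uniformly distributed stiffness mapped through a static response, with Legendre polynomials as the matching chaos basis and Gauss-Legendre points as the collocation grid. This toy problem is not the paper's four degree-of-freedom roll plane model; all values are assumed.

```python
import numpy as np
from numpy.polynomial import legendre

# Direct stochastic collocation for y(xi) = Fz / (k0 + dk*xi), xi ~ U[-1, 1].
k0, dk, Fz = 200e3, 40e3, 4000.0     # nominal stiffness, spread, load (assumed)

def response(xi):                    # static deflection, m
    return Fz / (k0 + dk * xi)

xi, w = legendre.leggauss(8)         # collocation points and quadrature weights

order = 4
coeffs = []
for n in range(order + 1):
    Pn = legendre.Legendre.basis(n)(xi)
    # Projection c_n = <y, Pn> / <Pn, Pn>; with the uniform density 1/2 on
    # [-1, 1], <Pn, Pn> = 1/(2n+1) and <y, Pn> = (1/2) * sum(w * y * Pn).
    coeffs.append(np.sum(w * response(xi) * Pn) / (2.0 / (2 * n + 1)))

# The zeroth chaos coefficient is the mean; compare against Monte Carlo.
mc = response(np.random.default_rng(1).uniform(-1.0, 1.0, 200_000))
print(f"PC mean: {1e3 * coeffs[0]:.4f} mm | Monte Carlo: {1e3 * mc.mean():.4f} mm")
```

Eight collocation points already match the Monte Carlo mean of 200,000 samples here, which is the efficiency argument the abstract makes.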

Topics: Chaos , Polynomials
Commentary by Dr. Valentin Fuster
2007;():1049-1059. doi:10.1115/DETC2007-34602.

This work establishes a semi-empirical wheel-soil interaction model, developed in the framework of plasticity theory and equilibrium analysis, to be used in vehicle dynamics simulations. Vehicle-terrain interaction is a complex phenomenon governed by soil mechanical behavior and tire deformation. The application of soil load bearing capacity theory is used in this study to determine the tangential and radial stresses at the soil-wheel interface. Using semi-empirical data, the tire deformation geometry is determined to establish the drawbar pull, tractive force, and wheel load. To illustrate the theory developed, two important case studies are presented: a rigid wheel and a flexible tire on deformable terrain; the differences between the two implementations are discussed. The outcome of this work shows promising results, indicating that the modeling methodology presented could form the basis of a three-dimensional off-road tire model. In such a model, the traction behavior should include shear forces arising from surface shear with the soil as well as the bulldozing effect during turning maneuvers.
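
As a flavor of semi-empirical wheel-soil modeling, the sketch below uses Bekker's classic pressure-sinkage relation to find the static sinkage of a rigid wheel; note that the paper's own formulation is built on bearing capacity theory, which is not reproduced here. The soil constants are typical published values for dry sand, and the wheel data are assumed.

```python
import numpy as np
from scipy.optimize import brentq

# Bekker pressure-sinkage relation: p = (kc / b + kphi) * z**n.
kc, kphi, n_exp = 0.99e3, 1528.43e3, 1.1   # dry-sand constants (typical values)
b, R, W = 0.3, 0.5, 8000.0                 # wheel width, radius (m), load (N)

def vertical_load(z0):
    """Vertical load supported by a rigid wheel at sinkage z0, from the
    radial Bekker pressure integrated over the contact arc."""
    theta0 = np.arccos(1.0 - z0 / R)       # contact entry angle
    theta = np.linspace(0.0, theta0, 400)
    z = R * (np.cos(theta) - np.cos(theta0))
    p = (kc / b + kphi) * z**n_exp
    dtheta = theta[1] - theta[0]
    return b * R * np.sum(p * np.cos(theta)) * dtheta

# Static sinkage: the sinkage at which the soil reaction balances the load.
z_static = brentq(lambda z: vertical_load(z) - W, 1e-6, 0.4 * R)
print(f"predicted static sinkage: {100.0 * z_static:.2f} cm")
```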

Commentary by Dr. Valentin Fuster
2007;():1061-1065. doi:10.1115/DETC2007-34740.

This paper proposes an ANN-based method for monitoring vehicle behavior. Considering the driver-vehicle-environment control loop, a driver must perceive the environment and the vehicle behavior by processing information received from the environment and feedback from the vehicle. The precision of the driver's perception is the critical element in this case. In this study, an ANN is applied for perception and prediction of the vehicle's dynamic performance. Several relevant parameters from the vehicle and the environment, such as accelerator pedal travel and road grade, serve as inputs for the prediction. After training the network with measured data from a test vehicle, the network is used for prediction of the driving speed. The comparison of the measured driving speed with the predicted speed can indicate the actual performance of the vehicle (see Figure 1).
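
A minimal sketch of the monitoring idea: train a small feedforward network to map pedal travel and road grade to speed, then compare predictions against measurements. The data here are synthetic (a crude linear relation plus noise), standing in for the paper's test-vehicle measurements; network size and inputs are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic "measurements": speed rises with pedal travel, falls with grade.
rng = np.random.default_rng(0)
pedal = rng.uniform(0.0, 1.0, 2000)                  # pedal travel, 0..1
grade = rng.uniform(-0.06, 0.06, 2000)               # road grade
speed = 40.0 * pedal - 300.0 * grade + rng.normal(0.0, 1.0, 2000)  # km/h

X = np.column_stack([pedal, grade])
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000,
                   random_state=0).fit(X, speed)

# A healthy vehicle should track the prediction; a large residual hints at
# degraded performance.
measured = 20.0                                      # km/h, example sample
predicted = net.predict([[0.6, 0.01]])[0]
print(f"predicted {predicted:.1f} km/h, measured {measured:.1f} km/h, "
      f"residual {measured - predicted:+.1f}")
```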

Commentary by Dr. Valentin Fuster
2007;():1067-1074. doi:10.1115/DETC2007-34753.

A Multi-Body vehicle model aimed at reproducing the vertical dynamics of a light-duty commercial vehicle is presented in this paper. In order to properly model the vehicle and to identify some unknown structural parameters, experimental tests have been carried out on a four-post test rig and on ordinary roads.

Commentary by Dr. Valentin Fuster
2007;():1075-1081. doi:10.1115/DETC2007-34190.

Aiming at understanding the structural integrity of a concentrating photovoltaic (CPV) module configuration, Finite Element (FE) thermal stress analysis is carried out in this investigation. Nonlinear viscoplastic analysis, using the temperature profile of the CPV cell fatigue test, is performed to evaluate the structural strength and subsequently predict the life of a CPV module. The results reveal that the maximum characteristic stresses of the PV cell components and heat sink are below the allowable strength of the corresponding materials under both steady-state and overnight idle conditions. Critical locations on the solder that are potentially susceptible to structural failure after a few thousand thermal cycles, due to excessive shear stress, are identified. A rough estimate of the module life is provided and compared with the fatigue test. This investigation provides firsthand understanding of the structural integrity of CPV modules and is thus beneficial for the solar energy community.

Commentary by Dr. Valentin Fuster
2007;():1083-1090. doi:10.1115/DETC2007-34271.

This paper presents gain scheduling of the control strategy for parallel hybrid electric vehicles based on traffic conditions. An electric assist control strategy (EACS) is employed with different parameters for different traffic conditions. The parameters of the EACS are optimized and scheduled for the different traffic conditions of the TEH-CAR driving cycle, a cycle developed from experimental data collected under real traffic conditions in the city of Tehran. The objective of the optimization is to minimize fuel consumption and emissions over the driving cycle, while enhancing or maintaining the driving performance characteristics of the vehicle. A genetic algorithm (GA) is used to solve the optimization problem, and the constraints are handled using penalty functions. The results of the computer simulation show the effectiveness of the approach and a reduction in fuel consumption and emissions, while ensuring that vehicle performance is not sacrificed.
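
The constraint-handling mechanism can be shown with a toy real-coded GA: a penalty term added to the objective steers the population away from infeasible parameter sets. Both the surrogate fuel objective and the constraint below are invented for illustration and bear no relation to the actual EACS calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

def fuel(x):                        # surrogate fuel consumption (assumed)
    return (x[0] - 0.3)**2 + 2.0 * (x[1] - 0.7)**2

def violation(x):                   # e.g. an acceleration-time limit (assumed)
    return max(0.0, x[0] + x[1] - 0.8)

def fitness(x):                     # penalized objective (to minimize)
    return fuel(x) + 50.0 * violation(x)**2

pop = rng.uniform(0.0, 1.0, (40, 2))          # 40 individuals, 2 genes
for gen in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]            # truncation selection
    children = parents[rng.integers(0, 20, size=40)]  # clone parents
    # Gaussian mutation, clipped back into the parameter bounds.
    pop = np.clip(children + rng.normal(0.0, 0.05, (40, 2)), 0.0, 1.0)

best = min(pop, key=fitness)
print("best toy EACS parameters:", np.round(best, 3))
```

Because the unconstrained optimum violates the toy constraint, the quadratic penalty pulls the best individuals onto the constraint boundary, which is the behavior penalty functions are meant to produce.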

Commentary by Dr. Valentin Fuster
2007;():1091-1112. doi:10.1115/DETC2007-35541.

Hybrid vehicle technology is beginning to make a significant mark in the automotive industry, most notably through the Toyota Prius THS-II and its one-mode technology, but also through recently introduced two-mode architectures. GM-Allison, Renault, and the Timken Company have attempted to capitalize on the advantages that the series-parallel configuration confers on the Prius over simpler series and parallel architectures, while also improving the design by allowing the powertrain configuration to physically shift and operate in two different modes depending on the driving load. This work provides an overview of the state of the art in two-mode hybrid vehicle architectures and demonstrates the performance of this technology in comparison to the market-leading Toyota Prius one-mode hybrid technology and conventional ICE technology. Simulations in the NREL ADVISOR® software compare the performance of the one- and two-mode architectures against a parallel-full design and the ICE baseline for four different drive cycles and a vehicle with varying weight that simulates a commercial vehicle application. A configuration that is a variation of those designed by GM-Allison was chosen as the representative of the two-mode architectures. The performance metric was fuel economy, measured over the course of the following drive cycles: (1) Urban Dynamometer Driving Schedule for Heavy Duty Vehicles (UDDSHDV); (2) New York City Truck (NYCT); (3) City-Suburban Heavy Vehicle Route (CSHVR); and (4) Highway Fuel Economy Test (HWFET). The vehicle model uses a module developed in-house for a Kenworth T400 truck with a payload that varies from empty to completely full. The results demonstrate that the two-mode architecture provides significantly improved performance over the conventional non-hybrid design and performance comparable to that of the parallel-full hybrid design. Furthermore, the one-mode design is shown to be sub-optimal for this vehicle type. Development and optimization of the control strategy, which is the direction of the current research, should allow for additional improvement in fuel economy; optimization of vehicular components could result in improvements in acceleration ability, gradeability, and top speed, metrics in which the hybrid currently lags behind the conventional powertrain vehicle. The study confirms that the two-mode architecture presents unique advantages for constantly changing driving cycles and vehicle payloads and represents the future of hybrid vehicle technology.

Commentary by Dr. Valentin Fuster
2007;():1113-1124. doi:10.1115/DETC2007-35555.

Battery powered electric bicycles and scooters, replacing the heavily polluting scooters with two-cycle internal combustion engines, provide zero-emission transportation for many parts of the world. Annual global sales of electric bicycles have risen from 36,000 in 1993 to over 500,000 in 1999 and to multi-millions today. To facilitate the development of new electric bicycles, a computerized electric bicycle testing facility has been created. Standardized testing cycles for quantitatively measuring the performance of electric bicycles have been developed. Testing results of three representative electric bicycles using the newly introduced electric bicycle testing methods and testing facility are presented. The development of a low-cost, fully Adaptable Electric Bicycle Power System (AEBPS) designed to be quickly adapted to a regular bicycle is also presented. The AEBPS can be attached to a regular bicycle in less than ten minutes, and removed in under five minutes. Performance of a converted bicycle using the AEBPS is evaluated and compared with representative commercial electric bicycles. The work forms the foundation for systematically evaluating different electric bicycle designs and for carrying out design optimization of electric bicycle power systems suitable to different markets and needs.

Commentary by Dr. Valentin Fuster
2007;():1125-1139. doi:10.1115/DETC2007-35251.

The paper presents an innovative dummy conceived to provide an effective tool for objective vehicle ride comfort evaluation. The first part of the research includes experimental tests on instrumented seats for evaluating the vertical (cushion) and longitudinal (backrest) acceleration between the vehicle seat and the seated human subject. Experiments have been performed by using a vibrating table fitted with a vehicle seat and by seating the subjects directly in vehicles (cars and light trucks) running on a test track that includes uneven road and different obstacles. Human subjects have been chosen in order to obtain high variability in anthropometric features (height, weight, gender, age). Several tests have also been performed with the same subject submitted to the same excitation in order to investigate inter-subject and intra-subject variability. During the study, different seats have been compared. From the acquired data, a mathematical model of the human subject + seat system has been derived and numerically validated by minimizing the error between the measured and the computed accelerations. The corresponding mechanical device, the MaRiCO dummy, has been built. The device is fully adjustable in order to simulate the vibrational behaviour of different human subjects. Particular attention has been devoted to the construction of the springs and of the magnetic damper, to reduce as much as possible the friction between the moving components. The dummy rests on the seat by means of special elements that, thanks to their compliance and conformation, act as the thighs and back of a human being. An experimental validation of the dummy has been performed. The device, suitably tuned and seated with the same posture as the corresponding human subject, is able to reproduce the acceleration between the subject and both the cushion and the backrest.

Commentary by Dr. Valentin Fuster
2007;():1141-1149. doi:10.1115/DETC2007-35515.

In this paper a new concept of semitrailer for hydrogen carriers is presented. It has been developed by means of numerical calculations carried out with the finite element method. This new semitrailer design incorporates several new functions. The first is a rollover resistant vehicle. The absence of a specific European regulation for this type of vehicle has to be taken into account; as a result, earlier vehicles were not sufficiently rollover resistant. Nevertheless, this development process has taken into account the European rollover regulation for large passenger vehicles, “Regulation n° 66 of Geneva”, applied in this case to a vehicle with a much higher mass and a higher centre of gravity than a bus, and therefore a higher kinetic energy in case of rollover. The second characteristic of this new semitrailer design is its lightness. A mass reduction of 8500 kg compared with former designs has been achieved by lightening the structure that supports the hydrogen cylinders as well as the hydrogen cylinders themselves. The numerical models developed have been solved by explicit integration of the dynamic balance equations. Very complex finite element models have been developed in order to include all geometric details. In these models, the elastic-plastic curves of the materials involved have been included, as well as their variation with strain rate. Nonlinear effects from contacts and from the large strains that occur have also been taken into account.

Topics: Hydrogen
Commentary by Dr. Valentin Fuster
2007;():1151-1159. doi:10.1115/DETC2007-35576.

In this paper, a new concept of rear bumper for semitrailers, developed by means of numerical simulation, is presented. A semitrailer bumper is a substructure of great importance to the whole vehicle: it reduces the effect of a crash impact of another vehicle against the rear part of the semitrailer, protecting both the vehicle and its passengers from impact effects. The design of this kind of structure must show suitable behavior under the relevant crash and structural conditions (bumper material, load cases, boundary conditions, impact velocity, etc.). In this case, Directive 79/490/CEE specifies the conditions that a rear protective device must satisfy in order to be approved by an official certifying body. The bumper design has been carried out by means of numerical simulation tools based on the Finite Element Method (FEM). This method provides results in terms of strain and stress of the analyzed structures under several load cases and boundary conditions, considering several materials. A new light aluminum bumper structure fulfilling the above directive is obtained. Finally, a light bumper prototype has been built based on the numerical results and tested according to the homologation conditions specified in Directive 79/490/CEE. Lecitrailer S.A., a Spanish semitrailer manufacturer, has now incorporated this new bumper structure into their vehicles.

Topics: Design
Commentary by Dr. Valentin Fuster
2007;():1161-1168. doi:10.1115/DETC2007-35659.

In this paper, new advances in the simulation of car frontal crash structures are presented. Experimental results have been used to calibrate constitutive material models and simulation procedures to obtain a more accurate representation of material behavior under crash loading. These advanced computational techniques are then applied to crash simulations of components. The front longitudinal beam studied in this paper is analyzed for frontal crash. Qualitative results on specific energy absorption, as well as the absolute energy absorbed by the structure, are especially relevant. Three designs are then proposed as potential solutions for the front side rail. Through simulation of their impact into a wall, the designs are compared on the basis of crush force and specific energy absorption, and a preferred design is chosen.

Topics: Design
Commentary by Dr. Valentin Fuster
2007;():1169-1178. doi:10.1115/DETC2007-34041.

This paper presents an application of a model-based adaptive mechanism for skyhook control, developed from the gradient search method found in LMS (least mean square) adaptive filters and other optimization routines. Since plant and operating variations can worsen the vibration isolation performance of traditionally implemented skyhook control, the rationale for embedding an adaptive mechanism into skyhook control is to achieve more robust and consistent vibration control performance. The adaptation introduced here helps find optimal skyhook gains under broadband random vibrations. In this paper, the formulation of this model-based adaptive algorithm is derived for a one-DOF base-excited vibration system with magneto-rheological (MR) dampers. A nonparametric model is adopted to represent the applied MR damper. Based on the dynamic model derived for the studied MR suspension, the algorithm is elaborated, and the stability of the adaptive semiactive system is briefly discussed. Finally, a simulation study demonstrates the effectiveness of the adaptive skyhook control.

Commentary by Dr. Valentin Fuster
2007;():1179-1183. doi:10.1115/DETC2007-34336.

The dynamic phenomena of a half railway carriage model with 17 degrees of freedom are investigated. The model is set to run on a straight track in the speed range between 85 and 165 m/s. Shen-Hedrick-Elkins theory is used to describe the nonlinear relation between creepage and creep forces. A new method treating flange contact as a multi-body collision is also presented. The bifurcation diagram and the critical speed for rail jumping are obtained and further validated by simulation. Finally, the phase plots of the flange contact are discussed.

Commentary by Dr. Valentin Fuster
2007;():1185-1192. doi:10.1115/DETC2007-34534.

This paper presents vehicle stability improvement by active front steering (AFS) control. First, a mathematical model of the steering system incorporating vehicle dynamics is analyzed based on the structure of the AFS system. Then a feedback controller with linear quadratic regulator (LQR) optimization is proposed. In the controller, the assist motor is controlled by a combination of feedforward and feedback methods, with the yaw rate and the sideslip angle as feedback variables. Due to the difficulties associated with measuring the vehicle sideslip angle, a state observer is designed to provide a real-time estimate for feedback. Finally, the system is simulated in MATLAB. The results show that vehicle handling stability is improved with AFS control, demonstrating the effectiveness of the control system.
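
A minimal sketch of the LQR design step on a linear single-track (bicycle) model, with state [sideslip angle, yaw rate] and the corrective front steer angle as input; the feedforward path and the sideslip observer described in the abstract are omitted, and all vehicle parameters and weights are assumed.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linear single-track model: x = [beta, r], u = front steer angle.
m, Iz, a, b, vx = 1500.0, 2500.0, 1.2, 1.4, 25.0   # vehicle data (assumed)
Cf, Cr = 80e3, 80e3                                # cornering stiffnesses, N/rad

A = np.array([[-(Cf + Cr) / (m * vx), (b * Cr - a * Cf) / (m * vx**2) - 1.0],
              [(b * Cr - a * Cf) / Iz, -(a**2 * Cf + b**2 * Cr) / (Iz * vx)]])
B = np.array([[Cf / (m * vx)],
              [a * Cf / Iz]])

Q = np.diag([10.0, 1.0])     # weight sideslip more heavily (assumed)
R = np.array([[1.0]])        # steering effort weight (assumed)

P = solve_continuous_are(A, B, Q, R)       # Riccati solution
K = np.linalg.solve(R, B.T @ P)            # feedback law u = -K x
print("LQR feedback gain K:", K.round(4))
```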

Topics: Stability , Vehicles
Commentary by Dr. Valentin Fuster
2007;():1193-1199. doi:10.1115/DETC2007-34564.

By observing the lateral vehicle dynamics, and in particular the sideslip angle, the detection of critical driving situations is possible. Thus, an adaptive observer for sideslip angle estimation is proposed in this paper. According to the proposed methodology, the sideslip angle is estimated as a weighted mean of the result provided by a kinematic formula and the one obtained using a state observer based on a linear single-track vehicle model; the tire cornering stiffnesses are updated during the transient phase of a maneuver in order to take into account nonlinearities and changing adhesion conditions between the tires and the road.

Commentary by Dr. Valentin Fuster
2007;():1201-1210. doi:10.1115/DETC2007-34603.

Magnetorheological (MR) fluid dampers have the capability of changing their effective damping force depending on the current input to the damper. A number of factors in the construction of the damper, as well as the properties of the fluid and the electromagnet, create a dynamic response of the damper that cannot be fully described by a static model dependent on current and velocity. In this paper, a static deterministic model of the damper force response is compared to a time-dependent dynamic model. Data were collected by providing a normally distributed random signal for velocity and a uniformly distributed random signal for current to a Lord Rheonetic seat damper; the signal distributions were chosen so that dynamic effects were not artificially reduced. The parameters of the static and dynamic models were found by minimizing the error in force output. The errors of the models are then compared, as well as the probability distributions of both models and the original data.
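
A sketch of fitting a static model of the kind the paper benchmarks: force as a current-scaled viscous term plus a current-scaled Coulomb-like term. The model form and the synthetic "measurements" below are assumptions, not the Lord seat-damper data; only the random velocity/current excitation mirrors the paper's data-collection scheme.

```python
import numpy as np
from scipy.optimize import curve_fit

def static_model(X, c0, c1, f0, f1, vref):
    """Current-scaled viscous term plus current-scaled Coulomb-like term."""
    v, i = X
    return (c0 + c1 * i) * v + (f0 + f1 * i) * np.tanh(v / vref)

rng = np.random.default_rng(3)
v = rng.normal(0.0, 0.2, 3000)        # velocity: normally distributed, m/s
i = rng.uniform(0.0, 1.0, 3000)       # current: uniformly distributed, A
f = (static_model((v, i), 800.0, 400.0, 150.0, 600.0, 0.05)
     + rng.normal(0.0, 10.0, 3000))   # synthetic "measured" force, N

# Least-squares fit of the static model parameters to the force data.
popt, _ = curve_fit(static_model, (v, i), f, p0=[500.0, 100.0, 50.0, 100.0, 0.1])
rmse = np.sqrt(np.mean((f - static_model((v, i), *popt))**2))
print("fitted parameters:", np.round(popt, 1), f"| RMSE = {rmse:.1f} N")
```

Against real damper data, the residual of such a static fit would retain the hysteretic (time-dependent) component, which is exactly what a dynamic model is meant to capture.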

Commentary by Dr. Valentin Fuster
2007;():1211-1222. doi:10.1115/DETC2007-34679.

The accuracy of recent vehicle dynamics simulation technology, represented by multi-body simulations along with reliable tire models, has progressed remarkably and provides reasonable results not only for conventional passive vehicles but also for advanced active vehicles equipped with electronic components; however, for advanced vehicle applications with complex active systems, this complexity leads to long simulation times. On the other hand, even though simple numerical vehicle models such as single-track, two-track, and dozen-degree-of-freedom (dof) models provide less information than multi-body models, they are still valued in specific applications, particularly those related to the development of active systems. The advantage of these numerical models lies in the simulation platform, namely the Matlab/Simulink environment, which is well suited to modeling electronic components. In this paper, an 18-dof vehicle model is proposed for the development of a type of active suspension named Variable Camber, which has an additional degree of freedom in the camber angle direction; a description of the models and some preliminary results are reported, while the control strategy for the variable camber suspension will be published separately ([3]). The model can also reproduce a passive vehicle with a passive suspension; all the necessary dimensions, parameters, and physical properties are derived from a specific multi-body full vehicle model which has been fully validated against a real vehicle on the track. As for the tire model, Magic Formula 5.2 has been implemented in both the numerical and the multi-body vehicle models so that the same tire model can be applied.

Topics: Vehicles
Commentary by Dr. Valentin Fuster
2007;():1223-1232. doi:10.1115/DETC2007-34806.

The most common automotive drivelines transmit the engine torque to the driven axle through the differential. Semi-active versions of this device ([10], [11], [12]) have recently been conceived to improve vehicle handling at the limit and in particular maneuvers. All these differentials are based on the same structural hypothesis as the passive one, but they try to manipulate the vehicle dynamics by controlling a quantity that was fixed in the passive mechanism. In this way it is possible to control the amount of the stabilizing torque, but not to apply it in both directions. This is a major drawback of the semi-active differential, because complete yaw control cannot be achieved. Active differentials [17], on the other hand, can apply the best yaw moment in terms of both amplitude and sign. Although classic active differentials are greatly versatile, they cannot (or can hardly) reproduce an extreme torque distribution such as 0–100% in the absence of a μ-split condition, because there is always a bias value, due to the presence of a gear, that has to be decreased by active clutch action, and these clutches are often unable to do so. The most innovative device presented in recent years is the Super Handling-All Wheel Drive (SH-AWD) by Honda ([2], [3], [4], [5]). It can freely distribute the drive torque to the desired wheel, keeping one of them in a free-rolling condition if necessary. This flexibility in the lateral torque distribution can greatly increase vehicle manoeuvrability. The author has carried out a feasibility study to evaluate the handling improvement due to such a device on a high-performance rear wheel drive vehicle normally equipped with a semi-active differential.

Topics: Torque , Vehicles
Commentary by Dr. Valentin Fuster
2007;():1233-1238. doi:10.1115/DETC2007-34932.

The paper deals with the pavement damage potential of long combination vehicles equipped with a tandem C-dolly. Due to its double-hitch bar design, the tandem C-dolly enhances the lateral stability of articulated vehicles; however, its single articulation could create significant longitudinal load transfer between the coupled bodies and pose higher pavement damage. The pavement damage potential of vehicles equipped with a tandem C-dolly is compared with that of vehicles equipped with a standard tandem A-dolly. For pavements with a roughness lower than 2 mm/m, the effect of tandem design is only marginal. For pavements with roughness greater than 2 mm/m, results suggest an optimum speed (95 km/h) at which the C-dolly is 1% less damaging than the A-dolly. However, for the other two speeds evaluated (75 and 115 km/h), the C-dolly is up to 12.1% less road-friendly than the A-dolly. Such increases in road damage potential are significant, and an experimental study should be performed to validate the results.

Topics: Modeling , Roads
Commentary by Dr. Valentin Fuster
2007;():1239-1245. doi:10.1115/DETC2007-35144.

This paper discusses the development of an improved design for a tire-coupled quarter-vehicle testing rig. The use of indoor-based simulation tools has become a mainstay in vehicle testing for the automotive and motorsports industries. Testing on a quarter-vehicle rig provides a cost effective means for making accurate and repeatable measurements that enables the user to perform a relatively large number of tests in a short amount of time. A review of current quarter-vehicle test platforms, both commercially available and in academic research labs, indicated that many desired functional requirements were not available. The goal of this effort was to develop a new quarter-vehicle rig with expanded capabilities that are not simultaneously present in the current state-of-the-art. The desired functional requirements are: accommodation of a wide range of actual vehicle suspension components including the tire and wheel, weight transfer due to braking and acceleration, aerodynamic forces, and vehicle roll. The test rig was constructed and tested using a Porsche 996 suspension. The suspension dynamics were characterized by fitting the parameters of a linear dynamic model to experimental response data from the rig. The design and performance of this new quarter-vehicle test rig is shown to be a cost effective solution for meeting the broad range of functional requirements.

Topics: Vehicles
Commentary by Dr. Valentin Fuster
2007;():1247-1252. doi:10.1115/DETC2007-35153.

This paper presents the design and validation of an experimental test rig for direct visual and analytic comparison of fully active and semi-active suspension control algorithms using electromagnetic actuation. A linear mathematical model simulation of the test rig is presented, as well as experimental validation test results comparing passive against fully active and semi-active skyhook control algorithms. A variety of fully active and semi-active vibration control methods have been developed for primary suspensions. Our goal is to provide a development platform in which new algorithms can easily be implemented, in a cost effective manner on a physical system, and compared against existing algorithms to determine the performance characteristics of each. This platform will provide a standard of evaluation in which multiple control algorithms can be tested, and will help to simplify the design process.

Commentary by Dr. Valentin Fuster
2007;():1253-1260. doi:10.1115/DETC2007-35405.

The main active suspension devices (Citroën Hydractive, Mercedes ABC, BMW Dynamic Drive, CDC systems in general, etc.) produced in recent years in the automotive industry try to manage the stiffness at the ground, and ultimately the handling, by controlling the spring and/or antiroll bar stiffness. From a theoretical point of view, such a solution is not the only way to control the tire vertical force. If we consider the vehicle dynamics equations obtained with the Lagrangian approach, the tire vertical force is the product of the physical element characteristic (spring, damper, antiroll bar) and a purely kinematic term, the Jacobian. The authors conceived an innovative device, a variable-kinematics suspension, in which the spring and damper are passive and the geometry is active. In this way it is possible to control the suspension effects at the ground by modifying the Jacobian rather than the stiffness and/or damping values directly. While in a passive suspension the vertical stiffness at the ground is a function mainly of the hardpoint positions and the vehicle height is a function of the spring preload, in a variable-kinematics suspension both the stiffness at the ground and the vehicle height become functions of the kinematic actuator as well. This further dependence allows control of the tire vertical force, the static ride height, and ultimately the overall handling. A stiffness variation of ±50% and a height variation of ±20 mm become reachable targets, so that an antiroll bar is no longer needed, reducing the vehicle mass. Moreover, height control could be useful for cars with well-designed and sophisticated aerodynamics (e.g., a flat undertray to increase ground effect). Because the Jacobian depends on the geometry, the suspension hardpoint locations have been optimized to meet all the targets as well as possible. The authors developed a multi-body model in Matlab/SimMechanics and linked it to a genetic algorithm in order to find the best kinematic configuration. After this development step, a thorough analysis of the potential of this kind of suspension has been performed, comparing the handling behavior in ISO maneuvers to that of the same vehicle equipped with a standard passive suspension.

Commentary by Dr. Valentin Fuster
2007;():1261-1270. doi:10.1115/DETC2007-35658.

This paper extends the results for active suspensions obtained by Chalasani in 1986 by evaluating the potential of semi-active suspensions for improving the ride performance of passenger vehicles. Numerical simulations are performed on a seven-degree-of-freedom full vehicle model in order to confirm the general trends found for a quarter-car model used by the authors in an earlier study. This full-car model is used to study not only the heave but also the pitch and roll motions of the vehicle for periodic and discrete road inputs. The behavior of a semi-actively suspended vehicle is evaluated using the hybrid control policy and compared to the behavior of a passively suspended vehicle. The results obtained with the periodic inputs indicate that the quarter-car model is a good approximation not only of the heave motion of a full-vehicle model but also of the pitch and roll motions, since both are very similar to the heave motion. The results obtained with the discrete road input show that, for the example used in this study, the hybrid configuration clearly yields better results than the passive configuration when the objective is to minimize different deflections, angles, and accelerations at the same time.

Commentary by Dr. Valentin Fuster
2007;():1271-1277. doi:10.1115/DETC2007-35689.

An analytical study that evaluates the response characteristics of a two-degree-of-freedom quarter-car model using passive and semi-active dampers is provided as an extension to the results published by Chalasani for active suspensions. The behavior of a semi-actively suspended vehicle is evaluated using the hybrid control policy and compared to the behavior of a passively suspended vehicle. The relationship between vibration isolation, suspension deflection, and road-holding is studied for the quarter-car model. Three performance indices are used as measures of vibration isolation (which can be seen as a comfort index), suspension travel requirements, and road-holding quality. These indices are based on the mean square responses to a white noise velocity input for three motion variables: the vertical acceleration of the sprung mass, the deflection of the suspension, and the deflection of the tire, respectively. The results of this study indicate that the hybrid control policy yields better comfort than a passive suspension, without reducing the road-holding quality or increasing the suspension displacement for typical passenger cars.
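
A minimal quarter-car sketch of the hybrid policy: the desired force is a blend of skyhook and groundhook targets, clipped to what a dissipative damper can actually supply. Parameters are generic passenger-car values (assumed), and the road input is a single sine rather than the white noise used in the study.

```python
import numpy as np

ms, mu = 300.0, 40.0           # sprung / unsprung mass, kg (assumed)
ks, kt = 20e3, 180e3           # suspension / tire stiffness, N/m (assumed)
c_min, c_max = 300.0, 3000.0   # realizable damping range, N.s/m (assumed)
c_sky, c_gnd, alpha = 2500.0, 2500.0, 0.5   # hybrid policy settings

dt = 1e-3
xs = vs = xu = vu = 0.0
for k in range(int(5.0 / dt)):
    zr = 0.02 * np.sin(2.0 * np.pi * 2.0 * k * dt)   # 2 Hz road input, m
    vrel = vs - vu
    # Hybrid target: blend of skyhook (sprung) and groundhook (unsprung).
    f_target = alpha * (-c_sky * vs) + (1.0 - alpha) * (c_gnd * vu)
    # Semi-active clipping: the damper can only oppose relative motion.
    if f_target * vrel < 0.0:
        c = np.clip(-f_target / vrel, c_min, c_max)
    else:
        c = c_min
    f_d = -c * vrel                                  # force on sprung mass
    a_s = (-ks * (xs - xu) + f_d) / ms
    a_u = (ks * (xs - xu) - f_d - kt * (xu - zr)) / mu
    vs += a_s * dt; xs += vs * dt
    vu += a_u * dt; xu += vu * dt

print(f"sprung-mass displacement after 5 s: {1000.0 * xs:+.2f} mm")
```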

Commentary by Dr. Valentin Fuster
2007;():1279-1286. doi:10.1115/DETC2007-35809.

The longitudinal dynamics of railway vehicles is studied in this paper using a nonlinear time-domain model. The model includes a train consist containing one locomotive, nine wagons, and nine automatic couplers between them. The effects of different parameters (such as the stiffness and damping of the automatic couplers, train speed, and train acceleration during both accelerating and braking) on the longitudinal train dynamics are investigated in a parametric study. It is found that an increase in the train's operational speed has no effect on the maximum tractive forces, while it increases the maximum pressing forces as well as the RMS (root mean square) value of the coupler forces. Higher acceleration during accelerating leads to higher maximum tractive and pressing forces and also a higher RMS value of the coupler forces. Although an increase in deceleration during braking results in higher pressing coupler forces, it has no effect on the maximum tractive force.
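
A minimal sketch of this kind of model: a chain of masses joined by linear spring-damper couplers, driven through a traction ramp followed by braking, with the coupler tension and pressing extremes recorded. The paper's couplers are nonlinear, and all constants below are assumed.

```python
import numpy as np

n = 10
m = np.array([120e3] + [80e3] * (n - 1))   # locomotive + nine wagons, kg (assumed)
kc, cc = 5e6, 1e5                          # coupler stiffness N/m, damping N.s/m
drag = 60.0                                # crude per-car linear drag, N.s/m

dt = 1e-3
x, v = np.zeros(n), np.zeros(n)
max_tension = max_pressing = 0.0
for step in range(int(60.0 / dt)):
    t = step * dt
    F_trac = min(200e3, 20e3 * t) if t < 40.0 else -150e3  # traction, then braking
    # Coupler forces, positive in tension (nine couplers for ten cars).
    f = kc * (x[:-1] - x[1:]) + cc * (v[:-1] - v[1:])
    max_tension = max(max_tension, f.max())
    max_pressing = min(max_pressing, f.min())
    a = np.empty(n)
    a[0] = (F_trac - f[0] - drag * v[0]) / m[0]            # locomotive
    a[1:-1] = (f[:-1] - f[1:] - drag * v[1:-1]) / m[1:-1]  # middle wagons
    a[-1] = (f[-1] - drag * v[-1]) / m[-1]                 # last wagon
    v += a * dt
    x += v * dt

print(f"max coupler tension:  {max_tension / 1e3:7.1f} kN")
print(f"max coupler pressing: {-max_pressing / 1e3:7.1f} kN")
```

Even this toy chain reproduces the qualitative split the abstract reports: the traction phase sets the tension extremes, while the braking phase sets the pressing extremes.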

Commentary by Dr. Valentin Fuster
